RDF, OData, and GData are data protocols that differ in their logical models, physical implementations, and intent. RDF uses a graph/EAV logical model, is intended for data syndication and linking on the web, and is openly extensible through RDFS/OWL. OData uses a graph/EAV model grounded in AtomPub and EDM, is intended for data publishing and syndication, and reuses some Microsoft types and namespaces. GData, the protocol behind Google services, has an unclear logical model and is intended for publishing Google cloud data through AtomPub/JSON formats only.
XML Technologies for RESTful Services Development - ruyalarcon
This document discusses XML technologies for developing RESTful services. It begins by outlining EMC's motivation for establishing an integration architecture based on REST principles. The document then covers key REST principles like identifying resources with URIs and using a uniform interface. It describes implementing RESTful services using JAX-RS and binding operations to XProc pipelines. The framework supports developing domain-specific RESTful APIs that address all four REST principles through the use of XML, XQuery, XSLT, and an XML database. Hypermedia is added through XSLT transformations, and the framework measures up well against JAX-RS in addressing REST principles.
DC-2008 Tutorial 3 - Dublin Core and other metadata schemas - Mikael Nilsson
The document discusses metadata standards and interoperability. It provides an overview of Dublin Core and other metadata schemas. It describes how Dublin Core terms are defined both for human understanding through textual definitions, as well as machine understanding through formal semantics expressed in RDF. This allows metadata using Dublin Core terms to be combined and processed in an interoperable way on the Semantic Web.
The document discusses using linked data to solve problems of identity and data access/integration. It describes linked data as data that is accessible over HTTP and implicitly associated with metadata. It then outlines problems around identity, such as repeating credentials across different apps/enterprises. The solution proposed is assigning individuals HTTP-based IDs and binding IDs to certificates and profiles. Problems of data silos across different databases and apps are also described, with the solution being to generate conceptual views over heterogeneous sources using middleware and RDF.
The Virtuoso product family provides a virtual database engine that can manage data in multiple formats including SQL, RDF, XML and free text. It offers features like distributed query optimization, SQL and SPARQL support, full ACID transactions, clustering and high availability. It also provides native storage and management of relational, XML and RDF data with full text search capabilities.
Linked Data Driven Data Virtualization for Web-scale Integration - rumito
- Linked data and data virtualization can help address challenges of growing data heterogeneity, complexity, and need for agility by providing a common data model and identifiers.
- Linked data uses RDF to represent information as graphs of triples connected by URIs, allowing different data sources to be integrated and queried together.
- As more data is published using common vocabularies and linking to existing URIs, it increases opportunities for discovery, integration and novel ways to extract value from diverse data sources.
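The bullet points above can be sketched in plain Python, without an RDF library: each fact is a (subject, predicate, object) triple, and shared URIs let independently produced data merge freely. All URIs below are hypothetical example.org identifiers invented for illustration.

```python
# Triples from a hypothetical "staff" source
staff = {
    ("http://example.org/people/alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
    ("http://example.org/people/alice", "http://example.org/vocab/worksFor",
     "http://example.org/org/acme"),
}

# Triples from a hypothetical "projects" source, reusing the same URI for Alice
projects = {
    ("http://example.org/people/alice", "http://example.org/vocab/leads",
     "http://example.org/projects/p1"),
}

# Integration is just set union: the shared identifier connects the graphs
graph = staff | projects

# Query: everything known about Alice across both sources
about_alice = {(p, o) for (s, p, o) in graph
               if s == "http://example.org/people/alice"}
print(len(about_alice))  # 3 facts from two independent sources
```

The point of the sketch is that no schema alignment was needed: because both sources chose the same URI for Alice, their triples combine by simple union.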
The document discusses the concepts of linked data, how it can be created and deployed from various data sources, and how it can be exploited. Linked data allows accessing data on the web by reference using HTTP-based URIs and RDF, forming a giant global graph. It can be generated from existing web pages, services, databases and content, and deployed using a linked data server. Exploiting linked data allows discovery, integration and conceptual interaction across silos of heterogeneous data on the web and in enterprises.
The document summarizes a presentation on implementing OpenURL version 1.0. Key points include:
- OpenURL 1.0 expands on version 0.1 by allowing richer metadata, new genres, extensibility through formatting and registering new elements.
- It separates the ContextObject, which describes a referenced item and its context, from its transport via HTTP. ContextObjects can be passed by value or reference.
- The San Antonio Profile provides guidelines for compliant implementation, including recommended formats, entities, and transports.
- Creating OpenURL links involves specifying the resolver URL, referrer, referent identifiers, and optional metadata in a key-value format.
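The key-value link construction described above can be sketched with the standard library. The resolver base URL and the metadata values are hypothetical; the parameter keys (`url_ver`, `rft_val_fmt`, `rft.atitle`, `rfr_id`, and so on) follow the OpenURL 1.0 key/encoded-value (KEV) format, in which the referent is passed by value.

```python
from urllib.parse import urlencode

resolver = "https://resolver.example.edu/openurl"   # hypothetical resolver URL
params = {
    "url_ver": "Z39.88-2004",                        # OpenURL 1.0 version
    "url_ctx_fmt": "info:ofi/fmt:kev:mtx:ctx",       # ContextObject format
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",   # referent metadata format
    "rft.atitle": "An example article",              # referent passed by value
    "rft.jtitle": "Journal of Examples",
    "rft.date": "2004",
    "rfr_id": "info:sid/example.org:demo",           # referrer identifier
}
link = resolver + "?" + urlencode(params)
print(link)
```

`urlencode` handles the percent-encoding that the KEV transport requires, so the result can be used directly as an HTTP GET link to the resolver.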
The document describes the structure and format of blocks in a key-value store. It outlines the header, data, leaf index, meta, and trailer blocks. The header includes metadata like the block type and sizes. Data blocks contain compressed and uncompressed key-value entries. Leaf index blocks point to data blocks and include offsets. Meta blocks store metadata indexes. The trailer contains version and load-on-open metadata.
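The block layout described above can be illustrated with a simplified, hypothetical sketch: data blocks of length-prefixed key-value entries, an index block recording each data block's first key and offset, and a trailer recording where the index starts. This is illustrative only, not the real byte format of any particular store.

```python
import io
import struct

def write_kv_file(buf, entries, block_size=2):
    index = []                       # (first_key, offset) per data block
    for i in range(0, len(entries), block_size):
        block = entries[i:i + block_size]
        index.append((block[0][0], buf.tell()))
        for key, value in block:     # length-prefixed key and value
            buf.write(struct.pack(">II", len(key), len(value)))
            buf.write(key + value)
    index_offset = buf.tell()        # index block: offsets of data blocks
    buf.write(struct.pack(">I", len(index)))
    for first_key, offset in index:
        buf.write(struct.pack(">IQ", len(first_key), offset) + first_key)
    buf.write(struct.pack(">Q", index_offset))   # trailer: index location

def read_index(buf):
    buf.seek(-8, io.SEEK_END)        # trailer is fixed-size, read it first
    (index_offset,) = struct.unpack(">Q", buf.read(8))
    buf.seek(index_offset)
    (n,) = struct.unpack(">I", buf.read(4))
    index = []
    for _ in range(n):
        klen, offset = struct.unpack(">IQ", buf.read(12))
        index.append((buf.read(klen), offset))
    return index

buf = io.BytesIO()
write_kv_file(buf, [(b"a", b"1"), (b"b", b"2"), (b"c", b"3")])
print(read_index(buf))  # [(b'a', 0), (b'c', 20)]
```

Reading the trailer first, then the index, then seeking directly into a data block is the access pattern that makes block-indexed formats efficient for sorted key lookups.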
Flexible metadata schemes for research data repositories - Clarin Conference... - Vyacheslav Tykhonov
The development of the Common Framework in Dataverse and the CMDI use case. Building an AI/ML-based workflow for predicting concepts from external controlled vocabularies and linking them to CMDI metadata values.
This document discusses the DDS-PSM-Cxx standard for implementing the Data Distribution Service (DDS) in C++. It provides an overview of the key concepts in DDS including domains, topics, publishers, subscribers, datawriters and datareaders. It also describes content filtering, queries, instances and state-based selection. The document notes that simd-cxx influenced DDS-PSM-Cxx and that simd-cxx v1.0 implements this standard. It provides references to related DDS implementations and APIs.
HFile: A Block-Indexed File Format to Store Sorted Key-Value Pairs - Schubert Zhang
HFile is a mimic of Google's SSTable and is now available in Hadoop HBase 0.20.0. Previous HBase releases temporarily used an alternate file format, MapFile, a common file format in the Hadoop IO package. I think HFile should also become a common file format once it matures, and should be moved into Hadoop's common IO package in the future.
Presentation for the CLARIAH IG Linked Open Data on the latest developments for the Dataverse FAIR data repository. Building a SEMAF workflow with support for external controlled vocabularies and a Semantic API.
This document discusses Last.fm's use of HFiles outside of HBase. It summarizes tests performed comparing Last.fm's original plain text file format to a new binary format based on HFiles. The HFile format reduced file size by 80% and query times by over 90%. Last.fm is moving its chartserver data storage to HBase to address indexing slowness and allow different teams to use different NoSQL systems. The document also advertises two open data scientist positions at Last.fm.
DODS (Distributed Oceanographic Data System) is a software package that helps users provide and access data over the internet in a consistent way. It allows data analysis tools to access datasets from any location as if the data were local. DODS uses client and server architecture, with clients making requests via URLs that are processed by DODS servers to deliver subsetted data in the expected format. This transforms tools into "network-savvy" clients that can access remote data from any DODS server regardless of the native data format.
The document discusses the goals and major specifications of the DCMI Architecture Forum. It aims to document the DCMI metadata framework, develop technical specifications, and provide feedback on technical issues. Major specifications discussed include the DCMI Abstract Model, expressions for expressing DCMI metadata in different formats like RDF and XML, and the Singapore Framework for DC Application Profiles. It also discusses different levels of interoperability and introduces Description Set Profiles as a way to formally represent the constraints of a Dublin Core Application Profile.
Data FAIRport Prototype & Demo - Presentation to Elsevier, Jul 10, 2015 - Mark Wilkinson
A discussion and demonstration of a functional Data FAIRport, using W3C's Linked Data Platform, Ruben Verborgh's Linked Data Fragments, and Hydra's hypermedia controlled vocabularies. This is the output of the "Skunkworks" working group of the larger Data FAIRport project (http://datafairport.org).
The document proposes making the Metadata for Learning Resources (MLR) standard interoperable by basing it on semantic technologies and the Resource Description Framework (RDF) model to allow machines to process metadata consistently across systems. It suggests MLR define properties, classes, and application profiles to structure metadata and leverage existing standards like Dublin Core rather than creating a new "metadata island". Developing MLR in this way would enable large-scale interoperability through linked open data.
This document summarizes three popular Java frameworks for working with RDF and SPARQL: Jena, Sesame, and JRDF. It describes how each framework represents RDF data using a graph model with subjects, predicates, and objects. It also discusses how each framework supports querying RDF data using SPARQL or alternative query languages, and persisting RDF graphs to databases.
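The graph model these frameworks share can be sketched without any library: statements are (subject, predicate, object) triples, and querying is pattern matching where a wildcard stands in for a variable, roughly what a single SPARQL triple pattern does. The `ex:` names below are made-up example identifiers, not real vocabulary terms.

```python
TYPE = "rdf:type"

graph = [
    ("ex:jena",   TYPE,      "ex:Framework"),
    ("ex:sesame", TYPE,      "ex:Framework"),
    ("ex:jena",   "ex:lang", "Java"),
]

def match(graph, s=None, p=None, o=None):
    """Return triples matching the pattern; None matches anything."""
    return [(ts, tp, to) for (ts, tp, to) in graph
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# SPARQL-ish: SELECT ?x WHERE { ?x rdf:type ex:Framework }
frameworks = [s for (s, _, _) in match(graph, p=TYPE, o="ex:Framework")]
print(frameworks)  # ['ex:jena', 'ex:sesame']
```

Real frameworks add indexing, SPARQL parsing, and persistent storage on top, but the triple-pattern core of a query engine looks much like this.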
Java 5 PSM for DDS: Initial Submission (out of date) - Rick Warren
Presentation to the OMG's MARS Task Force in June, 2010 on proposed improvements to the Java API to the OMG's Data Distribution Service specification (DDS).
Deploying PHP applications using Virtuoso as Application Server - webhostingguy
Virtuoso can act as an application server for PHP applications, providing both web server and database functionality. It exposes application data as RDF, allowing for more advanced querying across applications. Existing PHP applications like PHPBB, Drupal, and WordPress have been set up to work with Virtuoso and expose their data as RDF through a mapping process. Developers can build Virtuoso from source to include PHP support, enabling the hosting of PHP applications and accessing of application data as RDF through a SPARQL endpoint.
This work provides a solution for Semantic Web issues - metadata vocabularies, ontological modeling of resources, automated reasoning according to a user profile - within a web browser. It focuses on tasks such as automatic classification of the sites a user visits, along with references that are similar in content or design.
Oleg Bogut - Decoupled Drupal: how to build stable solution with JSON:API, Re... - DrupalCamp Kyiv
This document discusses building a decoupled Drupal site architecture using JSON:API, ReactJS, and Elasticsearch. It defines decoupled Drupal as exposing Drupal data via web services for consumption by other applications. Key points covered include advantages of decoupling like content syndication and frontend developer experience. JSON:API and GraphQL are presented as options for the Drupal API. ReactJS is recommended for building client-side applications. Elasticsearch is proposed for site search. Performance tuning and caching strategies are also addressed.
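On the consuming side, a client-side application reads the backend's JSON:API responses. The sketch below parses a hand-written example payload (not real Drupal output) with the standard library; JSON:API wraps each resource in `data` with `type`, `id`, and `attributes` members, and `node--article` is the conventional Drupal resource type for article nodes.

```python
import json

payload = json.loads("""
{
  "data": [
    {"type": "node--article", "id": "1",
     "attributes": {"title": "Hello", "status": true}},
    {"type": "node--article", "id": "2",
     "attributes": {"title": "Decoupling", "status": false}}
  ]
}
""")

# Flatten each resource into a simple dict for the client-side app
articles = [
    {"id": res["id"], **res["attributes"]}
    for res in payload["data"]
    if res["type"] == "node--article"
]
print([a["title"] for a in articles])  # ['Hello', 'Decoupling']
```

A React component would typically do the equivalent flattening after a `fetch` call, then render the resulting list.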
This document discusses two techniques for accessing OpenDocument Format (ODF) documents on the web: converting ODF to Atom format and converting ODF to JSON format. It provides examples of using these techniques to generate metadata feeds and publish spreadsheet data. Potential uses and enhancements of these approaches are also outlined.
Alex Wade, Digital Library Interoperability - parker01
This document discusses digital library interoperability and Microsoft's efforts to support interoperability through various initiatives and technologies. Microsoft External Research aims to advance research through partnerships and provides tools and services to support the entire research process. Microsoft is committed to interoperability and provides open access, open tools, and open technologies. Microsoft has established several interoperability principles around open connections, standards support, and data portability. Microsoft is working to improve document and data interoperability through various projects and platforms like Zentity, which provides a repository for research outputs that supports various standards and protocols. Challenges and opportunities around digital libraries and interoperability in cloud computing environments are also discussed.
This document compares the IndexedDB and SQLite databases. It begins with an introduction that describes the competition between native and web applications and the need for local data storage in web applications. It then outlines the theoretical background of IndexedDB and SQLite, describing their key characteristics and features. The document presents the research questions regarding the performance and security of IndexedDB compared to SQLite.
Introduction Java Web Framework and Web Server - suranisaunak
The document discusses Java 2 Enterprise Edition (J2EE) and frameworks. It defines J2EE as a set of standard specifications for building large distributed applications using components like Java servlets, JavaServer Pages, and Enterprise JavaBeans. Frameworks provide reusable code and APIs that help develop applications faster by handling common tasks. The document lists several Java persistence and web service frameworks and describes features that distinguish frameworks from normal libraries like inversion of control.
This document describes a final year project to develop an SQL converter tool. The tool will convert SQL database files to XML and JSON file formats. The objectives are to identify suitable semi-structured data formats for converted structured SQL data and develop a tool that allows users to upload SQL files, select an output format, and download the converted XML or JSON files. The project uses Java and follows an iterative development methodology. The prototype developed allows users to perform basic SQL to XML/JSON conversions through a web interface.
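The conversion the tool performs can be sketched with the standard library: read rows from a SQL database (here an in-memory SQLite table, standing in for the uploaded SQL file) and emit them as JSON and XML. The table and column names are invented for the example.

```python
import json
import sqlite3
import xml.etree.ElementTree as ET

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)", [(1, "Ada"), (2, "Lin")])

cols = ["id", "name"]
rows = [dict(zip(cols, r)) for r in con.execute("SELECT id, name FROM users")]

# JSON output: a list of row objects
as_json = json.dumps(rows)

# XML output: one <row> element per record, columns as child elements
root = ET.Element("users")
for row in rows:
    el = ET.SubElement(root, "row")
    for col, val in row.items():
        ET.SubElement(el, col).text = str(val)
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)  # [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Lin"}]
print(as_xml)
```

The project described above wraps this kind of conversion in a web upload/download interface, but the core mapping from rows to semi-structured records is this simple.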
gRPC, GraphQL, REST - Which API Tech to use - API Conference Berlin Oct 20 - Phil Wilkins
The document discusses different API technologies including gRPC, GraphQL, and REST. It provides overviews of each technology, describing their origins, key concepts, pros, and cons. gRPC was developed by Google and uses protocol buffers for messages and HTTP/2 for transport. GraphQL was created by Facebook and uses a query language for clients to specify the exact data they need. REST is the more established standard based on HTTP and uses URIs for identification of resources.
XML and JSON are both commonly used data formats, but they have key differences. XML is an extensible markup language that defines rules for encoding documents in a human and machine-readable format. It was created by the W3C and supports features like namespaces, comments, and complex data types. JSON is a simpler text format used for data interchange, derived from JavaScript. It supports native representation of arrays and objects, and is commonly used to transmit data between servers and web apps. JSON has a simpler syntax than XML and is generally easier for developers to work with.
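One difference noted above can be shown concretely: JSON represents arrays and objects natively, while XML encodes the same structure as nested, repeated elements. The record below is invented sample data; both outputs use only the standard library.

```python
import json
import xml.etree.ElementTree as ET

record = {"name": "sensor-1", "readings": [3, 5, 8]}

# JSON: the list is a first-class value
print(json.dumps(record))  # {"name": "sensor-1", "readings": [3, 5, 8]}

# XML: the list becomes repeated <reading> child elements
root = ET.Element("record")
ET.SubElement(root, "name").text = record["name"]
readings = ET.SubElement(root, "readings")
for value in record["readings"]:
    ET.SubElement(readings, "reading").text = str(value)
print(ET.tostring(root, encoding="unicode"))
```

The XML side also has to stringify the numbers, since element text is always text; a consumer needs a schema or convention to recover the original types, which is part of why JSON is often the easier format for simple data interchange.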
This document discusses several technologies for data transactions in rich internet applications (RIAs), including REST, AMF, Flex-Ajax Bridge, JSON, and JSONRequest. REST uses XML, URIs and HTTP to enable distributed computing on the web. AMF is a proprietary data format used in Flash applications. The Flex-Ajax Bridge allows exposing ActionScript classes to JavaScript. JSON is a lightweight data interchange format used for transmitting data between client and server. JSONRequest proposes a new browser service for two-way data exchange using JSON.
The document discusses Apache CouchDB, a NoSQL database management system. It begins with an overview of NoSQL databases and their characteristics like being non-relational, distributed, and horizontally scalable. It then provides details on CouchDB, describing it as a document-oriented database using JSON documents and JavaScript for queries. The document outlines CouchDB's features like schema-free design, ACID compliance, replication, RESTful API, and MapReduce functions. It concludes with examples of CouchDB use cases and steps to set up a sample project using a CouchDB instance with sample employee data and views/shows to query the data.
Decoupled Drupal: What This Means for Developers - Acquia
Recently, decoupled content management has been taking the front-end world by storm as developers seek new ways to leverage battle-tested back ends alongside more flexible, extensible front ends. JavaScript frameworks with ever-quickening advances and native applications can integrate seamlessly with "headless" back ends such as Drupal by bypassing the theme layer completely.
What are some of the implications of this newly decoupled world for front-end Drupal developers and designers? In this webinar, gain insight into the trends and new ideas emerging on the topic of decoupled Drupal. Also learn about decoupled Drupal against the backdrop of the rapidly changing front-end ecosystem, taking into consideration the impacts in areas such as Web Components, abstract DOMs, Drupal’s theme layer, and presentation.
Designers, front end developers, and Drupal themers of all skill levels will benefit from this webinar. Attendees will learn:
- Advantages and disadvantages of going headless, as well as for going with a JavaScript framework
- Managing content and headless Drupal - what this means for developers
- How to integrate with frameworks and native applications
- The future of markup and the theme layer, as well as the future of the front end and Drupal
Video: https://www.youtube.com/watch?v=Rt2oHibJT4k
Technologies such as Hadoop have addressed the "Volume" problem of Big Data, and technologies such as Spark have recently addressed the "Velocity" problem – but the "Variety" problem is largely unaddressed: there is still a lot of manual "data wrangling" to manage data models.
These manual processes do not scale well. Not only is the variety of data increasing, also the rate of change in the data definitions is increasing. We can’t keep up. NoSQL data repositories can handle storage, but we need effective models of the data to fully utilize it.
This talk will present tools and a methodology to manage Big Data Models in a rapidly changing world. This talk covers:
- Creating Semantic Metadata Models of Big Data Resources
- Graphical UI Tools for Big Data Models
- Tools to synchronize Big Data Models and Application Code
- Using NoSQL Databases, such as Amazon DynamoDB, with Big Data Models
- Using Big Data Models with Hadoop, Storm, Spark, Giraph, and Inference
- Using Big Data Models with Machine Learning to generate Predictive Models
- Developer Collaborative/Coordination processes using Big Data Models and Git
- Managing change - Big Data Models with rapidly changing Data Resources
This is a talk that I gave at BioIT World West on March 12, 2019. The talk was called: A Gen3 Perspective of Disparate Data: From Pipelines in Data Commons to AI in Data Ecosystems.
X api chinese cop monthly meeting feb.2016Jessie Chuang
The document summarizes the topics discussed at an XAPI Chinese CoP meeting in February 2016. It covered the XAPI vocabulary specification, linked data/semantic web, linked data in education and content recommendation, semantic search and Google Knowledge Graph, monetizing data and adding intelligence. It also included a case study on Hong Ding Educational Technology using XAPI data and partnerships to provide differentiated learning paths. The document emphasized collaborating on standards for competency, user data, content metadata and xAPI statements to enable partnerships and monetizing data while ensuring security, regulation and collective decision making.
This document provides an overview of CouchDB, a document-oriented NoSQL database. It discusses key CouchDB concepts like using JSON documents to store data, JavaScript-based MapReduce functions to query data, and an HTTP-based API. It also covers CouchDB features such as replication and eventual consistency. Pros noted are flexibility in data schemas and parallel indexing for queries. Cons include needing to pre-define views for queries and implementing join/sort logic client-side. Related projects like PouchDB and TouchDB are also mentioned.
The document compares IIOP, RMI, and HTTP protocols. It states that IIOP is a CORBA transport protocol that enables interoperability between CORBA-compliant ORBs over TCP/IP. RMI is Java's built-in ORB that provides remote method invocation but lacks features like language interoperability that IIOP supports. While RMI is easier for Java programmers to use, IIOP is better suited as a backbone for the internet due to its supported services.
The document provides an overview of several topics related to web development and open source software:
- It describes search engine optimization (SEO) techniques including white hat and black hat approaches.
- It explains what .htaccess files are and how they are used to configure Apache web servers.
- It defines open source software as software with publicly accessible source code that can be modified and shared by anyone.
- It discusses ontologies, the semantic web, and key technologies like RDF, SPARQL and OWL that power semantic data linking and querying.
- It briefly introduces GNU, an open source operating system, and virtualization which creates virtual computer resources.
Similar to Difference between rdf, odata and gdata (20)
Difference between wcf and asp.net web apiUmar Ali
WCF is Microsoft's unified programming model for building service-oriented applications that supports multiple transport protocols and message exchange patterns. It enables building secure and reliable services that can integrate across platforms. ASP.NET Web API is a framework for building HTTP services and is optimized for browser and mobile access. It only supports HTTP protocol but provides MVC features like routing and controllers. WCF supports advanced protocols like reliable messaging while ASP.NET Web API is best for resource-oriented HTTP services that need to support a broad range of clients. The document compares key differences between WCF and ASP.NET Web API across areas like protocols, hosting, description, and when to choose each technology.
Difference between ActionResult() and ViewResult()Umar Ali
ActionResult() is an abstract base class that defines the general result type for MVC actions. ViewResult() is a concrete subclass of ActionResult() that renders a specified view to the response stream. Some key subtypes of ActionResult() include ViewResult(), PartialViewResult(), EmptyResult(), RedirectResult(), and JsonResult(). ActionResult() allows for polymorphism and dynamic behavior by returning different result types from an action. It should be used as the return type when an action may have different behaviors, while ViewResult() can be used when an action will definitely return a view.
Difference between asp.net mvc 3 and asp.net mvc 4Umar Ali
The document compares ASP.NET MVC 3 and ASP.NET MVC 4 across 12 categories. Some key differences include:
- Bundling and minification, display modes, and custom controller locations are only supported in MVC 4.
- The empty project template is truly empty in MVC 4, unlike MVC 3.
- Features like WebSockets, SignalR, recipes, mobile project templates, and Web API are new to MVC 4.
- Asynchronous controller implementation is simpler using async/await in MVC 4 versus AsyncController in MVC 3.
- MVC 4 has better support for Azure, Facebook/Twitter authentication, and various new project templates.
Difference between asp.net web api and asp.net mvcUmar Ali
The document compares ASP.NET Web API and ASP.NET MVC. ASP.NET Web API is focused on outputting raw data through HTTP services, while ASP.NET MVC is focused on outputting HTML views. Some key differences include: ASP.NET Web API assumes data comes from the query string or form body, while MVC assumes multiple sources; Web API supports content negotiation and self-hosting, while MVC does not; and Web API is better for non-browser clients while MVC is optimized for browsers. Both can be used together in a single project.
Difference between asp.net web forms and asp.net mvcUmar Ali
The document compares ASP.NET WebForms and ASP.NET MVC across 14 criteria. Some key differences include:
- ASP.NET WebForms uses a "Page Controller" pattern where each page has a code-behind class controller, while ASP.NET MVC uses a "Front Controller" pattern with a single central controller.
- ASP.NET WebForms is tightly coupled with the controller dependent on the view, while ASP.NET MVC is loosely coupled with separate and independent controller and view.
- This loose coupling makes ASP.NET MVC easier to test through test-driven development compared to WebForms.
- ASP.NET MVC gives developers full control over HTML, JavaScript and
ASP.NET MVC difference between questions list 1Umar Ali
The document lists 41 questions asking about differences between various concepts in ASP.NET MVC, web development and design patterns including:
- Differences between MVC and MVP, ViewBag and ViewData, routing in webforms vs MVC, MVC and Web API, Razor and ASPX view engines, MVC and Web Forms.
- Differences between TempData and Session, MVC project templates, asynchronous controller implementations between MVC versions.
- Differences between ways to render views and redirect in MVC, ViewData vs ViewBag vs TempData vs Session, ActionResult and ViewResult.
- Differences between MVC versions, partial and strongly typed views, MVC vs MVP vs M
This document provides two reference links to websites that can be used to quickly check if file hosting links are working or dead without having to manually check each link. The referenced sites allow for bulk checking of file hosting links to determine their status in a fast and efficient manner.
This document lists the Alexa global rankings and brief descriptions of various affiliate marketing network sites. It includes top-ranking sites like Google and Amazon affiliates as well as smaller networks ranging from the hundreds to tens of thousands in global rank. The document provides a high-level overview of major affiliate networks for people to consider and get more details by visiting the listed sites.
This document lists the Alexa global rankings and brief details of various online learning and tutorial sites. The top ranked sites include TutsPlus at 1,062, Lynda at 2,495, and Udemy at 5,818. Other popular sites mentioned are Teamtreehouse, Video2Brain, Pluralsight, Informit, VTC, Infiniteskills, Total Training, Educator, and Tekpub. The document directs to a site for further updates on global rankings of online learning sites.
This document lists the Alexa global rankings and brief details of the top 20 news websites, including Google News at #2, Yahoo News at #4, CNN at #8, Huffington Post at #15, New York Times at #17, and Fox News at #19. It concludes by providing a URL for more information on global website rankings.
How to create user friendly file hosting link sitesUmar Ali
This document provides tips for creating effective file hosting link sites using WordPress. It recommends using responsive themes for mobile access, creating unique content to improve search engine indexing, keeping posts SEO optimized with plugins, adding image alt text, checking links before posting with checker tools, removing broken links daily with plugins, and regularly updating dead links to maintain a high number of working downloads. The goal is to effectively share verified file links across devices while maintaining good rankings over time.
The document discusses a collection of weak hadiths that contradict and confuse Muslims. It notes that some hadiths contain errors or fabrications. The author argues that Muslims should critically examine hadiths to distinguish authentic ones from inauthentic ones, in order to have sound religious beliefs and practices based on reliable Islamic sources.
This document contains 23 hadith (sayings or traditions of the Prophet Muhammad) related to purification and cleanliness as narrated by various Sahabah (companions of the Prophet). The hadith cover topics like the parts of the body to be washed during purification, maintaining cleanliness in the home and clothes, and the virtues of being clean. The hadith are brief, usually only a few sentences, and emphasize the importance of physical and spiritual purification in Islam.
This provides a brief statistics of how many websites in the world are developed with ASP.Net Technology and the current Job Opportunities of studying the .NET.
This document lists the Alexa rankings of various Indian news websites along with brief details about each site. It provides the site name, global Alexa ranking, and a brief description for over 15 Indian news sites, with Indiatimes ranking the highest at 126 and Telegraphindia ranking the lowest at over 21,000. It concludes by providing a URL for more regular updates on global site rankings.
This document lists the Alexa global rankings and brief details for 15 photo sharing and hosting websites, ranging from Tumblr at rank 32 to Snapfish at rank 52,040. It also provides a link for further updates on website rankings and information.
There comes need to find files hosted on file hosting sites alone like rapidshare.com,mediafire.com,extabit.com etc., . Users who want to search effectively, then the following list of file hosting search engines will be useful.
AJAX allows asynchronous data loading without page reloads, while jQuery is a JavaScript library that simplifies AJAX calls and DOM manipulation. AJAX uses multiple technologies like CSS, HTML, DOM to provide new functionality by combining server-side processing with client-side changes. jQuery can access the front-end more easily without needing to understand the full AJAX procedure. AJAX can overload servers due to many connections, while jQuery is lighter weight and causes less overload.
The document discusses several differences between ADO.NET concepts including:
1) DataReader allows reading one record at a time in a forward-only manner while DataAdapter allows navigating records and updating data in a disconnected manner.
2) DataSet allows caching and manipulating disconnected data across multiple tables while DataReader requires an open connection and only retrieves data from a single query.
3) DataSet.Copy() copies both structure and data of a DataSet while DataSet.Clone() only copies the structure without any data.
4) ADO.NET uses XML, disconnected architecture, and the DataSet object while classic ADO uses binary format, requires active connections, and the Recordset object.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Difference between RDF, OData and GData

Abbreviation:
RDF: Resource Description Framework.
OData: Open Data Protocol.
GData: Google Data Protocol.
Meaning:

RDF: RDF is a W3C framework for representing information on the Web. The design of RDF is intended to meet the following goals:
i. having a simple data model
ii. having formal semantics and provable inference
iii. using an extensible URI-based vocabulary
iv. using an XML-based syntax
v. supporting use of XML schema datatypes
vi. allowing anyone to make statements about any resource
It is used in Mozilla to integrate and aggregate Internet resources. Mozilla RDF was originally used to support the Aurora/Sidebar user interface and SmartBrowsing metadata services. Its main use in Mozilla now is as a common data model and API for XUL-based applications.

OData: The Open Data Protocol (OData) is an open web protocol for querying and updating data. It allows a consumer to query a data source over HTTP and get the result back in formats like Atom, JSON or plain XML, including pagination, ordering or filtering of the data. Many of the building blocks that make up OData are standardized via Atom and AtomPub. The OData specification is available under the Microsoft Open Specification Promise (OSP). Microsoft has released an OData software development kit (SDK) consisting of libraries for .NET, PHP, Java, JavaScript, webOS, and the iPhone.

GData: GData provides a simple protocol for reading and writing data on the Internet, designed by Google. It combines common XML-based syndication formats (Atom and RSS) with a feed-publishing system based on the Atom Publishing Protocol, plus some extensions for handling queries. It relies on XML or JSON as a data format. Google provides GData client libraries for Java, JavaScript, .NET, PHP, Python, and Objective-C.
Logical Model:

RDF: Graph/EAV. Technology grounding (especially OWL) in Description Logic [12, 13]. "Open World Assumption" [27].

OData: Graph/EAV. AtomPub and EDM grounding in entity-relationship modelling [11]. "Closed World Assumption" [28], but with "Open Types" and "Dynamic Properties" [29].

GData: Unclear/mixed: whatever logical model Google uses behind its services, transcoded and exposed as an AtomPub/JSON view. Data relations and graphs are not controllable through the API; e.g. you cannot define a link between data elements that doesn't already exist. GData is primarily a client API.
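The graph/EAV model shared by RDF and OData can be made concrete with a minimal sketch: every fact is a (subject, predicate, object) triple, and querying is set filtering. All names and data below are illustrative, not from any real vocabulary deployment.

```python
# Minimal sketch of the graph/EAV (entity-attribute-value) logical model:
# each fact is a (subject, predicate, object) triple in a set.
triples = {
    ("ex:Alice", "foaf:knows", "ex:Bob"),
    ("ex:Alice", "foaf:name", "Alice"),
    ("ex:Bob", "foaf:name", "Bob"),
}

def objects(subject, predicate):
    """Return every value asserted for (subject, predicate)."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("ex:Alice", "foaf:knows"))  # {'ex:Bob'}
```

Note how this surfaces the assumption difference called out above: under RDF's open-world assumption, an empty result from `objects` means "not known", whereas a closed-world system would read it as "false".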
Physical model:

RDF: Not mandated, but probably backed by a triple store and serialised over HTTP as RDF/XML, JSON, TTL, N3 or another format. RDBMS backing or proxying is possible.

OData: Not mandated, but probably backed by existing RDBMS persistence [4, "Abstract Data Model"], or more precisely a non-triple store (there is no direct evidence for this, but the gist of the docs and examples suggests it as a typical use case), and serialised over HTTP with Atom/JSON according to the Entity Data Model (EDM) [6] and the Conceptual Schema Definition Language (CSDL) [11].

GData: Google applications and services publishing data in AtomPub/JSON format, with Google Data Namespace elements [58].
Intent:

RDF: Data syndication and web-level linking: "The goal of the W3C SWEO Linking Open Data community project is to extend the Web with a data commons by publishing various open data sets as RDF on the Web and by setting RDF links between data items from different data sources."

OData: Data publishing and syndication: "There is a vast amount of data available today and data is now being collected and stored at a rate never seen before. Much, if not most, of this data however is locked into specific applications or formats and difficult to access or to integrate into new uses."

GData: Google cloud data publishing [55]: "The Google Data Protocol provides a secure means for external developers to write new applications that let end users access and update the data stored by many Google products. External developers can use the Google Data Protocol directly, or they can use any of the supported programming languages provided by the client libraries."
Protocol, operations:

RDF: HTTP, content negotiation, RDF, REST GET. SPARQL 1.1 for update.

OData: HTTP, content negotiation, AtomPub/JSON, REST GET/PUT/POST/DELETE [9].

GData: HTTP, REST (PUT/POST? GET/PATCH/DELETE) [56].
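The wire-level differences above can be sketched with stdlib `urllib`. This is illustrative only: the requests are built but never sent, and the hosts and paths are invented.

```python
# Hedged sketch of how the three protocols' read operations look on the wire.
import urllib.request

# RDF / linked data: a plain GET, with the serialisation chosen by
# content negotiation via the Accept header.
rdf_req = urllib.request.Request(
    "http://example.org/resource/Alice",
    headers={"Accept": "text/turtle"},
)

# OData: REST GET/PUT/POST/DELETE against entity URLs.
odata_req = urllib.request.Request(
    "http://example.org/odata/Customers('ALFKI')",
    headers={"Accept": "application/json"},
    method="GET",
)

# GData: GET with the representation selected by the alt query parameter.
gdata_req = urllib.request.Request("http://example.org/feeds/entries?alt=json")

print(rdf_req.get_header("Accept"))  # text/turtle
print(odata_req.get_method())        # GET
```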
Openness/Extensibility:

RDF: Any and all: create your own ontologies/namespaces/URIs with RDFS/OWL/SKOS/…; large open-source tooling and community; multiple serialisations (RDF/XML, JSON, N3, TTL, …).

OData: Any and all (with a "legacy" Microsoft base), while reusing Microsoft classes, types and namespaces (EDM) [6] with Atom/JSON serialisation. Large Microsoft tooling and integration, with others following [7, 8].

GData: Google applications and services only.
URI minting, dereferencing:

RDF: Create your own URIs and namespaces following guidelines ("slash vs hash") [15, 16]. Subject, predicate and object URIs must be dereferenceable, and content negotiation is expected. Separation of concept URI and location URI is central.

OData: Unclear whether concept URI and location URI are distinguished in the specification; values can certainly be location URIs, and IDs can be URIs, but attribute properties aren't dereferenceable to location URIs. Well-specified URI conventions [21].

GData: Atom namespace; <link rel="self" …/> denotes the URI of an item. ETags are also used for versioned updates. Google Data namespace for content "Kinds" [59]; no dereferencing.
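The "slash vs hash" guideline mentioned above has a mechanical consequence worth seeing: a URI fragment never reaches the server, so one document can describe many hash-identified concepts, while a slash URI needs server-side machinery (content negotiation, 303 redirects) to separate concept from document. A small sketch with invented URIs:

```python
# Illustrative sketch of "slash vs hash" URI handling.
from urllib.parse import urldefrag, urlparse

slash_uri = "http://example.org/id/Alice"
hash_uri = "http://example.org/people#Alice"

# What the client actually dereferences over HTTP:
print(urldefrag(slash_uri).url)  # http://example.org/id/Alice
print(urldefrag(hash_uri).url)   # http://example.org/people

# The fragment stays client-side, distinguishing the concept
# from the document that describes it:
print(urlparse(hash_uri).fragment)  # Alice
```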
Linking, matching, equivalence:

RDF: External entities can inherently be linked directly by reference, and equivalence is possible with owl:sameAs, owl:seeAlso (and other equivalence assertions).

OData: Navigation properties link entity elements within a single OData materialisation; external linkage is not possible. Dereferenceable attribute properties are not possible but have been proposed [10].

GData: URIs are not dereferenceable; linkage outside of Google is not possible.
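What owl:sameAs buys you can be sketched over plain triples: facts asserted against either of two equivalent URIs become visible through both. The data and the one-hop resolution below are illustrative simplifications, not a full OWL reasoner.

```python
# Sketch of equivalence via owl:sameAs (illustrative data, one-hop only).
triples = {
    ("dbpedia:Dublin", "owl:sameAs", "geonames:2964574"),
    ("dbpedia:Dublin", "rdfs:label", "Dublin"),
    ("geonames:2964574", "geo:population", "553165"),
}

def same_as(uri):
    """uri plus every URI declared equivalent to it (one hop only)."""
    equiv = {uri}
    for s, p, o in triples:
        if p == "owl:sameAs" and s == uri:
            equiv.add(o)
        elif p == "owl:sameAs" and o == uri:
            equiv.add(s)
    return equiv

def facts(uri):
    """All non-sameAs facts reachable through declared equivalences."""
    ids = same_as(uri)
    return {(s, p, o) for s, p, o in triples if s in ids and p != "owl:sameAs"}

# Asking about dbpedia:Dublin also surfaces the population fact
# asserted against the equivalent geonames URI.
print(facts("dbpedia:Dublin"))
```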
Namespace handling, vocabularies:

RDF: Declare namespaces as required when importing public or "well known" ontologies/vocabularies, when creating SPARQL queries, or for shorthand URIs; create new ones as required for your own custom classes and instances.

OData: Namespaces are supported in EDM, but it is unclear whether you can create and use your own namespace, or whether it can be backed with a custom class/property definition (ontology). $metadata seems to separate type and service metadata from instance data both logically and physically; i.e. OData doesn't "eat its own dog food".

GData: AtomPub and Google Data namespace only.
Content negotiation:

RDF: Client and server negotiate content to best determination [17, 18].

OData: Client specifies or the server fails, or it defaults to the Atom representation [19]. Only XML serialisation for service metadata [40]. New MIME types introduced.

GData: Use the alt query parameter (the Accept header is not used) [57].
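A server-side view of the three selection styles can be sketched as one dispatch function. The `$format` system query option is OData's URL-based alternative to the Accept header; the fallback to Atom reflects OData's documented default. This is a toy illustration, not any real server's logic.

```python
# Illustrative dispatch: how a server might pick a serialisation under each
# style: Accept header (RDF-style negotiation), $format (OData), alt (GData).
def pick_format(accept_header=None, query=None):
    query = query or {}
    if "$format" in query:       # OData system query option
        return query["$format"]
    if "alt" in query:           # GData alt parameter
        return query["alt"]
    if accept_header:            # RDF-style header negotiation (first choice)
        return accept_header.split(",")[0].strip()
    return "atom"                # OData's default representation

print(pick_format(query={"alt": "json"}))                   # json
print(pick_format(accept_header="text/turtle, */*;q=0.1"))  # text/turtle
print(pick_format())                                        # atom
```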
Query capability:

RDF: Dereferenceability is a central principle of linked data, whether in a document, a local endpoint or a federated setting. The SPARQL query language [14] allows suitably equipped endpoints to service structured query requests and return serialised RDF, JSON, CSV, HTML, …

OData: Proposed dereferenceable URIs with a special $metadata path element allow type metadata to be retrieved [10]. Running a structured query against an OData service with something like SPARQL isn't possible.

GData: Query by author, category, fields.
Security, privacy, provenance:

RDF: No additional specifications above those supplied by the web/HTTP architecture. CORS is becoming popular as an access-filter method for cross-site syndication at the client level. Server-side access control. Standards for provenance and privacy are planned and under development [24]; W3C XG provenance group [25].

OData: No additional specifications above those mandated in HTTP/Atom/JSON [23, 31]. CORS use is possible for cross-site syndication. Dallas/Azure DataMarket for "trusted commercial and premium public domain data" [26].

GData: HTTP wire protocols, but in addition authentication (OpenID) and authorization (OAuth) are required; "ClientLogin" and AuthSub are deprecated [60]. No provenance handling.
Sources:
http://uoccou.wordpress.com/2011/02/17/linked-data-odata-gdata-datarss-comparison-matrix/
http://en.wikipedia.org/wiki/Open_Data_Protocol
http://en.wikipedia.org/wiki/GData
http://en.wikipedia.org/wiki/Resource_Description_Framework
http://www.w3.org/TR/2003/PR-rdf-concepts-20031215/
References:
[1] http://www.w3.org/wiki/SweoIG/TaskForces/CommunityProjects/LinkingOpenData
[2] http://www.w3.org/DesignIssues/LinkedData.html
[3] http://www.w3.org/TR/webarch
[4] http://www.microsoft.com/interop/osp/default.mspx
[5] http://www.w3.org/QA/2010/03/microsoft_bring_odata_to_a_w3c.html
[6] http://www.odata.org/developers/protocols/overview#EntityDataModel
[7] http://www.odata.org/producers
[8] http://www.odata.org/consumers
[9] http://www.odata.org/developers/protocols/operations