The RDF Report Card: Beyond the Triple Count
Leigh Dodds
My talk from the Semtech Biz conference in London.
I argued that it is time to move beyond discussing the size of datasets and to encourage a more nuanced view that takes in quality and utility.
The RDF Report Card is offered as one simple, high-level visualization.
NISO Webinar:
Experimenting with BIBFRAME: Reports from Early Adopters
About the Webinar
In May 2011, the Library of Congress officially launched a new modeling initiative, the Bibliographic Framework Initiative, as a linked data alternative to MARC. In November 2012 the Library announced the proposed model, called BIBFRAME. Since then, the library world has been moving from mainly theorizing about the BIBFRAME model to practical experimentation and testing. This experimentation is iterative, and continues to shape the model so that it becomes stable enough, and broadly acceptable enough, for adoption.
In this webinar, several institutions will share their progress in experimenting with BIBFRAME within their library system. They will discuss the existing, developing, and planned projects happening at their institutions. Challenges and opportunities in exploring and implementing BIBFRAME in their institutions will be discussed as well.
Agenda
Introduction
Todd Carpenter, Executive Director, NISO
Experimental Mode: The National Library of Medicine and experiences with BIBFRAME
Nancy Fallgren, Metadata Specialist Librarian, National Library of Medicine, National Institutes of Health, US Department of Health and Human Services (DHHS)
Exploring BIBFRAME at a Small Academic Library
Jeremy Nelson, Metadata and Systems Librarian, Colorado College
Working with BIBFRAME for discovery and production: Linked data for Libraries/Linked Data for Production
Nancy Lorimer, Head, Metadata Dept, Stanford University Libraries
An introduction deck for the Web of Data to my team, including basic semantic web, Linked Open Data, primer, and then DBpedia, Linked Data Integration Framework (LDIF), Common Crawl Database, Web Data Commons.
Presentation about User Contributed Interlinking at Scripting for the Semantic Web (SFSW) 2008 workshop at European Semantic Web Conference (ESWC) 2008
This presentation by Shana McDanold of Georgetown University was presented during the NISO Virtual Conference, BIBFRAME & Real World Applications of Linked Bibliographic Data, held on June 15, 2016
Slides from my workshop at Open Repositories 2016 about DSpace's Linked Data support. The slides include a short introduction into the Semantic Web and Linked Data, the main ideas behind the Linked Data support of DSpace, information on how to configure this feature and some examples about how to query DSpace installations for Linked Data.
A Web-scale Study of the Adoption and Evolution of the schema.org Vocabulary ...
Robert Meusel
Promoted by major search engines, schema.org has become a widely adopted standard for marking up structured data in HTML web pages. In this paper, we use a series of large-scale Web crawls to analyze the evolution and adoption of schema.org over time. The availability of data from different points in time for both the schema and the websites deploying data allows for a new kind of empirical analysis of standards adoption, which has not been possible before. To conduct our analysis, we compare different versions of the schema.org vocabulary to the data that was deployed on hundreds of thousands of Web pages at different points in time. We measure both top-down adoption (i.e., the extent to which changes in the schema are adopted by data providers) as well as bottom-up evolution (i.e., the extent to which the actually deployed data drives changes in the schema). Our empirical analysis shows that both processes can be observed.
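The kind of deployment data the study measures can be sampled with nothing but the standard library: find a page's JSON-LD blocks and read off the schema.org types they declare. This is a simplified sketch (the `JsonLdExtractor` class, the `schema_org_types` helper, and the sample page are illustrative; it is not the authors' extraction pipeline, which also covers Microdata and RDFa):

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True
            self.blocks.append("")

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            self.blocks[-1] += data

def schema_org_types(html_page):
    """Return the @type values declared in a page's JSON-LD blocks."""
    parser = JsonLdExtractor()
    parser.feed(html_page)
    types = []
    for block in parser.blocks:
        try:
            doc = json.loads(block)
        except json.JSONDecodeError:
            continue  # skip malformed blocks, common in crawl data
        for item in (doc if isinstance(doc, list) else [doc]):
            t = item.get("@type")
            if t:
                types.append(t)
    return types

page = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article", "name": "Example"}
</script>
</head><body></body></html>"""
print(schema_org_types(page))  # ['Article']
```

Run over crawls from different dates, counts of these types per site give exactly the adoption-over-time signal the paper analyzes.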
This presentation addresses the main issues of Linked Data and scalability. In particular, it gives details on approaches and technologies for clustering, distributing, sharing, and caching data. Furthermore, it addresses the means for publishing data through cloud deployment and the relationship between Big Data and Linked Data, exploring how some of the solutions can be transferred to the context of Linked Data.
The Information Workbench - Linked Data and Semantic Wikis in the Enterprise
Peter Haase
The Information Workbench is a platform for Linked Data applications in the enterprise. Targeting the full life-cycle of Linked Data applications, it facilitates the integration and processing of Linked Data following a Data-as-a-Service paradigm.
In this talk we present how we use Semantic Wiki technologies in the Information Workbench for the development of user interfaces for interacting with the Linked Data. The user interface can be easily customized using a large set of widgets for data integration, interactive visualization, exploration and analytics, as well as the collaborative acquisition and authoring of Linked Data. The talk will feature a live demo illustrating an example application, a Conference Explorer integrating data about the SMWCon conference, publications and social media.
We will also present solutions and applications of the Information Workbench in a variety of other domains, including the Life Sciences and Data Center Management.
Global Forest Decimal Classification (GFDC) and Global Forest Information Service (GFIS)
suggestions and implementation possibilities
Oxford, UK, 2-DEC-2005
Big Linked Data - Creating Training Curricula
EUCLID project
This presentation includes an overview of the basic rules to follow when developing training and education curricula for Linked Data and Big Linked Data
This tutorial explains the Data Web vision, some preliminary standards and technologies as well as some tools and technological building blocks developed by AKSW research group from Universität Leipzig.
Sigma EE: Reaping low-hanging fruits in RDF-based data integration
Richard Cyganiak
A presentation I gave at I-Semantics 2010 on Sigma EE, an RDF-based data integration front-end.
Sigma EE is now available for download here: http://sig.ma/?page=help
Overview of how data on the Web of Data can be consumed (first and foremost Linked Data) and implications for the development of usage mining approaches.
References:
Elbedweihy, K., Mazumdar, S., Cano, A. E., Wrigley, S. N., & Ciravegna, F. (2011). Identifying Information Needs by Modelling Collective Query Patterns. COLD, 782.
Elbedweihy, K., Wrigley, S. N., & Ciravegna, F. (2012). Improving Semantic Search Using Query Log Analysis. Interacting with Linked Data (ILD 2012), 61.
Raghuveer, A. (2012). Characterizing machine agent behavior through SPARQL query mining. In Proceedings of the International Workshop on Usage Analysis and the Web of Data, Lyon, France.
Arias, M., Fernández, J. D., Martínez-Prieto, M. A., & de la Fuente, P. (2011). An empirical study of real-world SPARQL queries. arXiv preprint arXiv:1103.5043.
Hartig, O., Bizer, C., & Freytag, J. C. (2009). Executing SPARQL queries over the web of linked data (pp. 293-309). Springer Berlin Heidelberg.
Verborgh, R., Hartig, O., De Meester, B., Haesendonck, G., De Vocht, L., Vander Sande, M., ... & Van de Walle, R. (2014). Querying datasets on the web with high availability. In The Semantic Web–ISWC 2014 (pp. 180-196). Springer International Publishing.
Verborgh, R., Vander Sande, M., Colpaert, P., Coppens, S., Mannens, E., & Van de Walle, R. (2014, April). Web-Scale Querying through Linked Data Fragments. In LDOW.
Luczak-Rösch, M., & Bischoff, M. (2011). Statistical analysis of web of data usage. In Joint Workshop on Knowledge Evolution and Ontology Dynamics (EvoDyn2011), CEUR WS.
Luczak-Rösch, M. (2014). Usage-dependent maintenance of structured Web data sets (Doctoral dissertation, Freie Universität Berlin, Germany), http://edocs.fu-berlin.de/diss/receive/FUDISS_thesis_000000096138.
Presentation at ELAG 2011, European Library Automation Group Conference, Prague, Czech Republic. 25th May 2011
http://elag2011.techlib.cz/en/815-lifting-the-lid-on-linked-data/
An introduction to the OCRE Cloud Framework - an EU-compliant procurement framework for cloud infrastructure as a service (IaaS), platform as a service (PaaS) and associated software as a service (SaaS).
Shared responsibility - a model for good cloud security
Andy Powell
An overview of the shared responsibility model that is typically adopted by cloud providers and its impact on the way that Jisc members should build secure solutions in public cloud.
Building the modern institution: how Jisc can support your cloud-based digita...
Andy Powell
Building the Modern Institution (a nod to Microsoft's use of the term 'Modern Workplace') requires a complete digital transformation of the way HE and FE organisations work. Whilst this is highly likely to also require significant migration to the public cloud, the primary challenges will be around leadership and culture more than around technology.
Developing a Cloud Based Infrastructure to Transform Working Practices and Se...
Andy Powell
Digital transformation is a leadership challenge. Not a technology challenge. Start with why. What do you want to achieve. A move to public cloud will be a likely consequence - but is not, in itself, a driver for change.
IT: Strategy, management and DIY in HE - a breakout group summary
Andy Powell
A report summarising two breakout sessions run at the Association of Heads of University Administration (AHUA) 2013 Autumn Conference in Nottingham, held during September 2013.
The breakout sessions were run by Stephen Butcher and Andy Powell of Eduserv and involved a total of around 35 senior managers at UK HE institutions. The intention was to investigate why HEIs tend to adopt a DIY approach to IT services.
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Neuro-symbolic is not enough, we need neuro-*semantic*
Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
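Link prediction over knowledge graphs, the running example in the talk, is commonly done by scoring candidate triples against learned embeddings. A minimal sketch using the TransE scoring function (the 2-d embeddings are invented for illustration, not trained; TransE is one standard technique, not necessarily the one used in the talk):

```python
import math

def transe_score(h, r, t):
    """TransE plausibility: negative L2 distance between h + r and t.
    Higher (closer to zero) means the triple is judged more plausible."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy 2-d embeddings (illustrative values only).
entities = {
    "Berlin":  [0.9, 0.1],
    "Germany": [1.0, 1.0],
    "France":  [0.0, 1.0],
}
relations = {"capitalOf": [0.1, 0.9]}

# Rank candidate tails for the query (Berlin, capitalOf, ?).
h, r = entities["Berlin"], relations["capitalOf"]
ranked = sorted(entities, key=lambda e: transe_score(h, r, entities[e]), reverse=True)
print(ranked[0])  # Germany
```

The "predictable inference" point lands here: the score is only meaningful to the extent the relation vector has a stable, semantics-like behaviour (h + capitalOf consistently lands near the country, whatever the head entity).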
DevOps and Testing slides at DASA Connect
Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We finished with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Accelerate your Kubernetes clusters with Varnish Caching
Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
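To make the idea of a power-flow computation concrete, here is a toy DC power flow in plain Python: the linearized approximation that solves bus voltage angles from line susceptances and net injections. All network values are invented for illustration, and this is not the PowSyBl API; PowSyBl's own solvers handle full AC models and real-world grids:

```python
def dc_power_flow(lines, injections, slack=0):
    """Solve a DC power flow on a small network by Gaussian elimination.
    lines: {(i, j): susceptance}; injections: {bus: net injection in p.u.}.
    Returns bus angles (slack fixed at 0) and per-line flows i -> j."""
    buses = sorted({b for ij in lines for b in ij})
    unknowns = [b for b in buses if b != slack]
    n = len(unknowns)
    idx = {b: k for k, b in enumerate(unknowns)}
    # Build the reduced susceptance matrix B and right-hand side P.
    B = [[0.0] * n for _ in range(n)]
    P = [injections.get(b, 0.0) for b in unknowns]
    for (i, j), b in lines.items():
        for a, c in ((i, j), (j, i)):
            if a == slack:
                continue
            B[idx[a]][idx[a]] += b
            if c != slack:
                B[idx[a]][idx[c]] -= b
    # Gaussian elimination (no pivoting; fine for this toy case).
    A = [row[:] + [p] for row, p in zip(B, P)]
    for k in range(n):
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            A[r] = [x - f * y for x, y in zip(A[r], A[k])]
    theta = [0.0] * n
    for k in reversed(range(n)):
        theta[k] = (A[k][n] - sum(A[k][c] * theta[c] for c in range(k + 1, n))) / A[k][k]
    angle = {slack: 0.0, **{b: theta[idx[b]] for b in unknowns}}
    flows = {(i, j): b * (angle[i] - angle[j]) for (i, j), b in lines.items()}
    return angle, flows

# 3-bus toy network: bus 0 is the slack, bus 1 has a 1.0 p.u. load,
# bus 2 generates 0.5 p.u.; all three lines have susceptance 10.
angle, flows = dc_power_flow(
    lines={(0, 1): 10.0, (0, 2): 10.0, (1, 2): 10.0},
    injections={1: -1.0, 2: 0.5},
)
print(round(flows[(0, 1)], 3))  # 0.5
```

The slack bus absorbs the imbalance (0.5 p.u. here), and every other bus's inflows match its injection, which is the sanity check any power-flow result must pass.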
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
RDTF Metadata Guidelines: an update
1. RDTF Metadata Guidelines: an update. Andy Powell (and Pete Johnston), RDTF Management Framework Project Board, 29 March 2011
2. Functional requirement: help libraries, museums and archives expose existing metadata (and new metadata created using existing practice) in ways that support the development of aggregator services and integrate well with the web (and the emerging web of data). Note: RDTF is not about re-engineering cataloguing practice in the LAM sectors
3. Guiding principles: support the RDTF Vision; informed by Paul Miller’s desk study review; flow from the JISC IE Technical Review meeting; in line with Linked Data principles; based on the W3C Linked Open Data Star Scheme; in line with Designing URI Sets for the UK Public Sector; take into account the Europeana Data Model and ESE; be informed by mainstream web practice and search engine behaviour; and be broadly in line with the notion of “making better websites” across the library, museum and archives sectors
4. Draft proposal: used the W3C Linked Open Data star scheme as a framework (at the 3-, 4- and 5-star levels), with three approaches: community formats, RDF data, and Linked Data. 196 comments were received, on pretty much all aspects of the draft
5. Guiding principles (repeated): support the RDTF Vision; informed by Paul Miller’s desk study review; flow from the JISC IE Technical Review meeting; in line with Linked Data principles; based on the W3C Linked Open Data Star Scheme; in line with Designing URI Sets for the UK Public Sector; take into account the Europeana Data Model and ESE; be informed by mainstream web practice and search engine behaviour; and be broadly in line with the notion of “making better websites” across the library, museum and archives sectors. We probably failed in this!!
6. Re-conceptualising the guidelines: a four-quadrant grid, RDF vs Not-RDF on one axis and Individual Item Descriptions vs Collections of Descriptions on the other, giving four quadrants: Linked Data, “page per thing”, “RDF Data” and “bulk download”
7. The draft guidelines (mapped onto the same four-quadrant grid)
8. The Web! (the same four-quadrant grid)
9. Possible adoption path (through the same four-quadrant grid)
10. Bulk download: “give us what you’ve got”. Serve existing community bulk-formats (e.g. files containing collections of MARC, MODS, BibTeX, DC/XML, SPECTRUM or EAD records) or CSV over RESTful HTTP; use sitemaps and robots.txt and/or RSS/Atom to advertise availability, and GZip for compression; for CSV, provide a column called ‘label’ or ‘title’ so we’ve got something to display; give us separate records (for CSV, read ‘rows’) about separate resources (where you can). Simples!
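Slide 10's CSV advice can be sketched in a few lines of standard-library Python: one row per resource, a 'label' column for display, and GZip compression for transfer (the example.org identifiers and field names are invented for illustration):

```python
import csv
import gzip
import io

# Records to expose; the 'label' column gives aggregators something to display.
records = [
    {"id": "https://example.org/item/1", "label": "A Tale of Two Cities",
     "creator": "Dickens, Charles"},
    {"id": "https://example.org/item/2", "label": "Bleak House",
     "creator": "Dickens, Charles"},
]

# Write one row per resource, with a header row naming the columns.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "label", "creator"])
writer.writeheader()
writer.writerows(records)

# GZip the result, as the slide suggests for bulk transfer.
compressed = gzip.compress(buf.getvalue().encode("utf-8"))

# Round-trip check: the payload decompresses back to the same rows.
rows = list(csv.DictReader(io.StringIO(gzip.decompress(compressed).decode("utf-8"))))
print(rows[0]["label"])  # A Tale of Two Cities
```

In a real deployment the compressed payload would be written to a file served over HTTP and advertised via sitemap.xml or a feed, per the slide.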
11. Page per thing: “build better websites”. Serve an HTML page (i.e. a description) for every “thing” of interest over RESTful HTTP; optionally serve alternative format(s) for each description (e.g. a MODS or DC/XML record) at separate URIs and link from the HTML descriptions using <link rel=“alternative” … />; use “cool” ‘http’ URIs for all descriptions; use sitemaps and robots.txt and/or RSS/Atom to advertise availability; optionally offer an OAI-PMH server to allow harvesting of all descriptions/formats
12. RDF data: “RDF bulk download”. Serve big buckets of RDF (as RDF/XML, N-Triples or N-Quads) over RESTful HTTP; re-use existing conceptual models and vocabularies where you can; assign URIs to every “thing” of interest; use Semantic Sitemaps and the Vocabulary of Interlinked Datasets (VoID) to advertise availability of the buckets
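As a minimal illustration of slide 12's "big buckets of RDF", here is a hand-rolled N-Triples serializer (the `ntriple` helper and example.org URIs are invented for illustration; a real deployment would use an RDF library rather than string formatting):

```python
def ntriple(s, p, o, literal=False):
    """Serialize one triple as an N-Triples line.
    URIs are wrapped in <>; literals are quoted with backslash escaping."""
    if literal:
        obj = '"%s"' % o.replace("\\", "\\\\").replace('"', '\\"')
    else:
        obj = "<%s>" % o
    return "<%s> <%s> %s ." % (s, p, obj)

# A two-triple bucket describing one catalogue item, using Dublin Core terms.
triples = [
    ntriple("https://example.org/item/1",
            "http://purl.org/dc/terms/title",
            "A Tale of Two Cities", literal=True),
    ntriple("https://example.org/item/1",
            "http://purl.org/dc/terms/creator",
            "https://example.org/person/dickens"),
]
print(triples[0])
# <https://example.org/item/1> <http://purl.org/dc/terms/title> "A Tale of Two Cities" .
```

One triple per line is what makes N-Triples a good bulk format: files concatenate, split, and stream trivially, which suits the "bucket" publishing the slide describes.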
13. Linked Data: “W3C 5 star approach”. Serve HTML and RDF/RDFa for every “thing” of interest over RESTful HTTP; assign ‘http’ URIs to every “thing” (and every description of a thing); follow “Cool URIs for the Semantic Web” recommended practice; become part of the web of data: link to other people’s stuff using their URIs
14. Recommendations: use the four-quadrant model to frame the guidelines (we think all four quadrants are useful, and that there should probably be some guidance on each area); develop specific guidance for serving an HTML page description per ‘thing’ of interest (possibly with associated, and linked, alternative formats such as DC/XML); develop (or find) specific guidance about how to sensibly assign persistent ‘http’ URIs to everything of interest (including both things and descriptions of things)
15. Also… that the definition of ‘open’ needs more work (particularly in the context of whether commercial use is allowed), but that this needs to be sensitive to not stirring up IPR worries in those domains where they are currently less of a concern; that mechanisms for making statements of provenance, licensing and versioning be developed where RDF triples are being made available (possibly in collaboration with Europeana work); and that a fuller list of relevant models that might be adopted, the relationships between them, and any vocabularies commonly associated with them be maintained separately from the guidelines themselves