A brief introduction to Linked Data Licensing, Rights Expression Languages and Linked Data Business Models given on September 6, 2013 at the I-SEMANTICS 2013, the 9th international conference on semantic systems, in Graz, Austria.
How problems with data protection affect science researchers, especially when sharing large datasets with researchers around the globe: issues and solutions.
AI and Applications in the Legal Domain – Studium Generale Maastricht – J.C. Scholtes
On November 20, 2019, it was my great pleasure to present a special lecture on Artificial Intelligence and Applications in the Legal Domain. In this lecture I discuss how the development of machines that can learn, reason and act intelligently – Artificial Intelligence (AI) – is advancing rapidly in the legal domain. In some areas, machine intelligence has already surpassed the limits of what the brightest human minds are capable of achieving, especially in the field of eDiscovery and legal review of large data sets.
In others, machines still struggle with seemingly basic tasks. Nonetheless, breakthroughs in AI already have a profound impact on the legal profession. AI is set to improve our world now and will continue to do so in the future. At the same time, there is the fear of losing control.
This lecture was part of a larger series on AI organized by our department of data science and knowledge engineering: https://www.maastrichtuniversity.nl/events/artificial-intelligence.
More information can be found here: https://textmining.nu
On October 29, 2019, I was invited to deliver the keynote at the LegalTech Alliance meeting on eDiscovery and Big Data, in which 11 law departments from the Universities of Applied Sciences in the Netherlands participate.
eDiscovery is more important than ever. Future legal professionals must be able to deal with large electronic data sets so they can:
- Take decisions based on facts and not based on guesses and assumptions;
- Answer information requests in a timely, accurate and complete manner;
- Avoid high cost, reputation damage, regulatory measures, business disruption and stress!
It is great that the LegalTech Alliance understands this need and embeds eDiscovery in its educational programs.
Attached are the slides of the workshop where we presented the eDiscovery course (including the hands-on with ZyLAB), which we developed together with the University of Applied Sciences in Amsterdam.
Paper 192 in CISTI 2021: OntoDRE: An Ontology for the Requirements Engineering Decision Process – James Miranda
TITLE: "OntoDRE: An Ontology For The Requirements Engineering Decision Process"
TO CITE:
J. W. Pontes Miranda and R. Cristiane Gratão de Souza, "OntoDRE: An ontology for the requirements engineering decision process," 2021 16th Iberian Conference on Information Systems and Technologies (CISTI), 2021, pp. 1-6, DOI: 10.23919/CISTI52073.2021.9476446.
BibTeX:
@INPROCEEDINGS{9476446, author={Pontes Miranda, James William and Cristiane Gratão de Souza, Rogéria}, booktitle={2021 16th Iberian Conference on Information Systems and Technologies (CISTI)}, title={OntoDRE: An ontology for the requirements engineering decision process}, year={2021}, volume={}, number={}, pages={1-6}, doi={10.23919/CISTI52073.2021.9476446}}
The official presentation took place online on 24 June 2021 during the "Software Systems, Architectures, Applications and Tools" session. For more information, visit http://www.cisti.eu/
Open Insights Harvard DBMI – Personal Health Train – Kees van Bochove, The Hyve
In this talk, the Personal Health Train concept is introduced: it enables running personalized medicine workflows as trains visiting data stations (e.g. hospital records, primary care records, clinical studies and registries, and patient-held data from wearable sensors). The Personal Health Train is a very powerful concept, but it depends on source medical data being coded with appropriate metadata on consent, license, scope etc., and on the data itself being encoded using biomedical data standards, an ever-growing field in biomedical informatics. To realize the Personal Health Train, biomedical data will need to be FAIR, i.e. adopt the FAIR Guiding Principles. This talk covers the emerging international GO FAIR movement and provides examples of how several European health data networks are adopting open-standards-based stacks to enable routine health care data to become accessible for research.
OSFair2017 Workshop | Towards a Policy Framework for the European Open Science Cloud – Open Science Fair
Workshop title: Towards a Policy Framework for the European Open Science Cloud
Workshop abstract:
The workshop provides a hands-on approach to both understanding the EU open science policies and applying them as a stakeholder. It seeks to explore, propose and test different aspects of policy documents created by and for different types of stakeholders (e.g. RPOs, funders, policy makers) in the context of the EOSC. Drawing on the EOSC policy work, the workshop invites participants to bring their own policies, or to work on model policies, to develop a simple but comprehensive policy document tailored to their needs and conforming to the EU policy and legal framework.
It is useful to the broader Open Science community as it brings together services, stakeholders and policies and allows for a better understanding of the interaction between different constituencies.
DAY 2 - PARALLEL SESSION 3
Libraries and Research Data Management – What Works? Lessons Learned from the... – LIBER Europe
This presentation by Dr Birgit Schmidt was given at the Scholarly Communication and Research Infrastructures Steering Committee Workshop. The workshop title was Libraries and Research Data Management – What Works?
General introduction to legal technology and legal AI, presented at the inaugural Helsinki Legal Tech Meetup on 2016-03-17 (for a more thorough overview, please see my Introduction to Legal Technology slides for lectures 1–10, also on SlideShare)
Based on social network theory, this article takes the co-patent network of China's mobile phone industry from 2003 to 2017 as its research object. A Poisson regression model is used to investigate the impact of network structure embeddedness on innovation output. The empirical results show that (1) there is an optimal cooperation size for firms in the co-patent network, i.e. moderate degree centrality may mean higher innovation output; (2) occupying more structural-hole positions in the co-patent network may increase firms' innovation output; and (3) higher clustering coefficients significantly reduce firms' innovation output.
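The estimation approach described above can be sketched in a few lines. This is an illustrative reconstruction with simulated data, not the authors' code or dataset: the variable names, effect sizes and sample size are assumptions invented for the example, and the inverted-U is built in via a quadratic degree term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated firm-level data standing in for the paper's variables
# (degree centrality, structural holes, clustering coefficient).
n = 500
degree = rng.uniform(0, 10, n)
holes = rng.uniform(0, 1, n)
clust = rng.uniform(0, 1, n)

# Inverted-U effect of degree (positive linear, negative quadratic term),
# positive structural-hole effect, negative clustering effect.
eta_true = 0.6 * degree - 0.06 * degree**2 + 1.0 * holes - 0.8 * clust
y = rng.poisson(np.exp(eta_true))          # count outcome: innovation output

X = np.column_stack([np.ones(n), degree, degree**2, holes, clust])

def poisson_fit(X, y, iters=100):
    """Fit a Poisson GLM (log link) by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(np.clip(X @ beta, -30, 30))   # clip to avoid overflow
        grad = X.T @ (y - mu)                      # score vector
        hess = X.T @ (X * mu[:, None])             # observed information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

beta = poisson_fit(X, y)
# beta[1] > 0 together with beta[2] < 0 gives the inverted-U
# (an optimal cooperation size); beta[3] > 0 and beta[4] < 0
# mirror findings (2) and (3).
```

Recovering a positive linear and negative quadratic degree coefficient is exactly what "moderate degree centrality means higher innovation output" looks like in a fitted Poisson model.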
OSFair2017 Workshop | Service provisioning for excellent sciences – Open Science Fair
Daan Broeder presents the EUDAT community
Workshop title: Organising high-quality research data management services
Workshop abstract:
Open science needs high-quality data management, where researchers can create, use and share data according to well-defined standards and practices. This is one of the pillars of Open Science. In the data management landscape we find quite a few organisations that aim to achieve this; to get it right, however, a collaboration is called for in which all can play a suitable role and present this in a consistent way to the researcher.
The proposed workshop brings together representatives of standards organisations (RDA), eInfrastructures (EUDAT) and libraries (LIBER) that together can organise high-quality data management for research.
DAY 1 - PARALLEL SESSION 2
http://opensciencefair.eu/workshops/organising-high-quality-research-data-management-services
Demystifying Semantics: Practical Utilization of Semantic Technologies for Rea... – OSTHUS
In our webinar on Jan 17th, 2017, Eric and Heiner gave attendees insights on the following:
1. What semantics are (model/data separation, graphs, apply better meaning to data, etc.)
2. Why you should consider using these technologies (real world examples of benefits our customers are seeing)
3. How to pick the right tech for your needs (provide a description of the types of graph/RDF stores out there – we have a matrix based on features – and show how various SPARQL queries work against legacy data.)
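To make point 3 concrete, here is a toy pure-Python illustration of the kind of pattern matching a SPARQL query performs over a graph/RDF store. The triples and the query are invented for the example; a real deployment would use an actual triple store and SPARQL rather than this in-memory sketch.

```python
# A minimal in-memory "triple store" and a SPARQL-like basic graph
# pattern match. Variables are strings starting with "?".
triples = {
    ("batch42", "producedBy", "reactor7"),
    ("batch42", "assayResult", "0.93"),
    ("batch57", "producedBy", "reactor7"),
    ("reactor7", "locatedIn", "site_A"),
}

def match(pattern, binding):
    """Yield variable bindings extending `binding` for one triple pattern."""
    for triple in triples:
        b = dict(binding)
        ok = True
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                if p in b and b[p] != t:   # variable already bound differently
                    ok = False
                    break
                b[p] = t
            elif p != t:                   # constant term must match exactly
                ok = False
                break
        if ok:
            yield b

def query(patterns):
    """Join several triple patterns, like a SPARQL basic graph pattern."""
    bindings = [{}]
    for pat in patterns:
        bindings = [b2 for b in bindings for b2 in match(pat, b)]
    return bindings

# "Which batches were produced by equipment located at site_A?"
rows = query([("?batch", "producedBy", "?eq"), ("?eq", "locatedIn", "site_A")])
print(sorted(b["?batch"] for b in rows))   # → ['batch42', 'batch57']
```

The join across the shared `?eq` variable is the essence of how graph stores exploit metadata links that stay implicit in a flat legacy table.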
Current challenges facing the implementation of NoSQL-type databases involve how to use advanced rule-based analytics on large tables and key-value stores, where metadata is often sparse. Graph databases and triple stores are great for exploiting one's metadata, but are often computationally inefficient compared to NoSQL stores. To combat this problem, Modus Operandi will showcase a Predicate Store inside its MOVIA product that can run advanced first-order logical rule sets and queries directly against large tables or column stores, providing scalable, rapid and advanced data analytics for cloud applications. This delivers graph-level complexity of content with the performance and scalability of NoSQL data approaches. The system also allows statistical algorithms and logic-based rule sets to run concurrently, meaning a host of parallel analytics can run at once, providing deep analysis over a multitude of important pattern types.
Introduction to Legal Technology, lecture 7 (2015) – Anna Ronkainen
Slides for lecture 7 of the course Introduction to Legal Technology at the University of Turku Law School, presented Feb 17 2015.
This lecture is the third of three lectures on specific legal technology applications: decision support, prediction, automation, and self-service.
Introduction to Legal Technology, lecture 2 (2015) – Anna Ronkainen
Slides for lecture 2 of the course Introduction to Legal Technology at the University of Turku Law School, presented Jan 27 2015.
This lecture presents a brief history and overview of legal technology and legal AI through the 20th century.
CINECA webinar slides: Ethical, legal and societal issues in international data sharing – CINECAProject
The CINECA webinar series continues with a presentation by Dr. Éloïse Gennet (INSERM) and Dr. Melanie Goisauf (BBMRI-ERIC) on Ethical, Legal and Societal Issues in international data sharing.
The goal of this webinar is to present the first findings of the ELSI activities in the CINECA project, ranging from the ethics of data sharing across continents to the legal basis of secondary processing of personal data, consent requirements and vulnerable groups, and public and stakeholder attitudes toward sharing genomic and health-related data for research.
The CINECA (Common Infrastructure for National Cohorts in Europe, Canada, and Africa) project aims to develop a federated, cloud-enabled infrastructure that makes population-scale genomic and biomolecular data accessible across international borders, to accelerate research and improve the health of individuals across continents. CINECA will leverage international investment in human cohort studies from Europe, Canada, and Africa to deliver a paradigm shift in federated research and clinical applications.
This webinar took place on 24 January 2020. A recording is available through the CINECA website.
https://www.cineca-project.eu/news-events-all/ethical-legal-and-societal-issues-in-international-data-sharing
For upcoming CINECA webinars see:
https://www.cineca-project.eu/webinars
TIPPSS for Enabling & Securing our Increasingly Connected World – Trust, Iden... – PacificResearchPlatform
Securing Research Data: A Workshop on Emerging Practices in Computation and Storage for Sensitive Data – August 22, 2019
Florence Hudson, Founder and CEO, FDHint LLC
NSF Cybersecurity Center of Excellence, Indiana University - Special Advisor
Northeast Big Data Innovation Hub, Columbia University – Special Advisor
IEEE Engineering in Medicine and Biology Society – Standards Committee
This talk will provide a means to discuss the capture, integration and dissemination of data across large enterprises. We will show how data variety is continuing to grow, meaning new data sources are steadily becoming available for use in analysis. Data veracity is also important, since a large amount of data is fuzzy (uncertain) in nature. The ability to integrate these various data sources and provide improved capabilities to understand and use them is of increasing importance in today's pharma climate. We call this Reference Master Data Management (RMDM).
This talk will span an arc of data lifecycle management, beginning with instrument data, moving across to clinical studies, production, regulatory affairs and finally e-archiving (see Fig. 1). I will show how these systems can use a common semantics for modeling of important metadata, which can apply the FAIR principles of Findability, Accessibility, Interoperability and Reusability to a common “semantic hub” that can connect data sources of different varieties across the enterprise. ADF files, for example, use their Data Description layer to provide semantic metadata about file contents. Similarly, semantics can be used to describe clinical trials data, regulatory data, etc., through to archiving, for improved storage and search over long periods of time.
FEDERATED LEARNING FOR PRIVACY-PRESERVING: A REVIEW OF PII DATA ANALYSIS IN F... – ijseajournal
There has been tremendous growth in AI and machine learning, and these developments have driven considerable growth across FinTech. Cyber security is an essential part of this technological development: it keeps people protected and data safe, and new methods for achieving it have been integrated into the development of AI. AI's data analysis capabilities and cyber security functions have significantly strengthened privacy, and the ethics of data privacy is advocated across most FinTech regulations. Federated learning is a recently developed technique that meets these requirements: it enables the development of AI and machine learning models while preserving privacy during data analysis. This paper describes federated learning for confidentiality, covering the overall process of its development and some of the contributions it has achieved; it showcases the widespread application of federated learning in FinTech and explains why federated learning is essential for the field's overall growth.
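As a concrete illustration of the federated learning idea the paper reviews, here is a minimal federated-averaging (FedAvg) sketch in plain NumPy: each client runs a few local training steps of logistic regression on its own private shard, and only the weight vectors, never the raw (PII) rows, travel to the server for averaging. The data, model and hyperparameters are invented for the example and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few gradient steps of logistic regression on one client's data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))        # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)       # full-batch gradient step
    return w

# Four clients, each holding a private shard of simulated personal data.
true_w = np.array([2.0, -1.5, 0.5])
clients = []
for _ in range(4):
    X = rng.normal(size=(100, 3))
    y = (rng.random(100) < 1.0 / (1.0 + np.exp(-(X @ true_w)))).astype(float)
    clients.append((X, y))

# Federated averaging rounds: only model weights cross the client boundary.
w = np.zeros(3)
for _ in range(20):
    local_ws = [local_update(w, X, y) for X, y in clients]
    w = np.mean(local_ws, axis=0)                 # server-side aggregation

def accuracy(w):
    hits = sum((((X @ w) > 0).astype(float) == y).sum() for X, y in clients)
    return hits / sum(len(y) for _, y in clients)
```

The privacy-relevant design choice is in the round loop: the server sees only `local_ws`, the averaged parameters, so no client's records are ever centralized.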
Webinar presented live on May 11, 2017.
As data is increasingly accessed and shared across geographic boundaries, a growing web of conflicting laws and regulations dictate where data can be transferred, stored, and shared, and how it is protected. The Object Management Group® (OMG®) and the Cloud Standards Customer Council™ (CSCC™) recently completed a significant effort to analyze and document the challenges posed by data residency. Data residency issues result from the storage and movement of data and metadata across geographies and jurisdictions.
Attend this webinar to learn more about data residency:
• How it may impact users and providers of IT services (including but not limited to the cloud)
• The complex web of laws and regulations that govern this area
• The relevant aspects (and limitations) of current standards, and potential areas of improvement
• How to contribute to future work
Read the OMG's paper, Data Residency Challenges and Opportunities for Standardization: http://www.omg.org/data-residency/
Read the CSCC's edition of the paper, Data Residency Challenges: http://www.cloud-council.org/deliverables/data-residency-challenges.htm
Data accessibility and the role of informatics in predicting the biosphere – Alex Hardisty
The variety, distinctiveness and complexity of life – in other words, biodiversity, and by implication the ecosystems in which it is situated – is our life support system. It is absolutely essential and more important than almost anything else, yet it is typically taken for granted. Today's big societal challenges – food and water security, coping with environmental change, and aspects of human health – are beyond the abilities of any one individual or research group to solve. Solving them depends not only on collaboration to deliver the appropriate scientific evidence but increasingly on vast amounts of data from multiple sources (environmental, taxonomic, genomic and ecological) gathered by manual observation and automated sensors, digitisation, remote sensing, and genetic sequencing. In April 2012 we called the biodiversity and ecosystems research communities to arms to formulate a consensus view on establishing an infrastructure to improve the accessibility of the ever-increasing volumes of biological data. We published the whitepaper "A decadal view of biodiversity informatics: challenges and priorities", which has since been viewed more than 24,000 times. We envisage a shared and maintained multi-purpose network of computationally based processing services sitting on top of an open data domain. By open data domain we mean data that are accessible, i.e. published, registered and linked. BioVeL, pro-iBiosphere, ViBRANT and other FP7-funded projects have all explored aspects of this vision.
Beyond Privacy: Learning Data Ethics – European Big Data Community Forum 2019 – e-SIDES.eu
This is the slide-deck of the community event held on November 14, 2019 in Brussels, titled "Beyond Privacy: Learning Data Ethics - European Big Data Community Forum 2019". It includes the presentations given by the speakers.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
- State of global ICS asset and network exposure
- Sectoral targets and attacks, as well as the cost of ransom
- Global APT activity, AI usage, actor and tactic profiles, and implications
- Rise in volumes of AI-powered cyberattacks
- Major cyber events in 2024
- Malware and malicious payload trends
- Cyberattack types and targets
- Vulnerability exploit attempts on CVEs
- Attacks on countries – USA
- Expansion of bot farms – how, where, and why
- In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
- Why are attacks on smart factories rising?
- Cyber risk predictions
- Axis of attacks – Europe
- Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 4
Licensing Linked Data
1. Licensing Linked Data
Workshop
I-SEMANTICS 2013 Conference
September 6, 2013
Graz / Austria
Tassilo Pellegrini
firstname.lastname[at]fhstp.ac.at
http://de.slideshare.net/pellegrinit/licensing-linked-data
2. Introductory Statement: Challenges of Linked Data Licensing
• Licensing has been widely neglected in Linked Data R&D
• Data licensing is not a trivial issue – especially under conditions of dual licensing
• Requires technological knowledge
• Requires asset diversification awareness & strategy
• Depends on business strategy & models
• Is confronted with competing legal regimes (i.e. EU vs. USA)
• Data licensing shapes social relationships by granting and restricting access to resources.
• (Linked) Data licensing defines the access conditions under which transactions will be performed in the future (by machines).
• Exposing licensing information as Linked Data is the precondition for automated rights clearance & brokering systems.
Prof. Dr. Tassilo Pellegrini, University of Applied Sciences St. Pölten, Licensing Linked Data
3. Overview
1. The Economic Rationale of Linked Data
2. Creating Licensing Policies for Linked Data
3. Mapping Licenses to Business Models
4. Conclusion
5. Metadata Shift
Research Area | Pre-Web | Post-Web
Metadata Applications / Uses | -- | 16 %
Cataloging / Classification | 14 % | 15 %
Classifying Web Information | -- | 14 %
Interoperability | -- | 13 %
Machine Assisted Knowledge Organization | 14 % | 12 %
Education | 7 % | 7 %
Digital Preservation / Libraries | -- | 7 %
Thesauri Initiatives | 7 % | 5 %
Indexing / Abstracting | 29 % | 4 %
Organizing Corporate or Business Information | -- | 4 %
Librarians as Knowledge Organizers of the Web | -- | 2 %
Cognitive Models | 29 % | 1 %
Research Areas in Library and Information Science (Source: Saumure, Kristie; Shiri, Ali (2008). Knowledge organization trends in library and information studies: a preliminary comparison of pre- and post-web eras. In: Journal of Information Science, 34/5, 2008, p. 651–666)
The survey illustrates four trends:
1) the spectrum of research areas has broadened significantly;
2) certain areas have kept their status over the years (e.g. Cataloging & Classification or Machine Assisted Knowledge Organization);
3) new areas of research have entered the discipline (e.g. Metadata Applications & Uses, Classifying Web Information, Interoperability Issues) and others have declined or dissolved into other areas;
4) metadata issues have significantly increased in importance in terms of the quantity of papers that explicitly and implicitly deal with corresponding issues.
6. Metadata as a Network Good
[Figure: economic relevance plotted against information load, contrasting content assets with metadata assets]
"The Value of Metadata rises as the product of the log of the corpus size and the log of the size of the user community increases." (Kenneth Haase, 2004) – cf. Metcalfe's Law
Source: Haase, Kenneth (2004). Context for Semantic Metadata. In: MM'04, October 10–16, 2004, New York, New York, USA. ACM
Price Waterhouse Coopers (2009). Technology Forecast: Spinning a Web of Data. Spring 2009
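Haase's observation can be written compactly; the symbols $C$ (corpus) and $U$ (user community) are my notation, not Haase's:

```latex
V_{\text{metadata}} \;\propto\; \log |C| \cdot \log |U|
```

By comparison, Metcalfe's law for a network of $n$ nodes is usually stated as $V \propto n^2$, so metadata value grows much more slowly, but it still compounds with both corpus and community size.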
7. Data in the Content Value Chain
Content Acquisition: harvesting, storage & integration of internal or external data sources for purposes like Content Pooling
Content Editing: semantic analysis, adaptation & linking of data for purposes like Content Enrichment
Content Bundling: contextualisation & personalisation of information products for purposes like Landing Pages, Dossiers or Customized Delivery
Content Distribution: provision of machine-readable & semantically interoperable data & metadata via APIs or Endpoints
Content Consumption: improved findability, navigability & visualization on top of semantic metadata via Semantic Search & Recommendation Engines
Pellegrini, Tassilo (2012). Semantic Metadata in the News Production Process. Achievements and Challenges. In: Lugmayr, Artur; Franssila, Heljä; Paavilainen, Janne;
Kärkkäinen, Hannu (Eds). Proceeding of the 16th International Academic MindTrek Conference 2012, Tampere / Finland. ACM SIGMM, p. 125-133
8. Data Traffic Patterns
Source: Andreas Blumauer, Semantic Web Company, 2011
10. Licenses on the LOD Cloud – State of the Art
License | Number of Datasets
License Not Specified | 251
Creative Commons Attribution | 135
Creative Commons CCZero | 72
Creative Commons Attribution Share-Alike | 71
Creative Commons Non-Commercial (Any) | 49
Other (Attribution) | 38
UK Open Government Licence (OGL) | 36
Open Data Commons Open Database License (ODbL) | 28
Open Data Commons Public Domain Dedication and Licence (PDDL) | 27
Other (Not Open) | 26
Other (Open) | 25
Other (Public Domain) | 25
Open Data Commons Attribution License | 14
GNU Free Documentation License | 9
Other (Non-Commercial) | 9
ukcrown-withrights | 6
W3C | 1
apache | 1
gpl-2.0 | 1
gpl-3.0 | 1
Licenses on the LOD Cloud (Source: Pellegrini & Ermilov 2013 … to appear)
1) Licensing has long been neglected, but awareness is rising
2) High heterogeneity of licenses (CC, ODC, GPL, Apache, individual licenses …)
3) Insufficient / inappropriate protection of intellectual assets (not all asset types are covered)
4) The "meaning" of the various licenses stays implicit (not machine-readable), a source of errors & legal uncertainty
A community discussion & standardization process is required to nurture a licensing culture for Linked Data
See also Prateek et al. (2013): There is no money in LOD (http://knoesis.wright.edu/faculty/pascal/pub/nomoneylod.pdf)
11. Why Linked Data Licensing Matters
• Data is an intellectual asset and can be protected by intellectual property rights
• Licenses secure (y)our property rights – for private and public purposes!
• Licenses create a secure business environment
• Licenses are an efficient means to diversify business models
• Dual licensing can be used to extend traditional copyright, allowing data to be reused, shared and consumed for purposes not originally intended
12. Protecting Data as Intellectual Property
Legal Protection Instruments:
Linked Data Assets | Copyright | Database Right | Unfair Practice | Patents
Instance Data | Case by Case | yes | yes | Case by Case
Metadata | Case by Case | yes | yes | Case by Case
Ontology | yes | yes | yes | Case by Case
Content | yes | no | yes | no
(Services) | yes | no | yes | yes
(Technology) | yes | no | yes | yes
Pellegrini, Tassilo (2012). Semantic Metadata in the News Production Process. Achievements and Challenges. In: Lugmayr, Artur; Franssila, Heljä; Paavilainen, Janne; Kärkkäinen, Hannu (Eds). Proceeding of
the 16th International Academic MindTrek Conference 2012, Tampere / Finland. ACM SIGMM, p. 125-133
Legend:
Copyright … protects the originality of creative works.
Database Right … protects the investment made in compiling a database, even when this does not involve the 'creative' aspect that is reflected by copyright.
Unfair Practices Act … protects against fraud, misrepresentation, and oppressive or unconscionable acts or practices by businesses.
Patents … protect a novel solution to a specific technological problem.
13. Components of a Linked Data Licensing Policy
A Linked Data licensing policy should consist of three components: a machine-readable statement about content-related assets (copyright), a machine-readable statement about database-related assets (database right) and a human-readable Community Norm.
• Herein the contents of a linked dataset, comprising its terms, definitions and ontological structure, are protected by copyright (or Creative Commons).
• The underlying database, comprising all independent elements and works that are arranged in a systematic or methodological way and are accessible by electronic or other means, is protected by database right (or Open Data Commons).
• The Community Norm explicitly defines the expectations of the rights holder towards "good conduct" when a dataset is being utilized.
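As a minimal sketch, such a three-part policy could be published in Turtle as follows; the dataset URIs, the norms page and the choice of licenses are hypothetical, and rdfs:seeAlso merely stands in for whichever property a publisher uses to point at its human-readable Community Norm:

```turtle
@prefix cc:   <http://creativecommons.org/ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

# 1) Copyright statement for the content-related assets (here: CC-BY 3.0)
ex:myDataset cc:license <http://creativecommons.org/licenses/by/3.0/> .

# 2) Database-right statement for the underlying database (here: ODC-ODbL 1.0)
ex:myDatasetDump cc:license <http://opendatacommons.org/licenses/odbl/1.0/> .

# 3) Pointer to the human-readable Community Norm (hypothetical page)
ex:myDataset rdfs:seeAlso <http://example.org/community-norm.html> .
```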
14. Benefits & Limitations of traditional Copyright / Database Right
• Benefits:
• Easy to handle: rights are usually granted automatically at the moment of publication
• Internationally established institutions & experience of conduct (legal affairs, trials etc.)
• Strong property rights are often the foundation of established business models
• Limitations:
• Very restrictive – not suitable to generate network effects or open innovation
• Regional differences in legal issues (USA vs. Europe)
• Costly & risky to diversify the IPR strategy (i.e. error-prone process, learning curves, fears to "let go")
• Hard to enforce
15. Alternative Protection Instruments I: Creative Commons
Creative Commons is an extension to copyright which allows various degrees of freedom to repurpose content via granularly defined constraints. The various licenses can be ordered within a hierarchy of restrictions depending on the usage rights and associated permissions granted by the specific license.
• Benefits:
• Enables fine-granular expression of usage rights
• Allows diversification of creation & distribution of assets
• Allows diversification of business models
• Contributes to the public domain
• Limitations:
• Complex to handle
• Might interfere with established business models
• Requires cultural change
• Hard to enforce
16. Alternative Protection Instruments II: Open Data Commons
Open Data Commons is an extension of the Database Right and works analogously to Creative Commons. The various licenses can be ordered within a hierarchy of restrictions depending on the usage rights and associated permissions granted by the specific license.
• Benefits:
• Enables fine-granular expression of usage rights
• Allows diversification of creation & distribution of assets
• Allows diversification of business models
• Contributes to the public domain
• Limitations:
• Very new instrument – work in progress / little experience
• Might interfere with established business models
• Requires cultural change
• Hard to enforce
17. Community Norm I
• Besides the licensing information expressed by Copyright / Creative Commons and Database Right / Open Data Commons, a so-called Community Norm is the third component of a Linked Data licensing policy.
• A Community Norm is basically a human-readable recommendation of how the data should be used, managed and structured as intended by the data provider. It should provide administrative information (e.g. creator, publisher, license and rights), structural information about the dataset (e.g. version number, quantity of attributes, types of relations) and recommendations for interlinking (e.g. preferred vocabulary to secure semantic consistency).
• Community norms can differ widely in depth and complexity.
18. Community Norm II: Examples
http://www.embeddedmetadata.org/embedded-metatdata-manifesto.php
19. Rights Expression Languages I: ODRL
• Rights Expression Languages are used to express usage rights about a digital asset in a machine-readable way.
• A prominent example is ODRL (Open Digital Rights Language), an XML vocabulary to express rights, rules, and conditions – including permissions, prohibitions, obligations, and assertions – for interacting with online content. See: http://www.w3.org/community/odrl/
• ODRL utilizes an Entity-Attribute-Value model to express a policy about rights and restrictions associated with a digital artefact.
• BUT: ODRL does not provide a licensing attribute. This must be added by referring to other vocabularies like CCREL.
• There are several ways to provide the licensing information:
• as an annotation of the HTML document using RDFa,
• as a complementary document, which reflects the information on the page for machines (RDF/XML, N3, Turtle or other notation),
• as a public SPARQL endpoint, which can be queried by applications and users,
• as a dump file.
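To illustrate the "complementary document" option, an ODRL policy in Turtle might look as follows. This is a sketch using class and property names from the ODRL 2 vocabulary (odrl:Set, odrl:permission, odrl:duty); the asset and policy URIs are hypothetical:

```turtle
@prefix odrl: <http://www.w3.org/ns/odrl/2/> .
@prefix ex:   <http://example.org/> .

# Hypothetical policy: ex:myDataset may be distributed and reproduced,
# provided the user fulfils the duty to attribute the rights holder.
ex:policy01 a odrl:Set ;
    odrl:permission [
        odrl:target ex:myDataset ;
        odrl:action odrl:distribute , odrl:reproduce ;
        odrl:duty [ odrl:action odrl:attribute ]
    ] .
```

Note the Entity-Attribute-Value pattern mentioned above: each permission is a blank node whose attributes (target, action, duty) carry the values of the rule.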
20. Rights Expression Languages II: CCREL
• The Creative Commons community has developed CCREL (Creative Commons Rights Expression Language) to represent the various CC licenses in a machine-readable format. See http://www.w3.org/Submission/CCREL/ or http://creativecommons.org/schema.rdf
• CCREL complements the ODRL vocabulary. It provides a condensed and hierarchically ordered set of properties that define the actions allowed with certain licenses. These properties can be seamlessly integrated into the ODRL vocabulary and allow fine-granular usage policies and constraints to be defined for a certain asset.
• A combination of ODRL and CCREL is not obligatory. The semantic expressivity of CCREL is sufficient to simply annotate existing assets with licensing information for automated processing. But in the case of very complex and differentiated usage scenarios a combination of ODRL and CCREL is recommended, as ODRL provides the necessary semantic expressivity to define fine-granular usage policies associated with a certain asset that go beyond the simple explication of licensing information, e.g. for various user groups or stakeholders.
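One way such a combination could look is sketched below; placing a cc:license statement next to an ODRL permission is an illustrative pattern rather than a normative one, and all resource names (dataset, policy, partner group) are hypothetical:

```turtle
@prefix odrl: <http://www.w3.org/ns/odrl/2/> .
@prefix cc:   <http://creativecommons.org/ns#> .
@prefix ex:   <http://example.org/> .

# CCREL names the general license of the asset ...
ex:myDataset cc:license <http://creativecommons.org/licenses/by/3.0/> .

# ... while ODRL expresses a usage policy beyond the license itself,
# e.g. distribution granted to a specific (hypothetical) partner group.
ex:policy02 a odrl:Set ;
    odrl:permission [
        odrl:target   ex:myDataset ;
        odrl:action   odrl:distribute ;
        odrl:assignee ex:partnerGroup
    ] .
```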
21. Rights Expression Languages III: CCREL Examples
• One RDF triple is enough to attach license information to the work, given that the license URI is dereferenceable and described by the RDF vocabulary provided by Creative Commons. Here is a basic example of how the CC-BY license can be attached to the asset (ex:myImage):
@prefix ex: <http://example.org/>.
@prefix cc: <http://creativecommons.org/ns#>.
ex:myImage cc:license <http://creativecommons.org/licenses/by/3.0/> .
• Such an RDF document usually complements an asset (an image in our case) on a web page, where the licensing information should be represented in a human-readable fashion (i.e. with HTML). Via the RDF link an application can attain the information necessary for telling its user how this asset can be processed.
22. 1 @prefix xml: <http://www.w3.org/XML/1998/namespace>.
2 @prefix cc: <http://creativecommons.org/ns#>.
3 @prefix foaf: <http://xmlns.com/foaf/0.1/>.
4 @prefix dc: <http://purl.org/dc/elements/1.1/>.
5 @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>.
6 @prefix dcq: <http://purl.org/dc/terms/>.
7 <http://creativecommons.org/licenses/by/3.0/> cc:legalcode
<http://creativecommons.org/licenses/by/3.0/legalcode>;
8 cc:licenseClass <http://creativecommons.org/license/>;
9 cc:permits cc:DerivativeWorks,
10 cc:Distribution,
11 cc:Reproduction;
12 cc:requires cc:Attribution,
13 cc:Notice;
14 dc:creator <http://creativecommons.org>;
15 dc:identifier "by";
16 dc:title "${Attribution} 3.0 ${Unported}"@i18n,
...
108 dcq:hasVersion "3.0";
109 a cc:License;
110 foaf:logo <http://i.creativecommons.org/l/by/3.0/80x15.png>,
111 <http://i.creativecommons.org/l/by/3.0/88x31.png>.
Rights Expression Languages IV: CCREL Examples
• Each RDF license includes the necessary information encoded in RDF, such as what is allowed and what is prohibited. For example, the CC-BY 3.0 license used in the example is represented as follows:
• The code of the CC-BY license defines its URI, legal code, title and other attributes.
• The most important properties of this license are stated on lines 9 - 13: an asset under this license can be distributed, reproduced and derived from (cc:permits) if notice and attribution are provided (cc:requires).
23. Rights Expression Languages V: ODC Examples
• In contrast to Creative Commons, which has provided CCREL as a machine-readable language to express licensing information, ODC licenses are available as plain text only and are thus not easily processable by machines.
• But as ODC shares several attributes and characteristics with CC, it is possible and reasonable to apply attributes from the CCREL vocabulary.
• Below you see an example of how to combine ODC licensing information with CCREL expressions (lines 7 - 11). Herein the description of the license inside the dataset about a database is the same as in the previous CCREL example.
1 @prefix xml: <http://www.w3.org/XML/1998/namespace>.
2 @prefix cc: <http://creativecommons.org/ns#>.
3 @prefix foaf: <http://xmlns.com/foaf/0.1/>.
4 @prefix dc: <http://purl.org/dc/elements/1.1/>.
5 @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>.
6 @prefix dcq: <http://purl.org/dc/terms/>.
7 @prefix ex: <http://example.org/>.
8 ex:myDatabase
9 cc:attributionName "Name of the author"^^xsd:string;
10 cc:attributionURL <http://firstname.lastname.me/>;
11 cc:license <http://opendatacommons.org/licenses/by/1.0/>.
12 <http://creativecommons.org/licenses/by/3.0/> cc:legalcode
<http://creativecommons.org/licenses/by/3.0/legalcode>;
13 cc:licenseClass <http://creativecommons.org/license/>;
14 cc:permits cc:DerivativeWorks,
15 cc:Distribution,
16 cc:Reproduction;
17 cc:requires cc:Attribution,
18 cc:Notice;
19 dc:creator <http://creativecommons.org>;
20 dc:identifier "by";
21 dc:title "${Attribution} 3.0 ${Unported}"@i18n,
...
113 dcq:hasVersion "3.0";
114 a cc:License;
115 foaf:logo <http://i.creativecommons.org/l/by/3.0/80x15.png>,
116 <http://i.creativecommons.org/l/by/3.0/88x31.png>.
25. Linked Data Business Cube
[Figure: the Linked Data Business Cube spans three dimensions: Linked Data Assets (Instance Data, Metadata, Ontology, Content, Services, Technology), Stakeholders, and Revenue Model (Subsidies, Subscription, Advertising, Certification, Affiliate Program, Value Add, Traffic / SEO, Branding).]
Revenue Model Legend:
Subscription: Selling data & services access
Advertising: Sell paid placements / advertisements inside data feeds & services
Certification: Charge for reviews, verification, compliance checks, quality assurance
Affiliate Program: Charge for affiliate links within data feeds or services
Value Add: Utilizing Linked Data to enhance data sets & services
Traffic / SEO: Utilizing Linked Data to improve findability & generate traffic
Branding: Provide data sets, vocabs & ontologies to shape market & fuel data-driven applications
Subsidies: Public / non-profit funding & regulatory publishing policies
(Adapted from Brinker (2010): http://chiefmartec.com/2010/01/the-8th-linked-data-business-model/)
Stakeholder Legend:
Internal … within a company // Partners … between strategic partners // B2B … Business to Business // B2G … Business to Government // B2C … Business to Customer // C2C … Customer to Customer // Co2Co … Community to Community
29. Conclusion: Challenges of Linked Data Licensing
• Linked Data licensing is technologically simple, but business-wise complex.
• Linked Data licensing is a context-sensitive issue and requires a good understanding of the intersections of technology, law and business development:
• Assets & stakeholders
• Markets & resources
• Regulatory & legal conditions
• Technology & infrastructure
• Linked Data licensing challenges traditional business models & culture … it can be considered a "radical innovation"
• FUTURE: Linked Licensing Data will bring about new applications & services for rights clearance, publishing & billing purposes ... high transformation potential for e-commerce & procurement!