This document discusses the development of a research object authoring tool as part of the FAIR4CURES project. The tool will allow researchers to bundle different types of digital research outputs, such as datasets, software, and workflows, into structured research objects. It will integrate with the Seven Bridges platform and the Mendeley Data repository to register objects with globally unique identifiers and expose them in standard formats such as JSON-LD. The goal is to advance the FAIR data principles and make research outputs more findable, accessible, interoperable and reusable through the use of structured research objects.
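To make the idea of a "structured research object exposed as JSON-LD" concrete, here is a minimal sketch of how such a bundle description could be built and serialized. The vocabulary (schema.org terms), the identifier, and the file names are illustrative assumptions, not the actual FAIR4CURES schema.

```python
import json

# Build a hypothetical research object description: a root dataset that
# bundles heterogeneous outputs (data, workflow) as its parts.
def build_research_object(name, parts):
    return {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": name,
        "identifier": "https://example.org/id/ro-0001",  # hypothetical PID
        "hasPart": [{"@type": "CreativeWork", "name": p} for p in parts],
    }

ro = build_research_object("Cohort study bundle",
                           ["input-data.csv", "analysis-workflow.cwl"])
print(json.dumps(ro, indent=2))
```

The point is only the shape: one machine-readable document that names the bundle, gives it an identifier, and enumerates its parts.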
A FAIR Approach to Publishing and Sharing Machine Learning Models – Ben Blaiszik
While there has been a significant increase in the amount of machine learning research across various domains of science, the processes for publishing the results and making the resulting models and code available for reuse have been lacking. In this talk, we discuss how the FAIR data principles apply to machine learning models and how the Data and Learning Hub for Science (DLHub) can help make models more easily discoverable and usable in common scientific workflows. Visit https://www.dlhub.org for more information.
The Web of Linked Open Data, or LOD, is the most significant achievement of the Semantic Web to date. Initially proposed by Tim Berners-Lee in a seminal paper published in Scientific American in 2001, the Semantic Web envisions a web where software agents can interact with large volumes of structured, easy-to-process data. Only now do users have at their disposal the first mature results of this vision. Among them, and probably the most significant, are the various LOD initiatives and projects that publish open data in standard formats such as RDF.
This presentation provides an overview and comparison of different LOD initiatives in the area of patent information, and analyses potential opportunities for building new information services based on widely available datasets of patent information. The analysis is based on interviews conducted with innovation agents and on a review of the professional literature and current implementations.
LOD opportunities are not restricted to information aggregators; they also extend to end users and innovation agents who must face the difficulties of dealing with large amounts of data. In both cases, the opportunities offered by LOD need to be assessed, as LOD has become a standard, universal method for distributing, sharing and accessing data.
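The abstract above mentions publishing open data in standard formats such as RDF. As a small illustration, here is a sketch of emitting RDF statements in N-Triples, one of the standard RDF serializations used by LOD datasets. The URIs and the patent example are purely illustrative and not taken from any of the initiatives discussed.

```python
# Emit one N-Triples statement: subject and predicate are URIs; the
# object is either a URI or a quoted literal.
def triple(s, p, o):
    obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
    return f"<{s}> <{p}> {obj} ."

lines = [
    triple("http://example.org/patent/EP000001",
           "http://purl.org/dc/terms/title",
           "Example patent title"),
    triple("http://example.org/patent/EP000001",
           "http://purl.org/dc/terms/creator",
           "http://example.org/inventor/42"),
]
print("\n".join(lines))
```

Because every statement is a self-describing subject-predicate-object triple, datasets from different providers can be merged without prior schema agreement, which is what makes LOD attractive for aggregators.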
Leveraging Open Source Technologies to Enable Scientific Archiving and Discovery; Steve Hughes, NASA; Data Publication Repositories
The 2nd Research Data Access and Preservation (RDAP) Summit
An ASIS&T Summit
March 31-April 1, 2011 Denver, CO
In cooperation with the Coalition for Networked Information
http://asist.org/Conferences/RDAP11/index.html
Persistent Identifiers in EUDAT services | www.eudat.eu | EUDAT
The EUDAT data domain handles registered data. Each digital object should have a persistent identifier. This persistent identifier is used for: replica identification; identification of the repository of record (in the case of replication); querying of additional information; checksum (time stamped)...
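The roles listed above can be sketched as a simple record attached to a persistent identifier: replica locations, the repository of record, and a time-stamped checksum. The field names and PID syntax below are assumptions for illustration, not EUDAT's actual record format.

```python
import hashlib
import time

# Hypothetical PID registration: attach provenance and integrity
# information to a persistent identifier.
def register_pid(pid, data, repository, replicas):
    return {
        "pid": pid,
        "repository_of_record": repository,   # where the master copy lives
        "replicas": replicas,                 # identify the replica copies
        "checksum": hashlib.sha256(data).hexdigest(),
        "checksum_timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ",
                                            time.gmtime()),
    }

record = register_pid("21.T00000/abc", b"object bytes",
                      "repo-a", ["repo-b", "repo-c"])
```

Resolving the PID then answers the questions the slide lists: which copy is which, who holds the record of reference, and whether the bytes still match the registered checksum.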
FAIR Workflows: A step closer to the Scientific Paper of the Future – dgarijo
Keynote presented at the Computational and Autonomous Workflows workshop (CAW-2021) at the Oak Ridge National Laboratory. The keynote gives an overview of the different aspects to take into account when aiming to create FAIR workflows and associated resources.
Tutorial at DCMI conference in Seoul, 2019-09-25, by Tom Baker, Joachim Neubert and Andra Waagmeester
Rendered HTML version: https://jneubert.github.io/wd-dcmi2019/#/
Leverage DSpace for an enterprise, mission critical platform – Andrea Bollini
Conference: Open Repository, Indianapolis, 8-12 June 2015
Presenters: Andrea Bollini, Michele Mennielli
Cineca, Italy
We would like to share some useful tips with the DSpace community, starting with how to embed DSpace into a larger IT ecosystem that can add value to the information managed. We will then show how publication data in DSpace - enriched through proper use of the authority framework - can be combined with information coming from the HR system. Thanks to this, the system can provide rich and detailed reports and analysis through a business intelligence solution based on Pentaho's open-source Mondrian OLAP data integration tools.
We will also present other use cases related to managing publication information for reporting purposes: the publication record has an extended lifecycle compared to the one in a basic IR; the system load is much bigger, especially for writes, since researchers need to be able to make changes to enrich data when new requirements come from the government or the university research office; and data quality requires the ability to make distributed changes to a publication even after the conclusion of a validation workflow.
Finally, we intend to present our direct experience and the challenges we faced in making DSpace easily and rapidly deployable to more than 60 sites.
Preparing your data for sharing and publishing – Varsha Khodiyar
Talk given as part of the MRC Cognition and Brain Sciences Unit Open Science Day on 20th November 2018, University of Cambridge (https://www.eventbrite.co.uk/e/open-science-day-at-the-mrc-cbu-tickets-50363553745)
RO-Crate: A framework for packaging research products into FAIR Research Objects – Carole Goble
RO-Crate: A framework for packaging research products into FAIR Research Objects presented to Research Data Alliance RDA Data Fabric/GEDE FAIR Digital Object meeting. 2021-02-25
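The core of an RO-Crate is a directory containing a `ro-crate-metadata.json` file whose root dataset lists the packaged files. The sketch below follows the RO-Crate 1.1 conventions; the file names and dataset name are illustrative, and real crates carry considerably richer metadata.

```python
import json

# Minimal shape of ro-crate-metadata.json: a metadata descriptor entity,
# a root dataset, and one entity per packaged file.
crate = {
    "@context": "https://w3id.org/ro/crate/1.1/context",
    "@graph": [
        {"@id": "ro-crate-metadata.json",
         "@type": "CreativeWork",
         "about": {"@id": "./"},
         "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"}},
        {"@id": "./",
         "@type": "Dataset",
         "name": "Example workflow run",
         "hasPart": [{"@id": "workflow.cwl"}, {"@id": "results.csv"}]},
        {"@id": "workflow.cwl", "@type": "File"},
        {"@id": "results.csv", "@type": "File"},
    ],
}
print(json.dumps(crate, indent=2))
```

Because the metadata file is plain JSON-LD sitting next to the payload files, a crate remains readable as an ordinary folder while still being machine-interpretable as a FAIR Research Object.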
DSpace-CRIS: a CRIS enhanced repository platform – Andrea Bollini
International Conference on Economics and Business Information, 19-20 April 2016, Berlin
This presentation introduces version 5.5.0 of the DSpace-CRIS extension. With this extension you can capture the full picture of the research activities conducted in your institution and their context. It enables you to showcase the experts, facilities, services and much more to attract funding, facilitate collaborations and curate the scientific reputation of your institution.
A Big Picture in Research Data Management – Carole Goble
A personal view of the big picture in Research Data Management, given at GFBio - de.NBI Summer School 2018 Riding the Data Life Cycle! Braunschweig Integrated Centre of Systems Biology (BRICS), 03 - 07 September 2018
German Conference on Bioinformatics 2021
https://gcb2021.de/
FAIR Computational Workflows
Computational workflows capture precise descriptions of the steps and data dependencies needed to carry out computational data pipelines, analysis and simulations in many areas of Science, including the Life Sciences. The use of computational workflows to manage these multi-step computational processes has accelerated in the past few years driven by the need for scalable data processing, the exchange of processing know-how, and the desire for more reproducible (or at least transparent) and quality assured processing methods. The SARS-CoV-2 pandemic has significantly highlighted the value of workflows.
This increased interest in workflows has been matched by the number of workflow management systems available to scientists (Galaxy, Snakemake, Nextflow and 270+ more) and the number of workflow services like registries and monitors. There is also recognition that workflows are first-class, publishable Research Objects just as data are. They deserve their own FAIR (Findable, Accessible, Interoperable, Reusable) principles and services that cater for their dual roles as explicit method description and software method execution [1]. To promote long-term usability and uptake by the scientific community, workflows (as well as the tools that integrate them) should become FAIR+R(eproducible), and citable so that authors' credit is attributed fairly and accurately.
The work on improving the FAIRness of workflows has already started and a whole ecosystem of tools, guidelines and best practices has been under development to reduce the time needed to adapt, reuse and extend existing scientific workflows. An example is the EOSC-Life Cluster of 13 European Biomedical Research Infrastructures which is developing a FAIR Workflow Collaboratory based on the ELIXIR Research Infrastructure for Life Science Data Tools ecosystem. While there are many tools for addressing different aspects of FAIR workflows, many challenges remain for describing, annotating, and exposing scientific workflows so that they can be found, understood and reused by other scientists.
This keynote will explore the FAIR principles for computational workflows in the Life Sciences, using the EOSC-Life Workflow Collaboratory as an example.
[1] Carole Goble, Sarah Cohen-Boulakia, Stian Soiland-Reyes, Daniel Garijo, Yolanda Gil, Michael R. Crusoe, Kristian Peters, and Daniel Schober. FAIR Computational Workflows. Data Intelligence 2020 2:1-2, 108-121. https://doi.org/10.1162/dint_a_00033
The Information Workbench - Linked Data and Semantic Wikis in the Enterprise – Peter Haase
The Information Workbench is a platform for Linked Data applications in the enterprise. Targeting the full life-cycle of Linked Data applications, it facilitates the integration and processing of Linked Data following a Data-as-a-Service paradigm.
In this talk we present how we use Semantic Wiki technologies in the Information Workbench for the development of user interfaces for interacting with the Linked Data. The user interface can be easily customized using a large set of widgets for data integration, interactive visualization, exploration and analytics, as well as the collaborative acquisition and authoring of Linked Data. The talk will feature a live demo illustrating an example application, a Conference Explorer integrating data about the SMWCon conference, publications and social media.
We will also present solutions and applications of the Information Workbench in a variety of other domains, including the Life Sciences and Data Center Management.
The presentation for the W3C Semantic Web in Health Care and Life Sciences community group by Slava Tykhonov, DANS-KNAW, the Royal Netherlands Academy of Arts and Sciences (October 2020). The recording is available https://www.youtube.com/watch?v=G9oiyNM_RHc
DataCite – Bridging the gap and helping to find, access and reuse data – Herb... – OpenAIRE
OpenAIRE Interoperability Workshop (8 Feb. 2013).
DataCite – Bridging the gap and helping to find, access and reuse data – Herbert Gruttemeier, INIST-CNRS
Research Data (and Software) Management at Imperial: (Everything you need to ... – Sarah Anna Stewart
A presentation on research data management tools, workflows and best practices at Imperial College London with a focus on software management. Presented at the 2017 session of the HPC Summer School (Dept. of Computing).
Access the world’s research outputs through the CORE API – Matteo Cancellieri
Slides for the webinar: Access the world’s research outputs through the CORE API, 13th January 2022.
Link to the webinar video: https://youtu.be/acRLJNpq4W4
In this webinar, we present our new CORE APIv3.
Presenters Petr Knoth and Matteo Cancellieri walk you through the new features.
At a glance the new APIv3 offers:
- An extended model of the CORE resources to link different versions of a paper.
- Support for medium-size datasets collection.
- Improved analytical tools.
- User management made easier.
- Better documentation.
- A gallery to kick start your journey with the API.
The webinar also contains a quick demo showing the API features and tries to answer the question "Did research stop during COVID?"
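As a rough sketch of what calling such an API looks like, the snippet below composes a search request URL for the CORE APIv3. The base URL and parameter names follow the public documentation as of the webinar, but treat them as assumptions and check the current CORE docs before use; a real client would also send an API key.

```python
from urllib.parse import urlencode

# Compose a CORE APIv3 works-search URL (no network call is made here).
def core_search_url(query, limit=10):
    base = "https://api.core.ac.uk/v3/search/works"
    return f"{base}?{urlencode({'q': query, 'limit': limit})}"

url = core_search_url("covid research output", limit=5)
```

The extended resource model mentioned above means each returned work can link the repository, publisher, and preprint versions of the same paper, so deduplication happens server-side rather than in your client code.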
DSP3B: DSpace Interest Group 3B: DSpace-CRIS Workshop · 11/Jun/2015: 3:30pm-5:00pm · Location: Regency E
DSpace-CRIS Workshop
Andrea Bollini, Luigi Andrea Pascarelli, Michele Mennielli, David Palmer
Cineca, Italy; Hong Kong University
The 90-minute workshop will introduce attendees to the latest version of the DSpace-CRIS module, covering its functional and technical aspects.
DSpace-CRIS is an additional open-source module for the DSpace platform. It extends the DSpace data model, providing the ability to manage, collect and expose data about any entities of the research domain, such as people, organizational units, projects, grants, awards, patents, publications, and so on. Before OR2015 a new version of the system will be released, aligned with the new DSpace 5.0 release. The new version contains, among other things, important enhancements to its integration with ORCID.
The DSpace-CRIS extensible data model will be explained in depth, through examples and discussion with participants.
Other main topics are DSpace-CRIS "components", management of relationships and network analysis functionalities.
At the end of the workshop, participants will be able to:
- understand the DSpace-CRIS data model
- evaluate if DSpace-CRIS fits the requirements of their institution
- use the DSpace-CRIS User Interface
- change the default configuration, adapting it to a specific data model.
Application of recently developed FAIR metrics to the ELIXIR Core Data Resources – Pistoia Alliance
The FAIR (Findable, Accessible, Interoperable and Reusable) principles aim to maximize the discovery and reuse of digital resources. Using recently developed software and metrics to assess FAIRness and supported through an ELIXIR Implementation Study, Michel worked with a subset of ELIXIR Core Data Resources to apply these technologies. In this webinar, he will discuss their approach, findings, and lessons learned towards the understanding and promotion of the FAIR principles.
FAIR Workflows and Research Objects get a Workout – Carole Goble
So, you want to build a pan-national digital space for bioscience data and methods? That works with a bunch of pre-existing data repositories and processing platforms? So you can share FAIR workflows and move them between services? Package them up with data and other stuff (or just package up data for that matter)? How? WorkflowHub (https://workflowhub.eu) and RO-Crate Research Objects (https://www.researchobject.org/ro-crate) that’s how! A step towards FAIR Digital Objects gets a workout.
Presented at DataVerse Community Meeting 2021
Talk at the World Science Festival at Columbia, June 2, 2017: session on Big Data and Physics: http://www.worldsciencefestival.com/programs/big-data-future-physics/
Data Repositories: Recommendation, Certification and Models for Cost Recovery – Anita de Waard
Talk at NITRD Workshop "Measuring the Impact of Digital Repositories" February 28 – March 1, 2017 https://www.nitrd.gov/nitrdgroups/index.php?title=DigitalRepositories
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
JMeter webinar - integration with InfluxDB and Grafana – RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
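Under the hood, a JMeter backend listener ships each sample to InfluxDB using InfluxDB's line protocol, which Grafana then queries for its dashboards. The sketch below formats one such data point; the measurement, tag, and field names are illustrative rather than JMeter's exact listener output.

```python
# Format one InfluxDB line-protocol point:
#   measurement,tag1=v1,tag2=v2 field1=v1,field2=v2 timestamp_ns
def to_line_protocol(measurement, tags, fields, ts_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "jmeter",
    {"transaction": "login", "status": "ok"},   # indexed dimensions
    {"count": 42, "avg": 187.5},                # measured values
    1700000000000000000)                        # nanosecond timestamp
```

Tags are indexed (good for filtering dashboards by transaction or status), while fields carry the actual measurements, which is why response times go in fields rather than tags.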
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence gathering facilities spread across more than 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... – Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes hard work. It takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality – Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
PHP Frameworks: I want to break free (IPC Berlin 2024) – Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We then held a lovely workshop with the participants, exploring different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I asked myself, as an “infrastructure container Kubernetes guy”: how does this fancy AI technology get managed from an infrastructure operations view? Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I want to give an overview of the infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
CNI 2018: A Research Object Authoring Tool for the Data Commons
1. ELSEVIER | The Research Object Authoring Tool --- CNI 2018
FAIR4CURES
A Research Object Authoring Tool for the Data Commons
December 11, 2018
Anita de Waard (she, her)
VP Research Collaborations
2. Overview:
1. The NIH Data Commons: a very short introduction
2. The FAIR4CURES Project
3. A Global Unique Identifier Broker
4. Research Objects: a very very short introduction
5. Building a Research Object Authoring Tool on Mendeley Data
3. Overview:
The NIH Data Commons Pilot Phase aims to provide a marketplace for tools, data and workflows, based on existing technologies of commercial and academic platforms that strive to embody the FAIR Data principles.
4. Data Commons Overview:
Goal of the project:
1. Advance the policies and protocols for accessing human subjects data
2. Support global identification, indexing and searching of available data sets
3. Provide a collection of computational pipelines that can be applied to data sets
4. Utilize standards to globally identify and access data sets, tools and workflows
5. Create policies for data citation, reuse and reproducibility
6. Enable researchers to port their own data and workflows into the cloud
Project structure:
• DCPPC research groups are addressing important Key Capabilities =>
• The Commons will be composed of four stacks, incorporating products from the KCs
Final output:
• Data from three large NIH Databases will be available through all of these systems
• Users can securely access data within all stacks, on multiple cloud providers
• Users have access to a basic set of applications that run the same way on all stacks.
https://public.nihdatacommons.us/ExecutiveSummary_4YP/
Key Capabilities:
1: FAIR Guidelines & Metrics
2: Global Unique IDs for FAIR Digital Objects
3: Open Standard APIs
4: Cloud Agnostic Architecture Framework
5: Workspaces for Computation
6: Research Ethics, Privacy, and Security
7: Indexing and Search
5. Data Commons Guiding Principles:
• 1. Identifiers for data: Develop and implement an interoperable global unique identifier system for digital objects.
• 2. Data access: Develop and implement authentication and authorization policies and protocols for controlled access to digital objects and derivatives.
• 3. Findability: Enable search and indexing of digital objects and data sets.
• 4. Software stacks: The Commons will encompass multiple robust and sustainable software stacks implementing Commons standards and systems.
• 5. Data use, standards: All tools will be built using standard application interfaces.
• 6. Use cases: The Commons will develop and utilize an extensive use case library.
• 7. Community: The Commons is developed through intense community engagement and support across multiple levels of expertise.
• 8. Community: Governance, membership, and coordination will be established with and through the community.
• 9. Evaluation methods and metrics: We plan a culture of frequent release of products, with small iterations, routine evaluation and redesign.
• 10. FAIR guidelines and metrics: Once FAIR metrics and rubrics are defined, these will be used to measure the level of “FAIRness” of repositories, datasets, and other digital objects.
https://public.nihdatacommons.us/executive-summary/
6. The FAIR4CURES Collaboration:
Team Xenon – four partner organisations
Findable, Accessible, Interoperable, Reusable; Collaborative, Usable, Reproducible, Extendable, Scalable
Index 3 datasets:
• Trans-omics for Precision Medicine (TOPMed)
• Genotype Tissue Expression (GTEx)
• Model Organisms Database (MODs)
7. The FAIR4CURES System:
8. Global Unique Identifier Broker:
• Identifiers for hosted data files within TOPMed studies, the GTEx dataset, and MODs
• A feature for researchers to register identifiers for their derived data files on the platform, making the content public and searchable
• Selecting the types of identifiers to support in the Data Commons ecosystem, and the required identifier metadata
• Open source tool, connected to the Seven Bridges Platform
• Also accessible via GitHub/SmartAPI
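As a rough illustration of the client side of "registering an identifier", the sketch below builds and validates a registration payload before it would be sent to a broker. All field names and the payload shape are assumptions for illustration; this is not the actual FAIR4CURES broker API (the SmartAPI listing describes that).

```python
# Hypothetical sketch: prepare a registration request for a GUID broker.
# Field names ("title", "checksum", etc.) are illustrative assumptions,
# not the real broker schema.

REQUIRED_FIELDS = {"title", "creator", "checksum", "access_url"}

def build_registration_payload(metadata: dict, object_type: str = "Dataset") -> dict:
    """Validate minimal metadata and wrap it in a registration payload."""
    missing = REQUIRED_FIELDS - metadata.keys()
    if missing:
        raise ValueError(f"missing required metadata fields: {sorted(missing)}")
    return {
        "type": object_type,     # e.g. Dataset, Software, Workflow
        "visibility": "public",  # registration makes the content searchable
        "metadata": metadata,
    }

payload = build_registration_payload({
    "title": "Derived variant calls",
    "creator": "A. Researcher",
    "checksum": "sha256:abc123",
    "access_url": "https://example.org/files/42",
})
print(payload["type"])  # Dataset
```

Validating required metadata up front mirrors the slide's point that the broker must settle on "the required identifier metadata" before objects can be registered.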
9. Digital Object Types Identified following the KC2 Metadata Spec:
| Seven Bridges Object Type | DataCite Resource Type | Proposed Schema.Org CreativeWork Type | Supported Relationships | Notes |
|---|---|---|---|---|
| File | Dataset | Dataset | Source Of a Task (input file); Derived From a Task (output file); Part Of a Collection | One (or more) files packaged with metadata as a dataset |
| App (Tool) | Software | SoftwareSourceCode | Part Of a Task, Collection or Workflow | Same as dataset, but the file is source code |
| App (Workflow) | Workflow | SoftwareSourceCode (?) | Has Part of Software | An aggregation of Tools (Software). The file is a CWL definition describing how the tools are chained. |
| Task | Collection | Collection | Composition of Files and Apps (Tools or Workflows) | An aggregation of Apps (either tools or workflows), plus files (input & output), plus a record of all the settings used for each App. |
| Collection (Study) | Collection | Collection | Composition of any object | An aggregation of heterogeneous objects for the purpose of publishing. |
https://docs.google.com/document/d/1FD3aXr_uHnPy-YrFhQhuXET73tBVxu7F_Q5uS9TPUZs/edit
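The type columns of the table above are straightforward to encode as a lookup. The sketch below is an illustrative structure, not code from the project, capturing the Seven Bridges → DataCite → schema.org mapping:

```python
# Sketch: the KC2 type mapping from the table above as a lookup.
# Values are (DataCite resource type, proposed schema.org CreativeWork type).
TYPE_MAP = {
    "File":               ("Dataset",    "Dataset"),
    "App (Tool)":         ("Software",   "SoftwareSourceCode"),
    "App (Workflow)":     ("Workflow",   "SoftwareSourceCode"),
    "Task":               ("Collection", "Collection"),
    "Collection (Study)": ("Collection", "Collection"),
}

def schema_org_type(sb_type: str) -> str:
    """Return the proposed schema.org type for a Seven Bridges object type."""
    return TYPE_MAP[sb_type][1]

print(schema_org_type("Task"))  # Collection
```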
10. Seven Bridges Data Publication Concept
Requirements Analysis:
1. Landing page URL including GUID
2. URL for page where file can be accessed (downloaded)
3. Metadata for object
4. Reference to the Task (zero or one) that this dataset was Derived From
5. Reference to the Task(s) (zero, one or more) that this dataset is the Source Of
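A minimal record satisfying the five requirements above might look like the following sketch; the class and field names are hypothetical, not the actual Seven Bridges data model.

```python
# Hypothetical sketch of a dataset publication record covering the five
# requirements; names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DatasetPublication:
    landing_page_url: str                 # 1. landing page URL including the GUID
    download_url: str                     # 2. URL where the file can be accessed
    metadata: dict                        # 3. metadata for the object
    derived_from: Optional[str] = None    # 4. zero or one Task this was Derived From
    source_of: List[str] = field(default_factory=list)  # 5. Tasks this is the Source Of

pub = DatasetPublication(
    landing_page_url="https://example.org/guid/minid:xyz",
    download_url="https://example.org/files/xyz",
    metadata={"title": "Aligned reads"},
    derived_from="task:align-001",
)
print(pub.derived_from)  # task:align-001
```

The `Optional`/`List` distinction encodes the cardinalities from the list: at most one producing Task, but any number of consuming Tasks.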
11. Seven Bridges Workflow Configuration (CWL)
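The slide showed a screenshot of a workflow configuration. As a stand-in, here is a minimal hand-written CWL workflow with hypothetical step and file names, illustrating only the general shape of such a definition:

```yaml
# Minimal illustrative CWL workflow (not the one shown on the slide);
# the step name and align-tool.cwl are hypothetical.
cwlVersion: v1.0
class: Workflow
inputs:
  reads: File
outputs:
  alignments:
    type: File
    outputSource: align/output
steps:
  align:
    run: align-tool.cwl   # hypothetical CommandLineTool definition
    in:
      input_reads: reads
    out: [output]
```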
12. What are Research Objects?
A standards-based metadata framework for logically and physically bundling resources with their context (http://researchobject.org). Key components:
• Identification: locate things regardless of where they are held
• Aggregates: link things together
• Annotations: describe things and their relationships
• Container: package content and links (Zip files, BagIt, Docker images)
13. Research Objects and BDBags:
Research Objects can be used to capture outputs in a wide range of scopes.
• Profiles help define the shape and form of a research object.
• A profile defines the general purpose of that type of Research Object: a format (e.g. Research Object Bundle), an expectation of what kinds of resources it should contain, and a link to any specific vocabularies that should be used in its annotations.
Applications of Research Objects include BDBags (Big Data Bags):
• In digital libraries, preservation of source artifacts commonly uses the BagIt format for archive serialization, capturing digital resources such as audio recordings, document scans and their transcriptions, provenance and annotations.
• The Research Object BagIt archive is a profile for describing a BagIt archive and its content as a Research Object, structuring the metadata and relating the captured resources.
• The NIH-funded Big Data for Discovery Science (BDDS) project captures Big Data bags (BDBags) of large, complex datasets from genomics workflows (https://doi.org/10.1109/BigData.2016.7840618).
• A key aspect of BDBag is the ability to use Minimal Viable Identifiers (minids) to reference potentially large data sources held in multiple remote repositories, effectively making a “Big Data” Research Object for large-scale workflows (https://doi.org/10.1101/268755).
• A bag of bags (minid:b9vx04) is a metadata skeleton that can be completed with tools like bdbag to download the big data.
• The bags’ Research Object manifests can be consumed independently, linking to the remote resources.
http://www.researchobject.org/scopes/
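To make the "metadata skeleton" idea concrete, the sketch below lays out the base BagIt structure that BDBags build on: a bag declaration plus a fetch.txt that references payload files held remotely, so the bag itself stays small. This illustrates only the plain BagIt layout, not what the real bdbag tool produces (which adds Research Object manifests and minid support on top).

```python
# Sketch of a skeleton BagIt bag whose payload is held remotely via fetch.txt.
from pathlib import Path
import tempfile

def make_skeleton_bag(root: Path, remote_files: dict) -> None:
    """remote_files maps data/<path> -> (url, length_in_bytes)."""
    (root / "data").mkdir(parents=True)  # payload directory, empty for now
    (root / "bagit.txt").write_text(
        "BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n"
    )
    # fetch.txt: one "<url> <length> <path>" line per remote file; the files
    # are only downloaded when the bag is later "materialized".
    lines = [f"{url} {length} {path}" for path, (url, length) in remote_files.items()]
    (root / "fetch.txt").write_text("\n".join(lines) + "\n")

bag_dir = Path(tempfile.mkdtemp()) / "mybag"
make_skeleton_bag(bag_dir, {
    "data/reads.bam": ("https://example.org/reads.bam", 1234567),
})
print((bag_dir / "fetch.txt").read_text().strip())
```

Because the heavy data stays behind URLs, such a bag can be exchanged and cited cheaply, which is exactly the "bag of bags" pattern described above.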
14. Moving from Datasets to Research Objects in Mendeley Data:
In the Mendeley Data Repository, datasets are lists of files (stored in our S3 bucket) with metadata packaging (e.g. Title, Description, Categories, License) and a persistent identifier (DOI).
We will introduce:
• Collections as an aggregation of Datasets. Similar to a Dataset, BUT the contents are other datasets, not files.
• Software and Workflow as different types of Digital Objects. Similar to a Dataset, BUT the files are source code or workflow specifications (e.g. CWL), and the metadata properties may differ slightly.
This forms the foundation for Research Objects, which comprise:
• Collections or aggregations of different types of Digital Objects (not just datasets)
• References to digital objects on other platforms, based on standard identifiers (e.g. DOIs or ARKs)
• A manifest that lists and describes the contents of the Research Object
• Exposure in JSON-LD
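For illustration, a Research Object manifest exposed in JSON-LD might look like the sketch below, using the schema.org types from the KC2 mapping; the identifiers and names are placeholders, not real DOIs.

```json
{
  "@context": "https://schema.org/",
  "@type": "Collection",
  "@id": "https://doi.org/10.xxxx/example-ro",
  "name": "Example Research Object",
  "hasPart": [
    { "@type": "Dataset", "@id": "https://doi.org/10.xxxx/example-dataset" },
    { "@type": "SoftwareSourceCode", "@id": "https://doi.org/10.xxxx/example-code" }
  ]
}
```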
15. In Summary:
Phase 1 Pilot Project (Apr – Sep 2018): the GUID Broker (API only), used by the Seven Bridges FAIR4CURES Platform to:
• Register Datasets (data files)
• Register Software objects
• Register Workflow objects
• Register a Collection as a list of digital objects (data, software, workflows)
Objective 1 – support “Task”-type Research Objects on the Seven Bridges platform.
Phase 2 Project (Oct 2018 – 2019): the Research Object Composer, which uses the GUID Broker and is re-used by the Mendeley Data Platform to:
• Add annotations and relationships to a collection, to describe a research object
• Serialise the Research Object in a standard format based on BDBags and RO standards
Objective 2 – support configurable Research Objects on the Mendeley Data platform.
http://smart-api.info/ui/bf9abe9c17c9c78c432832382ef9e16a#/
16. Acknowledgements:
• This work is supported by the NIH Data Commons Pilot Phase under Research Opportunity Announcement (ROA) RM-17-026, https://commonfund.nih.gov/commons/:
• NIH Data Commons – 1 OT3 OD025463-01
• NHLBI STAGE Project – 1 OT3 HL142478-01
• The FAIR4CURES Project is led by SevenBridges (Alison Leaf, Brandi Davis-Dusenbury and Sarper Avcil)
• We partner in the Project with Repositive UK and the US Dept of Veterans Affairs
• The metadata standards development was done by KC2, led by Team Sodium (esp. Merce Crosas, Tim Clark, Trisha Cruse and Martin Fenner)
• The Research Objects Authoring Tool work is led by the University of Manchester, who pioneered work on Research Objects (Stian Soiland-Reyes and Carole Goble)
• The Mendeley Data team has built the GUID Broker Prototype (Gabriel Oscares, Gareth Harvey)