At the 3rd RDA Plenary in Dublin, the Data in Context Interest Group discussed developing a common understanding of data context and lifecycles, reviewing several existing data lifecycle models and standards that address contextual metadata. Its goals are to provide an overview of relevant standardization work, prioritize requirements, and establish a working group to develop standardized profiles and to facilitate transformation between standards for representing data context. The group's initial deliverables, an overview of contextual standardization work and a prioritized list of requirements, will inform the establishment of that working group.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
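The approve/reject branch described above is a generic human-in-the-loop pattern. A minimal sketch in Python, where the function and event names are hypothetical placeholders, not actual Integration Service connector APIs:

```python
# Hypothetical sketch of the human-in-the-loop approval branch.
# `events` stands in for the side effects a real workflow would trigger
# (creating a Jira/Zendesk ticket, posting a Slack alert).

def handle_approval(decision: str, campaign_id: str, events: list) -> str:
    """Route a campaign based on the button a reviewer clicked."""
    if decision == "approve":
        # Approved: open a ticket for the marketing design team.
        events.append(f"ticket-created:{campaign_id}")
        return "ticket"
    elif decision == "reject":
        # Rejected: alert colleagues via a Slack message.
        events.append(f"slack-alert:{campaign_id}")
        return "alert"
    raise ValueError(f"unknown decision: {decision}")
```

The point of the pattern is that the workflow blocks on a human decision and only then branches into automated follow-up actions.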
2. Brief History
• 1st Plenary Gothenburg: Preparing a WG Proposal / Case Statement „Contextual Metadata“
• A lot of interest
• Revision of Initial Use Cases
• Use Cases as specific as possible
• Alignment with other WGs / Activities
• Four revised use cases:
– Researcher: Find data ..
– Manager: Indicate to funder
– Provenance: Allow to take segments from streamed data workflows
– Interoperability: Exchange of contextual metadata
• Rename Group to „Data in Context“
3. Data in Context IG Approach
• Lifecycle Approach
– Linear Sequence of Elements
– Cyclic Repetition of Elements
• Investigate Lifecycle Models
– DCC: Conceptualize; Create; Access; Use; Appraise; Select; Dispose; etc.
– DDI: Discovery & Planning; Initial Data Collection; etc.
– Research Lifecycle (Jisc): Research Process; Simulate Experiment; Manage Data; Analyse; etc.
– etc. ??
• Investigate contextually or subcontextually-aware standardization work
– OAIS; CASRAI; CERIF; VIVO; PROV; PREMIS; MARC; CKAN; DCAT; ISO; W3C; OMG; Research Objects; etc.
• Investigate / Prioritize Reusable Requirements
• Deliverables:
– M6: Overview of contextually-aware standardization work
– M12: Priority List of Requirements
• Goal:
– Set up of a Working Group
– Implementation of Standardized Profiles
• Long-term Goal:
– Automated Transformation Between Standards
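The long-term goal of automated transformation between standards amounts, at its simplest, to applying a field-level crosswalk. A minimal sketch, assuming invented field names rather than the real DCAT or CERIF schemas:

```python
# Illustrative crosswalk between two metadata profiles. The field names
# below are assumptions for illustration, not the actual DCAT/CERIF models.

CROSSWALK = {
    # source field -> target field
    "title": "cfName",
    "description": "cfDescription",
    "publisher": "cfOrganisationName",
}

def transform(record: dict, crosswalk: dict = CROSSWALK) -> dict:
    """Map a record from one profile to another; unmapped fields are
    collected under an 'unmapped' key so no context is silently lost."""
    out, unmapped = {}, {}
    for key, value in record.items():
        if key in crosswalk:
            out[crosswalk[key]] = value
        else:
            unmapped[key] = value
    if unmapped:
        out["unmapped"] = unmapped
    return out
```

Real transformations also need semantic alignment (units, vocabularies, cardinalities), which is exactly why the group proposes standardized profiles first.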
4. Collaboration / Exchange
• RDA Foundation and Terminology
• RDA Metadata Standards Directory WG
• RDA PID Information Types WG
• ICSU Open Metadata Catalogue and Knowledge
Networks WG
• RDA/WDS Workflows for Publishing Data IG
• RDA Data Description Registry Interoperability
• RDA Semantic Interoperability Activity
• RDA Metadata Interest Group
• Various W3C groups (LOD, SW....)
5. Requirements / Needs
• Stakeholders
• Data Producers
• Data Consumers
• Standardized Open Vocabularies
• Standardized Formal Data Profiles
• Standardized Formal Semantics
• Template: first steps taken with developing a template, which is applied to the lifecycle models on the following slides.
6. DCC – The Curation Lifecycle
Stakeholders
Data Producer
Data Consumer
Standardized Open Vocabularies
Standardized Formal Data Profiles
Standardized Formal Semantics
http://www.dcc.ac.uk/digital-curation/what-digital-curation
7. DDI Lifecycle
http://www.ddialliance.org/Specification/DDI-CV/
DDI Controlled Vocabularies
Analysis Unit; Character Set; Commonality Type; Lifecycle Event Type; Response Unit; Software Package; Summary Statistic Type; Time Method
Stakeholders
Data Producer
Data Consumer
Standardized Open Vocabularies
Standardized Formal Data Profiles
Standardized Formal Semantics
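One practical use of such controlled vocabularies is validating metadata values against the published term lists. A minimal sketch, using a tiny invented subset of terms rather than the full DDI vocabulary:

```python
# Hypothetical subset of a DDI-style controlled vocabulary; the real term
# lists are published by the DDI Alliance.
ANALYSIS_UNIT = {"Individual", "Household", "Organization", "Event"}

def validate_field(value: str, vocabulary: set) -> bool:
    """True if the value is a term from the controlled vocabulary."""
    return value in vocabulary
```

Rejecting free-text variants ("Persons" vs. "Individual") at ingest time is what makes contextual metadata comparable across producers.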
8. Data Assets Framework
Stakeholders
Data Producer
Data Consumer
Standardized Open Vocabularies
Standardized Formal Data Profiles
Standardized Formal Semantics
http://www.data-audit.eu/
9. Research Lifecycle
Stakeholders
Data Producer
Data Consumer
Standardized Open Vocabularies
Standardized Formal Data Profiles
Standardized Formal Semantics
http://www.jisc.ac.uk/whatwedo/campaigns/res3/jischelp.aspx
10. RDA Practical Policy WG
Policy Categories
Collection-based Policies: Integrity; Data Lifecycle Management; Data Staging; Federation; Description; Publication; Compliance; Data Management Plans; Access Control; Preservation; Provenance; Replication; Regulatory Management; Administrative; Assessment
Stakeholders
Data Producer
Data Consumer
Standardized Open Vocabularies
Standardized Formal Data Profiles
Standardized Formal Semantics
Src: Slide Extract Rainer Stotzka, Reagan Moore provided for „Data in Context“ session, RDA 3rd Plenary
11. Data Lifecycle
Stakeholders
Data Producer
Data Consumer
Standardized Open Vocabularies
Standardized Formal Data Profiles
Standardized Formal Semantics
DATA: Collaboration & Visualisation; Dissemination & Sharing; Archiving & Preserving; Analysis & Data Mining; Acquisition & Modeling
Src: Keynote Tony Hey at RDA 3rd Plenary
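The cyclic repetition of lifecycle elements mentioned under the IG approach can be modelled as a simple wrap-around sequence. A sketch using the stage names from this slide; the ordering chosen here is one plausible traversal of the cycle, not a claim from the source:

```python
# One plausible ordering of the lifecycle stages shown on the slide.
STAGES = [
    "Acquisition & Modeling",
    "Collaboration & Visualisation",
    "Analysis & Data Mining",
    "Dissemination & Sharing",
    "Archiving & Preserving",
]

def next_stage(current: str) -> str:
    """Return the stage following `current`, wrapping around the cycle."""
    i = STAGES.index(current)
    return STAGES[(i + 1) % len(STAGES)]
```

The wrap-around step is what distinguishes a cyclic lifecycle from a linear sequence of elements: archived data feeds new acquisition.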
12. Experimental Context, Publishing and
Research Objects
Proposal: Scientist submits application for beamtime
Approval: Facility committee approves application
Scheduling: Facility registers, trains, and schedules scientist’s visit
Experiment/Investigation: Scientist visits facility, runs experiment
Data storage: Raw data filtered and stored
Data analysis: Tools for processing made available
Record Publication: Subsequent publication registered with facility
Investigation as a first-class object
Src: Slide extract Brian Matthews, STFC provided for „Data in Context“ session, RDA 3rd Plenary
13. Liberalised Meta-Data is a Network
Citation
Coverage (Temporal, Spatial, Topic)
Use, Caveats, Lineage, Methods, and Licenses
Publisher
People
Institutions
RDI Outputs / Online Resources
Projects
Initiatives
Networks
Funders
Relationships are contributed by (1) meta-data mining, (2) information from websites conforming to schema, (3) social-media-type sites and VREs, (4) existing network contributions, (5) scraping existing websites, (6) ontologies and vocabularies (…)
Src: Slide Extract Wim Hugo, ICSU WDS provided for „Data in Context“ session, RDA 3rd Plenary
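Mechanism (2), harvesting relationships from websites conforming to a schema, could look roughly like the sketch below; the JSON-LD snippet and record names are invented for illustration:

```python
import json

# Invented sample of schema.org-style JSON-LD as a page might embed it.
SAMPLE_JSONLD = """{
  "@type": "Dataset",
  "name": "Ocean Temperatures 2013",
  "publisher": {"@type": "Organization", "name": "WDS Member Repository"},
  "funder": {"@type": "Organization", "name": "Example Funder"}
}"""

def extract_relationships(jsonld: str) -> list:
    """Return (subject, predicate, object) triples for nested entities."""
    doc = json.loads(jsonld)
    subject = doc.get("name", "?")
    triples = []
    for predicate, value in doc.items():
        # Nested objects with a name become edges in the metadata network.
        if isinstance(value, dict) and "name" in value:
            triples.append((subject, predicate, value["name"]))
    return triples
```

Triples like these are what link datasets, publishers, funders, and people into the network the slide describes.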
14. Etc.
• Data Curation Profiles (Purdue University)
• ODP Model (ISO Reference Model for Open Distributed Processing)
15. Standards
Jeffery et al. 2013: http://resources.metapress.com/pdf-preview.axd?code=vl5422n2u7112669&size=largest
• e.g.
• OAIS
• CASRAI
• CERIF
• VIVO
• PROV
• PREMIS
• MARC
• CKAN
• DCAT
• ISO
• W3C
• OMG
• ODP
• etc.
17. Agenda
Session 1: Thursday, March 27 - 15:30 - 17:00
• Introduction and Overview from Co-Chairs
• Contributions from RDA Members
– Data Publishing Workflows, DCC Data Profiles (Angus Whyte)
– Data Description Registry Interoperability (Amir Aryani)
– Long-tail Data IG, Data Publishing IG (Jochen Schirrwagen)
– WDS Knowledge Network activity (Wim Hugo)
– Experimental Context, Publishing and Research Objects (Brian Matthews)
– Reference Model Proposal (Yin Chen)
• Discussion
Note taking: Alessia Bardi, RDA Early Career Researchers Programme recipient.
18. Agenda
Session 2: Friday, March 28 - 11:00 – 12:30
• Recap and Overview from Co-Chairs
• Contributions from RDA Members
– Semantic Interoperability, (Gery Berg-Cross)
– Metadata WGs (Keith Jeffery, Rebecca Koskela)
– Practical Policy Sessions (Slides Reagan Moore)
• Discussion
Note taking: Alessia Bardi, RDA Early Career Researchers Programme recipient.
19. Rough Work Plan
• M6: Overview of contextually aware
standardization work
• M12: Priority List of Requirements
From there, set up an RDA Working Group: a requirements-driven Implementation of Standards WG plan.