1. The document discusses creating learning health systems (LHS) that use data to continually improve healthcare delivery and establish a social contract to share data for public benefit.
2. It proposes connected health cities (CHC) pilots in four regions of Northern England to test LHS approaches and share knowledge between regions.
3. The goals are to optimize care delivery using data, engage the public on data sharing, and accelerate digital health business growth in Northern England.
Research Data Management - A DIY Guide: What? Why? How? - Sarah Anna Stewart
A tutorial on research data management workflows and digital tools presented to Ph.D. students and researchers at the Computational Methods Hub at Imperial College London.
A presentation given at the RECODE workshop on 25th September 2014. It covers what is happening in terms of opening up access to research data at the University of Glasgow and via the Digital Curation Centre. The RECODE project is developing policy recommendations for open access to research data in Europe - http://recodeproject.eu
Kirsty Meddings, Crossref. Research funders are increasingly setting the agenda for scholarly communications: mandating certain editorial practices such as open peer review and data sharing, elevating the importance of preprints, and advocating for better use of existing community-run infrastructures like those maintained by Crossref, DataCite, and ORCID. This session will explain what’s new and next for the funding and infrastructure space, introducing a key project around persistent identifiers and metadata for grants, including use of facilities. Whilst the scholarly community has adopted standard persistent identifiers (PIDs) — for people (e.g. ORCID), content (e.g. DOIs, PMCIDs), and soon organizations (ROR.community) including funders (the Funder Registry) — the record of the award is not captured in a consistent way across funders worldwide. Nor are award records easily linked to the literature, the researchers, or the institutions. Harmonizing grant identifiers under one common universal schema will not just help people better measure reach and return, but will offer researchers a system that works more smoothly and accurately. In this session, hear from funding organizations about what they want, learn about the findings from the grant identifier pilot, and discover the next steps for this initiative.
Shaping future research environments: digital challenges and opportunities - Jisc
This document discusses the launch of a new Jisc digital research community aimed at connecting, sharing, and collaborating to address issues in the research process. It provides background on the community development process, which started in mid-2020. The community council identified several priority issues, and potential community activities such as webinars, discussions, case studies, and guides are discussed. An introductory webinar covered the purpose and priorities of the community and gathered input from participants, via a Menti polling tool, on desired activities and ways to get involved. Next steps include making recordings available and continuing discussions to shape the emerging community.
RDM and data sharing landscape: overview for Salford DCC training 20140522 - L Molloy
Research data management and data sharing: a brief overview of where we are in the UK right now and some main drivers and benefits. Prepared for a Digital Curation Centre training session at Salford University, 22 May 2014. Contains material from across DCC resources.
Preparing for the UK Research Data Registry and Discovery Service - Repository Fringe
The document discusses the UK Research Data Registry and Discovery Service (RDRDS) project. It provides an overview of the project's vision and progress, including participating data repositories in the initial pilot phase. It also discusses what participation in RDRDS means for data repositories, including requirements for metadata and options for syndicating metadata through harvesting. The goals of the second phase of the project are outlined as further defining use cases, evaluating platform options, and testing the system usability.
Research data and the ANDS agenda in Australia - Andrew Treloar
This document discusses research data and the agenda of the Australian National Data Service (ANDS) in Australia. ANDS was established in 2009 to enable Australian researchers to more easily publish, discover, access and reuse research data. It provides several national services and has funded over 200 projects. The document also outlines relevant national policies and ANDS's involvement in international organizations like the Research Data Alliance.
Closing plenary - John Wilkin and David Maguire - Jisc
Infrastructure for US research and scholarship
Speaker: John Wilkin, dean of libraries and university librarian at the University of Illinois, previous executive director, HathiTrust.
Efficient infrastructure for UK research
Speaker: David Maguire, vice-chancellor of the University of Greenwich and chair of Jisc.
Jisc and CNI conference, 6 July 2016
Opening up data – Jisc and CNI conference 10 July 2014 - Jisc
The document discusses research data management and open data. It notes that Creative Commons tools can be used to make data openly available, and have been successfully implemented in various disciplines. It also discusses requirements and guidelines from funders like NIH and NSF to share data. Trends in data sharing policies from journals in different fields over time are shown. Challenges to sharing research data are presented. The development of infrastructure to support open data is discussed.
Advocacy in Research Data Management. Session 3.2 of the RDMRose v3 materials.
The JISC-funded RDMRose project (June 2012-May 2013) was a collaboration between the libraries of the Universities of Leeds, Sheffield and York, together with the Information School at Sheffield, to provide an Open Educational Resource on Research Data Management for information professionals. The materials were revised between November 2014 and February 2015 for the consortium of North West Academic Libraries (NoWAL).
http://www.sheffield.ac.uk/is/research/projects/rdmrose
A summary of the outputs from the Organisational Identifiers Working Group, part of the Jisc CASRAI-UK pilot, in particular the report reviewing selected organisational IDs and development of use cases. Presented at Jornadas FCCN, Lisbon, Portugal 10th Feb 2015.
The document summarizes a seminar about eduroam(UK), a service that allows students and staff to access wireless networks when traveling. It discusses the steady growth in membership numbers, issues with the previous platform struggling with load, solutions implemented to improve performance, and plans for continued expansion and improvements to further enhance the user experience.
Research data spring: extending the OPD to cover RDM - Jisc RDM
The research data spring project "Extending the Organisational Profile Document to cover Research Data Management" slides for the third sandpit workshop. Project led by Joy Davidson from the Digital Curation Centre.
The document discusses Researcher Links workshops which are organized to foster partnerships between UK and Kazakhstan universities. The Newton Al-Farabi fund will sponsor workshops between October 2014 and March 2015, bringing together 15-20 UK researchers and 15-20 Kazakhstan researchers. The workshops are organized by two leading researchers from each country who apply on behalf of their universities. The coordinators can also propose additional mentors to pair with early career researchers. The workshops aim to create research partnerships, build professional networks, and provide career development opportunities for young scholars.
Knowledge Unlatched is a global library consortium that works with publishers to make scholarly books openly accessible. It aims to provide a sustainable path to open access for humanities and social sciences monographs. The consortium shares the fixed costs of publishing digital editions among member libraries. This lowers the risk for both libraries and publishers. Initially, Knowledge Unlatched will select a modest collection of 30-50 titles from publisher submissions for its first unlatching package in 2013-2014. It will then scale up by repeating the process with additional subject collections, titles, and publishers over time. The goal is to spread publishing costs across institutions to make open access a reality for scholarly books in a minimally disruptive way.
Certifying and Securing a Trusted Environment for Health Informatics Research... - Jisc
The document discusses certifying and securing a trusted environment for health informatics research data at the University of Dundee. It provides an overview of the Health Informatics Centre, its research data management platform, safe haven architecture, and ISO27001 certification. The platform standardizes data extraction and release and adds metadata and quality checks. The safe haven uses pseudonymized data, and virtual environments prevent data from leaving it. ISO27001 certification provides governance and reduces documentation through standardized information security practices.
British Oceanographic Data Centre's Published Data Library - Adam Leadbetter
The document outlines the objectives, design, and current status of the Published Data Library (PDL) system. The objectives are to deliver meaningful and discoverable data collections that are fixed, usable without additional context, and assured to be available long-term. The design assigns DOIs to datasets through DataCite, with DOIs resolving to HTML landing pages containing metadata and links to usage metadata and data. Currently, descriptive pages and an 8-dataset DOI catalogue are live, along with some DOI landing pages containing human and machine-readable metadata in HTML and RDFa formats. Future work includes developing a database backend and linking to other data repositories.
The Jisc Research Data Discovery Service Project aims to build a national service that enables discovery of UK research data and meets user requirements. Phase 2 will build on previous pilot work to lay foundations for the future delivery of the service, including developing use cases, agreeing metadata standards, and creating a business case. The project team is working with participating universities and data centres to ingest metadata and gather feedback to develop an effective solution.
This document summarizes a presentation about open science and higher education in Africa given by Jacqueline Nnam. It discusses the status of open science in African universities, challenges to open science implementation, and opportunities and priorities for promoting open science through RUFORUM. RUFORUM aims to encourage open publication of research and has an open access repository. Challenges include varying policies, lack of infrastructure, and incentives for researchers. Opportunities include data science education and harmonizing policies. Priorities are awareness, capacity building, infrastructure, standards, and piloting open data projects.
Supporting access: interventions that seek to improve the ways in which decision makers are able to access research based information.
Presentation by Faye Reagon, HSRC (South Africa), at the Locating the Power of the In-between conference, July 2008.
Northumbria University is working to implement a robust research data management (RDM) solution. It has engaged in several activities to assess current RDM practices and infrastructure needs, including interviews with grant holders, a survey of researchers, and workshops with the Digital Curation Centre. Through these workshops, the university used the RISE model to evaluate its capabilities for data ingest, access, preservation, and more across several potential repository platforms. This helped provide evidence to secure budget and staffing to pilot and roll out a new RDM system starting in 2018. The university aims to go to procurement in September 2017 after finalizing business requirements and an options appraisal.
This document provides guidance on developing research data management services at universities. It discusses 10 key steps: 1) Understanding current practices, 2) Deciding what services are needed, 3) Balancing the needs of stakeholders, 4) Securing input and buy-in, 5) Defining roles and responsibilities, 6) Positioning support appropriately, 7) Balancing internal and external provision, 8) Being agile and adaptable to change, 9) Linking systems to integrate services, and 10) Planning for long-term sustainability. The overall message is that developing effective RDM requires understanding user needs, engaging stakeholders, and continually adapting services.
This document discusses data management plans (DMPs), which are brief plans that define how research data will be created, documented, stored, shared, and preserved. DMPs are often required as part of grant applications. The document provides an overview of why DMPs are important, how they benefit researchers and institutions, and key aspects to address in a DMP such as data organization, stakeholders, and making data FAIR (findable, accessible, interoperable, and reusable). Examples of DMPs from real projects are also presented.
A presentation given on the Horizon 2020 open data pilot as part of a series of OpenAIRE webinars for Open Access week 2014 - http://www.fosteropenscience.eu/event/openaire-webinars-during-oa-week-2014
The document discusses guidelines and resources for open research data under Horizon 2020, including the Open Research Data pilot. It provides an overview of key guidelines and requirements, such as developing a data management plan, selecting which data to openly license and share, using standards for interoperability and metadata, depositing data in repositories, and finding discipline-specific infrastructure and support. Resources highlighted include guidelines on licensing, the EUDAT licensing tool, Zenodo and other repositories, metadata standards directories, and training from FOSTER and OpenAIRE.
This document discusses the importance of research data management (RDM) initiatives for universities. It provides examples of how universities in the UK are developing RDM services and policies to support researchers in managing their data according to funder and legal requirements. This includes developing RDM roadmaps and strategies, guidance webpages, training programs, support for data management planning, data storage infrastructure, and institutional data repositories. National programs like the Digital Curation Centre and Jisc are helping to build universities' capabilities in RDM.
Challenges for research support - Sarah Jones, University of Glasgow, Digital... - Mari Tinnemans
This document provides guidance on developing research data management services at universities. It discusses 10 key points: 1) Understanding current research data practices, 2) Deciding what services are needed, 3) Balancing the needs of stakeholders, 4) Securing input and buy-in, 5) Defining roles and responsibilities, 6) Positioning support appropriately, 7) Balancing internal and external provision, 8) Being agile and adaptable to change, 9) Linking systems to integrate services, and 10) Planning for long-term sustainability. The overall message is that developing effective RDM requires understanding user needs, engaging stakeholders, and continually adapting services.
This document provides an overview of a webinar on digital curation and research data management for universities. The webinar covers an introduction to digital curation, the benefits and drivers for research data management, current initiatives in UK universities, and the role of libraries in supporting research data management. Libraries are increasingly involved in developing institutional policies, providing training, and advising researchers on writing data management plans and sharing data. The webinar highlights training opportunities for librarians to develop skills in research data management and digital curation.
What are other universities doing to support RDM? - Sarah Jones
This document discusses research data management (RDM) activities at other universities. It outlines common RDM activities such as establishing steering groups, developing policy and strategy, and delivering training. It provides examples of specific RDM initiatives at universities, including RDM services at the University of Bath and research data storage at the University of Bristol. The document emphasizes that developing comprehensive RDM services requires involvement from various stakeholders and support services across the university.
Presentation given by Sarah Jones at a seminar run by LSHTM on 6th November 2012. http://www.lshtm.ac.uk/newsevents/events/2012/11/developing-data-management-expertise-in-research---half-day-event
An overview of the LSHTM Research Data Management Policy, outlining the motivations for its introduction, obligations that need to be met and the support available
Rachel Bruce - UK research and data management: where are we now? - Jisc
The document discusses the state of research data management in UK universities. It finds that while areas like data cataloguing and access/storage systems are progressing, governance of data access/reuse and digital preservation/planning are lagging. Barriers to progress include low researcher priority, funding availability, and lack of staff/infrastructure. Gaps include defining responsibilities, standards, costs, and tools. Coordination and sharing resources across institutions is needed to help universities advance research data management.
Presentation given by Sarah Jones and Martin Donnelly outlining the UK RDM landscape, JISC MRD programmes, and DCC initiatives.
The presentation was given at Statistics New Zealand on 28th March, ANDS webinars on 29th & 30th March and Monash University on 2nd April 2012.
Research data management and the Digital Curation Centre - Martin Donnelly
Slides from a couple of webinars given while visiting ANDS in Canberra, Australia. (N.B. We also gave short talks at Statistics New Zealand and Monash University - the slides are more or less the same.)
Birgit Plietzsch, “RDM within research computing support”, SALCTG, June 2013 - SALCTG
An overview of Research Data Management: the research process from developing ideas to preservation of data; funder perspectives, the impact on the wider service, Data Asset Frameworks, preservation and access, and cost implications.
The document discusses a leaders conference on UK data management environments and support. It provides information on the current UK research data management policy environment, systems used, and challenges. It introduces Jisc's proposed Research Data Shared Service as a sector-wide approach to address these issues by providing a single, integrated solution for research data management across the UK. Key benefits identified include optimizing costs, growing the value of research data, and increasing compliance with funder requirements for data preservation and sharing. The development history and features of the proposed shared service are outlined.
Stuart Macdonald steps through the process of creating a robust data management plan for researchers. Presented at the European Association for Health Information and Libraries (EAHIL) 2015 workshop, Edinburgh, 11 June 2015.
The document provides information on creating a data management plan (DMP) for grant applications. It discusses what a DMP is, why they are important, and what funders require in a DMP. A DMP outlines how research data will be collected, documented, stored, shared, and preserved. The document recommends addressing six key themes in a DMP: data types and standards; ethics and intellectual property; data access, sharing and reuse; short-term storage and management; long-term preservation; and resourcing. Developing a strong DMP helps researchers manage data effectively and makes data available and reusable by others.
Stuart Macdonald reviews what researchers need to do to comply with the new EPSRC framework concerning the management and provision of access to publicly-funded research data. Presented at the Mobility, Mood and Place Research Committee Meeting workshop at the Edinburgh College of Art, 16 June, 2015.
Keynote presentation given at the Data Fellows 2023 workshop in Berlin on 22-23 June. The presentation gives examples of good communication to explain data management concepts, and shows how to use games and other forms of interactivity in training events.
Managing and sharing data: lessons from the European context – Sarah Jones
The document discusses a presentation given by Sarah Jones on managing and sharing data openly in the European context. The presentation covered topics such as research data management (RDM), FAIR data principles, open science, the European Open Science Cloud (EOSC), and how universities can support researchers in practicing open science. It provided overviews and definitions of these topics, discussed challenges to open data sharing, and offered practical advice on making data FAIR and open through activities like choosing a license, selecting a repository, and using appropriate file formats and metadata standards.
The EOSC Association conducted a survey to gather feedback on their Multi-Annual Roadmap (MAR), receiving 45 complete responses and 191 partial responses. The main themes from the 534 comments included the need for more clarity on terminology, emphasis on the role of national investment, and greater focus on business models and on funding research software engineers. Minor comments requested removing organization examples, clarifying the voluntary nature of EOSC, and reconsidering the visual identity. The analysis will be shared with the board and task forces to inform revisions to the MAR text for republication in mid-May.
The document provides an introduction to open science and the European Open Science Cloud (EOSC). It discusses the concepts of open access, open data, open methods, and FAIR data principles. It describes the EOSC as a federation of research infrastructures and services that aims to enable multidisciplinary discovery and use. Key benefits of the EOSC for researchers include access to more services, funding for compute resources, easier discovery of related data, and greater collaboration abilities.
The document summarizes the results of a consultation on the Multi-Annual Roadmap (MAR) for the European Open Science Cloud (EOSC). Over 45 people completed the survey and provided over 500 comments total. The comments covered priorities like engaging researchers, long-term data preservation, standards, and funding. The feedback will be used to update the MAR and align it with the upcoming Horizon Europe work program before publishing a new version in April.
The document provides an introduction to the European Open Science Cloud (EOSC). It defines key concepts like open science, FAIR data, and explains what EOSC is - a federated infrastructure to support open sharing and reuse of research outputs across disciplines. It outlines EOSC's goals like enabling multidisciplinary discovery and connecting previously disconnected research resources and data silos. Examples of current EOSC services and resources available via the EOSC Portal are also briefly described.
This document discusses the challenges facing the European Open Science Cloud (EOSC) and identifies actions that could help address those challenges. Some of the top challenges mentioned are that EOSC is still in the build phase and not yet functioning seamlessly for end users, it is extremely complex due to its multi-stakeholder, multi-country, and multi-disciplinary nature, and its governance was only recently established while its formation occurred organically through projects. Key priority actions identified include extensive testing and iteration based on user feedback, releasing small functionalities incrementally, continuing collaborative and consensus-driven work, and establishing an effective stakeholder forum. The document advocates for putting research community needs at the center and having the EOSC Association and its stakeholders take these actions forward together.
This document discusses the FAIR data principles and increasing adoption of FAIR. It begins by explaining the 15 FAIR principles for findable, accessible, interoperable and reusable data. It then discusses how adoption is increasing through funder requirements, the role of FAIR within EOSC, and related projects. However, it notes that most data is still not managed or shared according to FAIR principles due to barriers like time and effort required as well as lack of incentives and rewards. The document argues that both cultural and technical aspects must be addressed to fully implement FAIR.
Data Management Planning for researchers – Sarah Jones
This document provides information about creating a data management plan (DMP) for researchers. It begins by defining what a DMP is - a short plan that outlines what data will be created, how it will be managed and stored, and plans for sharing and preservation. It then discusses the common components of a DMP, including describing the data, standards and methodologies, ethics and intellectual property, data sharing plans, and preservation strategies. The document provides examples of DMP requirements and recommendations from funders. It offers tips for creating a good DMP, including thinking about the needs of future data re-users, consulting stakeholders, grounding plans in reality, and planning for sharing from the outset. Finally, it discusses tools and resources available to support data management planning.
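The DMP components listed above lend themselves to a structured, machine-readable representation. As a minimal hedged sketch (the field names and values below are illustrative inventions, not any official funder template or validated schema):

```python
import json

# Illustrative DMP skeleton only: the sections mirror the components
# discussed above (data description, standards, ethics/IP, sharing,
# preservation, resourcing); all names and figures are invented.
dmp = {
    "project": "Hypothetical soil-sensor study",
    "data_description": {
        "types": ["CSV time series", "field notebooks (scanned PDF)"],
        "volume_estimate_gb": 50,
    },
    "standards_and_metadata": {
        "metadata_standard": "Dublin Core",
        "formats": ["csv", "pdf/a"],
    },
    "ethics_and_ip": {"personal_data": False, "licence": "CC-BY-4.0"},
    "sharing": {"repository": "institutional repository", "embargo_months": 6},
    "preservation": {"retention_years": 10},
    "resourcing": {"data_steward_fte": 0.1},
}

print(json.dumps(dmp, indent=2))
```

Keeping the plan in a structured form like this makes it easy to validate, version, and compare against funder requirements programmatically.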
1) Europe has invested hugely in the European Open Science Cloud (EOSC) over recent years through various initiatives, reports, and projects.
2) EOSC aims to create a federated environment for open sharing and analysis of research data across borders and disciplines.
3) Sharing sensitive data on EOSC requires properly documenting, licensing, identifying, and anonymizing data while making it findable and accessible on repositories or secure services.
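Anonymisation, as mentioned in point 3, covers a spectrum of techniques. As a hedged illustration of one of the simplest (pseudonymisation via salted hashing — not an EOSC-endorsed procedure, and the function, field names, and records below are invented for the example):

```python
import hashlib
import secrets

def pseudonymise(records, key_field, salt=None):
    """Replace a direct identifier with a salted SHA-256 pseudonym.

    Note: pseudonymised data is still personal data under GDPR;
    genuine anonymisation needs further measures (aggregation,
    k-anonymity, suppression of quasi-identifiers, etc.).
    """
    salt = salt or secrets.token_hex(16)  # keep the salt secret, stored separately
    out = []
    for rec in records:
        rec = dict(rec)  # avoid mutating the caller's data
        digest = hashlib.sha256((salt + str(rec[key_field])).encode()).hexdigest()
        rec[key_field] = digest[:12]  # short pseudonym token
        out.append(rec)
    return out, salt

patients = [
    {"nhs_no": "943-476-5919", "age": 67},
    {"nhs_no": "943-476-5919", "age": 67},
    {"nhs_no": "401-023-2137", "age": 34},
]
pseudo, salt = pseudonymise(patients, "nhs_no")

# Identical inputs map to the same pseudonym, so record linkage
# within the dataset survives while the raw identifier does not.
assert pseudo[0]["nhs_no"] == pseudo[1]["nhs_no"]
assert pseudo[0]["nhs_no"] != "943-476-5919"
```

The salted hash keeps within-dataset linkage intact while preventing trivial reversal; whether that is sufficient for sharing on a given repository or secure service remains a case-by-case governance decision.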
Presentation given at the DMPonline 10 year anniversary week, reflecting on lessons learned developing the business model. See https://www.dcc.ac.uk/events/dmponline-10th-year-anniversary-celebration-week and #10yearsDMPonline
This document discusses best practices for supporting open science. It recommends adopting existing solutions where possible rather than developing new ones. It also suggests engaging with researchers, incentivizing open practices, allowing for innovation and failure, collaborating with peers, and keeping service delivery options open. The document concludes by inviting attendees to a workshop on delivering research data management services.
This document provides an overview of new features and updates to the DMPTuuli data management planning tool. Key points include: improvements to the user interface and sharing options; integration with ORCID and adding grant IDs; enhanced admin controls and template versioning; offering feedback on plans; and a usage dashboard and API improvements. Future planned features are also outlined such as conditional questions, custom domains, and integrations. Support resources and ways to connect with the developer are highlighted.
3. Summary page for each funder
http://www.dcc.ac.uk/resources/policy-and-legal/research-funding-policies/ahrc
4. Key differences in policies
• EPSRC does not want DMPs in grant applications
• Preservation periods range from 3 years to in perpetuity
– most funders ask for 10+ years
• ESRC and NERC support designated data centres
• ESRC and NERC may withhold the final grant payment if data aren't offered for deposit
• Cancer Research UK states explicitly that it will NOT provide additional funds for RDM
5. Ultimately funders expect:
• timely release of data
- once patents are filed or on (acceptance for) publication
• open data sharing
- minimal or no restrictions if possible
• preservation of data
- typically 10+ years for data of ‘long-term value’
See the RCUK Common Principles on data policy:
www.rcuk.ac.uk/research/Pages/DataPolicy.aspx
6. RCUK Common Principles in brief
1. Make data openly available where possible
2. Have policies & plans. Preserve data of long-term value
3. Metadata for discovery / reuse. Link to data from publications
4. Be mindful of legal, ethical and commercial constraints
5. Allow limited embargoes to protect the effort of creators
6. Acknowledge sources to recognise IP and abide by T&Cs
7. Ensure cost-effective use of public funds for RDM
http://www.rcuk.ac.uk/research/Pages/DataPolicy.aspx
7. Eligible costs
The RCUK Common Principles state that:
“It is appropriate to use public funds to support the management and sharing of publicly-funded research data.”
However, it is unclear exactly what costs can be included in grant applications and how. The DCC held an RDMF event with funders to discuss this.
8. RDMF: funding RDM
25th April 2013 at Aston University
www.dcc.ac.uk/events/research-data-management-forum-rdmf/rdmf-special-event-funding-research-data-management
Included a panel with representatives from BBSRC, EPSRC, NERC, MRC, STFC and the Wellcome Trust to answer 30 questions submitted by the audience.
Blog reports:
• A conversation with the funders:
http://www.dcc.ac.uk/blog/conversation-funders
• Funding RDM: https://research-computing.wp.st-andrews.ac.uk/2013/05/01/funding-rdm
• For which RDM activities will UK research funders pay?
http://mrdevidence.jiscinvolve.org/wp/2013/05/01/for-which-rdm-activities-will-uk-research-funders-pay
9. What RDM cost can be included?
Need to distinguish between the costs that are incurred during a project and those that arise afterwards.
• In-project (direct) costs:
– cover hardware, staff, expenses, the costs of preparing data & metadata...
• Post-project (largely indirect) costs:
– existing services should be used where possible
– where an institution is going to provide a data repository, costs should be met through full economic costing (FEC)
– outsourcing to a third party is also an option
Owing to its charity status, the Wellcome Trust in general only pays directly incurred costs.
10. How should costs be included?
• In-project costs should be included in the direct costs for a project
• Post-project costs could be direct (e.g. charges levied by data centres) but typically fall under indirect costs, as universities should provide infrastructure to support RDM
• The Justification of Resources should, where possible, separate out the following RDM cost elements:
– the cost of collecting data
– the cost of curating data
– the cost of analysing data
– the cost of preservation and sharing
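As a purely illustrative aid (every figure and category below is invented, not funder guidance), the cost separation above could be tabulated for a Justification of Resources like this:

```python
# Hypothetical RDM cost breakdown for a grant application; all figures invented.
rdm_costs = {
    "collecting data":          12000,  # e.g. fieldwork data capture (direct)
    "curating data":             4000,  # e.g. documentation & metadata (direct)
    "analysing data":            8000,  # e.g. research staff time (direct)
    "preservation and sharing":  3000,  # e.g. repository deposit (often indirect/FEC)
}

total = sum(rdm_costs.values())
for activity, cost in rdm_costs.items():
    print(f"{activity:<26} £{cost:>6,}  ({cost / total:.0%})")
print(f"{'total':<26} £{total:>6,}")
```

Separating the elements this way makes it easier for reviewers to see what is being charged to the grant and what the institution covers through indirect costs.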
11. Key messages
• Research data management is but one aspect of an institution's research governance and should not be regarded as an optional addition or something peripheral to it.
• DMPs should make clear what is provided and what activities are being charged against a grant: funders do not expect to pay for something twice.
• There is no rule of thumb for the proportion of a grant that may acceptably be spent on research data management. The cost of RDM is project-specific and depends entirely on the type of work.
• It may be possible to set up small research facilities to recover the cost of RDM (e.g. similar to provision of HPC), possibly as a cross-institutional service. However, clear added value needs to be shown to funders and the research community. A small research facility needs to be very close to the research: it is about creating highly specialised services.
12. Thanks – any questions?
DCC guidance, tools and case studies:
www.dcc.ac.uk/resources
Follow us on Twitter:
@digitalcuration and #ukdcc