The document summarizes the results of a consultation on the Multi-Annual Roadmap (MAR) for the European Open Science Cloud (EOSC). Over 45 people completed the survey and provided over 500 comments total. The comments covered priorities like engaging researchers, long-term data preservation, standards, and funding. The feedback will be used to update the MAR and align it with the upcoming Horizon Europe work program before publishing a new version in April.
The document provides an introduction to open science and the European Open Science Cloud (EOSC). It discusses the concepts of open access, open data, open methods, and FAIR data principles. It describes the EOSC as a federation of research infrastructures and services that aims to enable multidisciplinary discovery and use. Key benefits of the EOSC for researchers include access to more services, funding for compute resources, easier discovery of related data, and greater collaboration abilities.
This document discusses the FAIR data principles and increasing adoption of FAIR. It begins by explaining the 15 FAIR principles for findable, accessible, interoperable and reusable data. It then discusses how adoption is increasing through funder requirements, the role of FAIR within EOSC, and related projects. However, it notes that most data is still not managed or shared according to FAIR principles due to barriers like time and effort required as well as lack of incentives and rewards. The document argues that both cultural and technical aspects must be addressed to fully implement FAIR.
University of Liverpool Researcher KnowHow session presented by Judith Carr.
At the end of this session you will know what the FAIR data principles are, what is required and be in a position to think how these would relate to your research practice.
Essential Excel for Business Analysts and Consultants, Asen Gyczew
Excel is the first-choice tool of almost every business analyst and consultant. It may not be the fanciest or most sophisticated one, yet it is universally understood by everybody, especially your boss and your customers.
Excel is still a pretty advanced tool with countless features and functions. I mastered quite a lot of them during my studies and while working. After some time in consulting, however, I discovered that most of them are not that useful; some of them bring more problems than solutions. On top of that, some features we are taught at university are inflexible and pretty time-consuming. While working as a business analyst I developed my own set of Excel tricks: I learned how to make my analyses idiot-proof and extremely universal.
I will NOT teach you the whole of Excel, as that is simply not efficient (and frankly you don’t need it). This course is organized around the 80/20 rule: I want to teach you the most useful formulas (from a business analyst / consultant perspective) as fast as possible. I also want you to acquire, through the course, good habits in Excel that will save you loads of time.
If done properly, this course will transform you in one day into a pretty good business analyst who knows how to use Excel in a smart way. It is based on my 12 years of experience as a consultant in top consulting companies and as a Board Member responsible for strategy, improvement and turnarounds in the biggest companies I worked for in the FMCG, SMG and B2B sectors. On the basis of what you will find in this course I have trained over 100 business analysts who are now Investment Directors, Senior Analysts, Directors in consulting companies, Board Members, etc.
I teach step by step on the basis of Excel files attached to the course. To make the best of the course you should follow my steps and repeat what I do with the data after every lecture. Don’t move to the next lecture if you have not done what is shown in the lecture you have just gone through.
I assume that you know basic Excel, so basic features (e.g. how to write a formula in Excel) are not explained in this course. I concentrate on intermediate and advanced solutions and purposefully leave out some things that seem advanced yet later prove very inflexible and useless (e.g. naming variables). At the end, I will show four full-blown analyses in Excel that use the tricks shown in the lectures.
Attached to every lecture (in the additional resources) you will find the Excel file shown in that lecture, so as part of this course you also get a library of ready-made analyses that can, with certain modifications, be applied in your own work.
Presentation given at Macquarie University in support of the ARDC 'institutional role in the data commons' project on "Implementing FAIR: Standards in Research Data Management" https://ardc.edu.au/news/data-and-services-discovery-activities-successful-applicants/
An introduction to Microsoft Power BI, emphasising the usability of Power Query and how it is useful for the Excel population. A session delivered at Orion India Systems Pvt. Ltd.
This document summarizes key findings from surveys about researchers' data sharing practices and attitudes. It finds that while most researchers agree data should be shared, only a small percentage actually make their data openly available. Researchers typically share data through email, cloud services, or external drives rather than repositories. The document also discusses increasing emphasis on open and FAIR data in research funder policies, but notes researchers face barriers to compliance like unclear terminology, lack of skills and incentives, and confusion between open data and managed/FAIR data. It argues for engagement programs to help researchers better understand and participate in open scholarship.
This document provides an overview and introduction to Tableau. It outlines the basic steps for connecting to different data sources, building initial views, and creating dashboards. The document covers prerequisites, an introduction to the Tableau workspace, demo instructions for connecting to sample data files and modifying data connections, and includes lab exercises for readers to practice the concepts. The goal is to help readers understand the basics of visualizing and exploring data using Tableau.
Apache Atlas provides data governance capabilities for Hadoop including data classification, metadata management, and data lineage/provenance. It models metadata using a flexible type system and stores metadata in a property graph database for relationships and lineage queries. Key features include cross-component lineage mapping, reusable tagging policies for access control, and a business catalog to organize assets by common business terms.
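To make the catalog idea concrete, here is a minimal sketch of registering a Hive table entity through Atlas's v2 REST API. The host URL and helper names are assumptions for illustration; only the `hive_table` type name, the `qualifiedName` attribute, and the `/api/atlas/v2/entity` endpoint come from Atlas itself, and a real deployment would also require authentication.

```python
import json
import urllib.request

ATLAS_URL = "http://atlas-host:21000"  # hypothetical host; adjust for your deployment


def hive_table_entity(name, cluster="prod"):
    # Payload for Atlas's v2 entity API. 'hive_table' and its attributes
    # come from the built-in Hive metamodel; qualifiedName is the unique
    # key Atlas uses to deduplicate entities across registrations.
    return {
        "entity": {
            "typeName": "hive_table",
            "attributes": {
                "name": name,
                "qualifiedName": f"default.{name}@{cluster}",
            },
        }
    }


def register_table(name):
    # POST the entity definition to Atlas; authentication (basic auth,
    # Kerberos, ...) is deployment-specific and omitted here.
    req = urllib.request.Request(
        f"{ATLAS_URL}/api/atlas/v2/entity",
        data=json.dumps(hive_table_entity(name)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Once entities are registered this way, Atlas can attach classifications (tags) to them and answer lineage queries over the property graph that connects them.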
The document discusses the importance of metadata for archiving digital content and history. It describes how Jason Scott transformed from a "metadata skeptic" to a "metadata warrior" after his experiences rescuing data from Geocities. Proper metadata made the rescued data more useful, efficient to archive, and prevented duplication. The document advocates for taking a long-term view of digital content and using metadata to ensure information can be discovered and understood in the future.
Business Case Practitioners Forum: Business case - an overview, Stefan Sanchez, 22 April 2016
APM Benefits Management SIG
The APM Competence Framework describes the Business Case competence as “The ability to prepare, gain approval of, refine and update business cases that justify the initiation and/or continuation of change initiatives in terms of benefits, costs and risks.”
It further provides information on the required application and knowledge, such as strategic arguments, options appraisal, benefits and dis-benefits, commercial aspects, risk, timescales and whole-life costs.
Those who work in the public sector will be familiar with the HM Treasury Green Book: appraisal and evaluation in central government and the Five Case model.
Business cases should be understood as both a product and a process, with involvement from the right stakeholders in order to achieve the spending objectives and deliver benefits.
The inaugural Business Case Practitioners Forum (BCPF) brought together practitioners:
-To share business-case related knowledge, experience and good practice drawn from the public, private and third sectors
-To support members in improving the standards and consistency of business cases
-To create a supportive and professional network of business case practitioners
-To develop a business case practitioner ‘body of knowledge’
FAIR Data in trustworthy repositories: the basics, OpenAIRE
This video illustrates how certified digital repositories contribute to making and keeping research data findable, accessible, interoperable and reusable (FAIR). Trustworthy repositories support Open Access to data, as well as Restricted Access when necessary, and they offer support for metadata, sustainable and interoperable file formats, and persistent identifiers for future citation. Presented by Marjan Grootveld (DANS, OpenAIRE).
Main references
• Core Trust Seal for trustworthy digital repositories: https://www.coretrustseal.org/
• EUDAT FAIR checklist: https://doi.org/10.5281/zenodo.1065991
• European Commission’s Guidelines on FAIR data management: http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf
• FAIR data principles: www.force11.org/group/fairgroup/fairprinciples
• Overview of metadata standards and tools: https://rdamsc.dcc.ac.uk/
Data Security at Scale through Spark and Parquet EncryptionDatabricks
This presentation discusses encrypting Parquet data at scale with Spark. It covers the goals of Parquet modular encryption, including data privacy, integrity, and performance; demonstrates writing and reading encrypted Parquet files in Spark; and discusses the Apache community roadmap for further integration of Parquet encryption.
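The column-key wiring behind Parquet modular encryption can be sketched as follows. The key IDs, column names, and output path below are hypothetical; the two property names are the standard Parquet encryption options, and a real Spark job would additionally need a crypto factory and a KMS client configured on the session.

```python
def encryption_options(footer_key, column_keys):
    """Build Parquet modular encryption write options.

    column_keys maps a key ID to the list of columns that key protects;
    the resulting string follows the Parquet convention
    "keyID:col1,col2;keyID2:col3".
    """
    return {
        "parquet.encryption.footer.key": footer_key,
        "parquet.encryption.column.keys": ";".join(
            f"{kid}:{','.join(cols)}" for kid, cols in column_keys.items()
        ),
    }


# On a Spark session with an encryption-enabled crypto factory and KMS
# client configured, the options would be applied roughly like this
# (sketch only, not runnable without that setup):
#
#   opts = encryption_options("kf", {"kc": ["ssn", "email"]})
#   df.write.options(**opts).parquet("/data/encrypted")
```

Reads need no extra options in this scheme: the Parquet footer records which key IDs were used, and the configured KMS client fetches them for authorized readers.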
The document provides an introduction to the European Open Science Cloud (EOSC). It defines key concepts like open science, FAIR data, and explains what EOSC is - a federated infrastructure to support open sharing and reuse of research outputs across disciplines. It outlines EOSC's goals like enabling multidisciplinary discovery and connecting previously disconnected research resources and data silos. Examples of current EOSC services and resources available via the EOSC Portal are also briefly described.
ELIXIR is a European infrastructure that brings together life science resources from across Europe. It offers databases, tools, computing capabilities, and training opportunities. ELIXIR nodes provide these services and connect national data infrastructures. ELIXIR communities connect infrastructure experts to drive service developments. ELIXIR is funded through a mixed model including public sources. It works to sustain important biological data resources and make data FAIR through recommended standards and interoperability resources. ELIXIR also aims to develop a sustainable tools ecosystem and provides training through its portal.
This presentation was provided by Tracy Bergstrom of Ithaka S+R, Todd Carpenter of NISO, Filip Jakobsen of Samhæng, Eva Jurczyk of the University of Toronto Libraries, Stacy McKenna of the University of California, Los Angeles (UCLA) Libraries, Jill Morris of PALCI and Boaz Nadav-Manes of Lehigh University, during the "Collaborative Collections Lifecycle Project Fall Update Webinar." The event was held virtually on September 27, 2023.
RDMkit, a Research Data Management Toolkit. Built by the Community for the ..., Carole Goble
https://datascience.nih.gov/news/march-data-sharing-and-reuse-seminar 11 March 2022
Starting in 2023, the US National Institutes of Health (NIH) will require institutions and researchers receiving funding to include a Data Management Plan (DMP) in their grant applications, including making their data publicly available. Similar mandates are already in place in Europe; for example, a DMP is mandatory in Horizon Europe projects involving data.
Policy is one thing - practice is quite another. How do we provide the necessary information, guidance and advice for our bioscientists, researchers, data stewards and project managers? There are numerous repositories and standards. Which is best? What are the challenges at each step of the data lifecycle? How should different types of data be handled? What tools are available? Research Data Management advice is often too general to be useful, and specific information is fragmented and hard to find.
ELIXIR, the pan-national European Research Infrastructure for Life Science data, aims to enable research projects to operate “FAIR data first”. ELIXIR supports researchers across their whole RDM lifecycle, navigating the complexity of a data ecosystem that bridges from local cyberinfrastructures to pan-national archives and across bio-domains.
The ELIXIR RDMkit (https://rdmkit.elixir-europe.org) is a toolkit built by the biosciences community, for the biosciences community, to provide the RDM information they need. It is a framework for advice and best practice in RDM and acts as a hub of RDM information, with links to tool registries, training materials, standards, and databases, and to services that offer deeper knowledge for DMP planning and FAIR-ification practices.
Launched in March 2021, it has over 120 contributors who have provided nearly 100 pages of content and links to more than 300 tools. Content covers the data lifecycle and specialized domains in biology, national considerations, and examples of “tool assemblies” developed to support RDM. It has been accessed from over 123 countries, and the top of the access list is … the United States.
The RDMkit is already a recommended resource of the European Commission. The platform, editorial, and contributor methods helped build a specialized sister toolkit for infectious diseases as part of the recently launched BY-COVID project. The toolkit’s platform is the simplest we could manage, built on plain GitHub, and the whole development and contribution approach is tailored to be as lightweight and sustainable as possible.
In this talk, Carole and Frederik will present the RDMkit: aims and context, content, community management, how folks can contribute, and our future plans and potential prospects for trans-Atlantic cooperation.
Data policy must be partnered with data practice. Our researchers need to be the best informed in order to meet these new data management and data sharing mandates.
Data Science: History repeated? – The heritage of the Free and Open Source GI..., Peter Löwe
This document discusses the history and lessons that can be learned from the development of geographic information systems (GIS) and how they relate to the emerging field of data science. It argues that data science may follow a similar path to GIS, and outlines several lessons: (1) the importance of standardization, (2) the benefits of free and open source software in enabling analysis, education and improvement, and (3) the value of communities organized around open science principles of sharing and reuse. It highlights the Open Source Geospatial Foundation as an example of an "umbrella organization" that has supported collaborative development through established best practices around governance, software quality and merit-based participation.
What is eScience, and where does it go from here? Daniel S. Katz
eScience has evolved from focusing on global scientific collaborations enabled by distributed computing infrastructure to emphasizing joint advances in digital infrastructure and how that infrastructure enables new research. This symbiotic relationship between research and infrastructure development could be called Research and Infrastructure Development Symbiosis (RaIDS). Going forward, RaIDS conferences should focus on improving communication between infrastructure developers and researchers to facilitate new collaborations, ensure research publications appropriately attribute enabling infrastructure advances, and standardize catalogs of available infrastructure and research challenges.
The document discusses a global initiative to facilitate open access to scholarly resources and research data across boundaries by building a federation of registries. It provides use cases of how such a system could help postgraduate students, research project leaders, administrators, and ICT specialists discover and monitor globally accessible data relevant to their work. The proposed strategy is to create a "Register of Registries" that would enable consistent discovery services for finding data in collections through a standardized, interoperable model. An initial scoping meeting was held in 2007 and annual meetings since to develop the strategy.
Presentation investigating the state of FAIR practice and what is needed to turn FAIR data into reality, given at the Danish FAIR conference in Copenhagen on 20th November 2018. https://vidensportal.deic.dk/en/Programme/FAIR_Toolbox_Nov2018 The presentation reflects on recent FAIR studies and international initiatives and outlines the recommendations emerging from the European Commission's FAIR Data Expert Group report - http://tinyurl.com/FAIR-EG
Research in Intelligent Systems and Data Science at the Knowledge Media Insti..., Enrico Motta
The document discusses research directions in intelligent systems and data science. It describes work on making sense of scholarly data through techniques like data mining, semantic technologies, and machine learning. It also discusses mapping and classifying computer science research areas using an automatically generated ontology with over 14,000 topics. Other topics discussed include predicting emerging research areas, applications in smart cities like the MK:Smart project, and potential roles for robots in smart cities like an autonomous health and safety inspector.
Between 2009 and 2012 the Higher Education Funding Council for England (HEFCE) funded a series of programmes to encourage higher education institutions in the UK to release existing educational content as Open Educational Resources (OER) and to embed open practices in the institution. The HEFCE funded UK OER Programmes were run and managed by the JISC and the Higher Education Academy. Over the course of three years about £15M (€17,5M) was invested on projects that investigated the release and collection of OERs by individuals, institutions and subject communities. The Cetis “OER Technology Support Project” provided support for technical innovation across this programme.
In this conference paper we will present our reflections on the technical approaches taken, issues raised and the lessons learnt from the Programmes and the Support Project. The issues covered include resource management, resource description, licensing and attribution, search engine optimisation and discoverability, tracking OERs, and paradata (activity data about learning resources). Technical solutions discussed will include the use of social sharing platforms such as flickr and WordPress for resource dissemination; metadata embedded in HTML documents as RDFa, microdata and using the schema.org ontology; and sharing metadata and paradata using the Learning Registry (a network of schema-free data stores). As well as describing the achievements of the programme, we will also discuss the difficulties encountered and identify areas where further work is required.
Presentation given at Macquarie University in support of the ARDC 'institutional role in the data commons' project on "Implementing FAIR: Standards in Research Data Management" https://ardc.edu.au/news/data-and-services-discovery-activities-successful-applicants/
An introduction to Microsoft Power BI, emphasisng on the usability of Power Query and how it's useful for the excel population. A session delived at Orion India Systems Pvt. Ltd.
This document summarizes key findings from surveys about researchers' data sharing practices and attitudes. It finds that while most researchers agree data should be shared, only a small percentage actually make their data openly available. Researchers typically share data through email, cloud services, or external drives rather than repositories. The document also discusses increasing emphasis on open and FAIR data in research funder policies, but notes researchers face barriers to compliance like unclear terminology, lack of skills and incentives, and confusion between open data and managed/FAIR data. It argues for engagement programs to help researchers better understand and participate in open scholarship.
This document provides an overview and introduction to Tableau. It outlines the basic steps for connecting to different data sources, building initial views, and creating dashboards. The document covers prerequisites, an introduction to the Tableau workspace, demo instructions for connecting to sample data files and modifying data connections, and includes lab exercises for readers to practice the concepts. The goal is to help readers understand the basics of visualizing and exploring data using Tableau.
Apache Atlas provides data governance capabilities for Hadoop including data classification, metadata management, and data lineage/provenance. It models metadata using a flexible type system and stores metadata in a property graph database for relationships and lineage queries. Key features include cross-component lineage mapping, reusable tagging policies for access control, and a business catalog to organize assets by common business terms.
The document discusses the importance of metadata for archiving digital content and history. It describes how Jason Scott transformed from a "metadata skeptic" to a "metadata warrior" after his experiences rescuing data from Geocities. Proper metadata made the rescued data more useful, efficient to archive, and prevented duplication. The document advocates for taking a long-term view of digital content and using metadata to ensure information can be discovered and understood in the future.
Business Case Pratictioners Forum: Business case - an overview, Stefan Sanchez, 22 April 2016
APM Benefits Management SIG
The APM Competence Framework describes the Business Case competence as “The ability to prepare, gain approval of, refine and update business cases that justify the initiation and/or continuation of change initiatives in terms of benefits, costs and risks.”
It further provides information regarding the application and knowledge such as; strategic arguments, options appraisal, benefits and dis-benefits, commercial aspects, risk, time scales and whole-life costs.
Those who work in the public sector will be familiar with the HM Treasury Green Book: appraisal and evaluation in central government and the Five Case model.
Business cases should be understood as both as a product and a process, with involvement from the right stakeholders in order to achieve the spending objectives and deliver benefits.
The inaugural Business Case Practitioners Forum (BCPF) brought together practitioners
-To share business-case related knowledge, experience and good practice drawn from the public, private and third sectors
-To support members to improve standards and consistency business cases
-To create a supportive and professional network of business case practitioners
-To develop a business case practitioner ‘body of knowledge’
FAIR Ddata in trustworthy repositories: the basicsOpenAIRE
This video illustrates how certified digital repositories contribute to making and keeping research data findable, accessible, interoperable and reusable (FAIR). Trustworthy repositories support Open Access to data, as well as Restricted Access when necessary, and they offer support for metadata, sustainable and interoperable file formats, and persistent identifiers for future citation. Presented by Marjan Grootveld (DANS, OpenAIRE).
Main references
• Core Trust Seal for trustworthy digital repositories: https://www.coretrustseal.org/
• EUDAT FAIR checklist: https://doi.org/10.5281/zenodo.1065991
• European Commission’s Guidelines on FAIR data management: http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf
• FAIR data principles: www.force11.org/group/fairgroup/fairprinciples
• Overview of metadata standards and tools: https://rdamsc.dcc.ac.uk/
Data Security at Scale through Spark and Parquet EncryptionDatabricks
Apple logo is a trademark of Apple Inc. This presentation discusses Parquet encryption at scale using Spark and Parquet. It covers goals of Parquet modular encryption including data privacy, integrity, and performance. It demonstrates writing and reading encrypted Parquet files in Spark and discusses the Apache community roadmap for further integration of Parquet encryption.
The document provides an introduction to the European Open Science Cloud (EOSC). It defines key concepts like open science, FAIR data, and explains what EOSC is - a federated infrastructure to support open sharing and reuse of research outputs across disciplines. It outlines EOSC's goals like enabling multidisciplinary discovery and connecting previously disconnected research resources and data silos. Examples of current EOSC services and resources available via the EOSC Portal are also briefly described.
ELIXIR is a European infrastructure that brings together life science resources from across Europe. It offers databases, tools, computing capabilities, and training opportunities. ELIXIR nodes provide these services and connect national data infrastructures. ELIXIR communities connect infrastructure experts to drive service developments. ELIXIR is funded through a mixed model including public sources. It works to sustain important biological data resources and make data FAIR through recommended standards and interoperability resources. ELIXIR also aims to develop a sustainable tools ecosystem and provides training through its portal.
This presentation was provided by Tracy Bergstrom of Ithaka S+R, Todd Carpenter of NISO, Filip Jakobsen of Samhæng, Eva Jurczyk of the University of Toronto Libraries, Stacy McKenna of the University of California, Los Angeles (UCLA) Libraries, Jill Morris of PALCI and Boaz Nadav-Manes of Lehigh University, during the "Collaborative Collections Lifecycle Project Fall Update Webinar." The event was held virtually on September 27, 2023
RDMkit, a Research Data Management Toolkit. Built by the Community for the ...Carole Goble
https://datascience.nih.gov/news/march-data-sharing-and-reuse-seminar 11 March 2022
Starting in 2023, the US National Institutes of Health (NIH) will require institutes and researchers receiving funding to include a Data Management Plan (DMP) in their grant applications, including the making their data publicly available. Similar mandates are already in place in Europe, for example a DMP is mandatory in Horizon Europe projects involving data.
Policy is one thing - practice is quite another. How do we provide the necessary information, guidance and advice for our bioscientists, researchers, data stewards and project managers? There are numerous repositories and standards. Which is best? What are the challenges at each step of the data lifecycle? How should different types of data? What tools are available? Research Data Management advice is often too general to be useful and specific information is fragmented and hard to find.
ELIXIR, the pan-national European Research Infrastructure for Life Science data, aims to enable research projects to operate “FAIR data first”. ELIXIR supports researchers across their whole RDM lifecycle, navigating the complexity of a data ecosystem that bridges from local cyberinfrastructures to pan-national archives and across bio-domains.
The ELIXIR RDMkit (https://rdmkit.elixir-europe.org) is a toolkit built by the biosciences community, for the biosciences community, to provide the RDM information they need. It is a framework for advice and best practice for RDM and acts as a hub of RDM information, with links to tool registries, training materials, standards, and databases, and to services that offer deeper knowledge for DMP planning and FAIR-ification practices.
Since its launch in March 2021, over 120 contributors have provided nearly 100 pages of content and links to more than 300 tools. Content covers the data lifecycle and specialized domains in biology, national considerations, and examples of "tool assemblies" developed to support RDM. It has been accessed from over 123 countries, and at the top of the access list is … the United States.
The RDMkit is already a recommended resource of the European Commission. The platform, editorial, and contributor methods helped build a specialized sister toolkit for infectious diseases as part of the recently launched BY-COVID project. The toolkit’s platform is the simplest we could manage - built on plain GitHub - and the whole development and contribution approach tailored to be as lightweight and sustainable as possible.
In this talk, Carole and Frederik will present the RDMkit: aims and context, content, community management, how folks can contribute, our future plans, and prospects for trans-Atlantic cooperation.
Data policy must be partnered with data practice. Our researchers need to be the best informed in order to meet these new data management and data sharing mandates.
Data Science: History repeated? – The heritage of the Free and Open Source GI...Peter Löwe
This document discusses the history and lessons that can be learned from the development of geographic information systems (GIS) and how they relate to the emerging field of data science. It argues that data science may follow a similar path to GIS, and outlines several lessons: (1) the importance of standardization, (2) the benefits of free and open source software in enabling analysis, education and improvement, and (3) the value of communities organized around open science principles of sharing and reuse. It highlights the Open Source Geospatial Foundation as an example of an "umbrella organization" that has supported collaborative development through established best practices around governance, software quality and merit-based participation.
What is eScience, and where does it go from here?Daniel S. Katz
eScience has evolved from focusing on global scientific collaborations enabled by distributed computing infrastructure to emphasizing joint advances in digital infrastructure and how that infrastructure enables new research. This symbiotic relationship between research and infrastructure development could be called Research and Infrastructure Development Symbiosis (RaIDS). Going forward, RaIDS conferences should focus on improving communication between infrastructure developers and researchers to facilitate new collaborations, ensure research publications appropriately attribute enabling infrastructure advances, and standardize catalogs of available infrastructure and research challenges.
The document discusses a global initiative to facilitate open access to scholarly resources and research data across boundaries by building a federation of registries. It provides use cases of how such a system could help postgraduate students, research project leaders, administrators, and ICT specialists discover and monitor globally accessible data relevant to their work. The proposed strategy is to create a "Register of Registries" that would enable consistent discovery services for finding data in collections through a standardized, interoperable model. An initial scoping meeting was held in 2007 and annual meetings since to develop the strategy.
Presentation investigating the state of FAIR practice and what is needed to turn FAIR data into reality, given at the Danish FAIR conference in Copenhagen on 20th November 2018. https://vidensportal.deic.dk/en/Programme/FAIR_Toolbox_Nov2018 The presentation reflects on recent FAIR studies and international initiatives and outlines the recommendations emerging from the European Commission's FAIR Data Expert Group report - http://tinyurl.com/FAIR-EG
Research in Intelligent Systems and Data Science at the Knowledge Media Insti...Enrico Motta
The document discusses research directions in intelligent systems and data science. It describes work on making sense of scholarly data through techniques like data mining, semantic technologies, and machine learning. It also discusses mapping and classifying computer science research areas using an automatically generated ontology with over 14,000 topics. Other topics discussed include predicting emerging research areas, applications in smart cities like the MK:Smart project, and potential roles for robots in smart cities like an autonomous health and safety inspector.
Between 2009 and 2012 the Higher Education Funding Council for England (HEFCE) funded a series of programmes to encourage higher education institutions in the UK to release existing educational content as Open Educational Resources (OER) and to embed open practices in the institution. The HEFCE funded UK OER Programmes were run and managed by the JISC and the Higher Education Academy. Over the course of three years about £15M (€17,5M) was invested on projects that investigated the release and collection of OERs by individuals, institutions and subject communities. The Cetis “OER Technology Support Project” provided support for technical innovation across this programme.
In this conference paper we will present our reflections on the technical approaches taken, issues raised and the lessons learnt from the Programmes and the Support Project. The issues covered include resource management, resource description, licensing and attribution, search engine optimisation and discoverability, tracking OERs, and paradata (activity data about learning resources). Technical solutions discussed will include the use of social sharing platforms such as flickr and WordPress for resource dissemination; metadata embedded in HTML documents as RDFa, microdata and using the schema.org ontology; and sharing metadata and paradata using the Learning Registry (a network of schema-free data stores). As well as describing the achievements of the programme, we will also discuss the difficulties encountered and identify areas where further work is required.
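To make the embedded-metadata approaches above concrete, here is a minimal sketch of describing an open educational resource with schema.org terms, serialised as JSON-LD (a sibling embedding technique to the RDFa and microdata mentioned in the paper, not necessarily the one the projects used). The resource title, URL and licence are invented for illustration.

```python
import json

# Hypothetical sketch: describing an OER with schema.org vocabulary,
# serialised as JSON-LD. All names and URLs below are invented examples.
def learning_resource_jsonld(name, url, license_url):
    """Return a JSON-LD string describing a learning resource."""
    doc = {
        "@context": "https://schema.org",
        "@type": "LearningResource",
        "name": name,
        "url": url,
        "license": license_url,
    }
    return json.dumps(doc, indent=2)

snippet = learning_resource_jsonld(
    "Intro to Research Data Management",            # invented title
    "https://example.org/oer/rdm-intro",            # invented URL
    "https://creativecommons.org/licenses/by/4.0/",
)
# The result can be embedded in an HTML page inside a
# <script type="application/ld+json"> element for aggregators to index.
print(snippet)
```

The design point is the same one the paper makes about discoverability: machine-readable description travels with the resource page, so search engines and registries can harvest it without a separate metadata feed.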
The European Open Science Cloud: just what is it?Jisc
The European Open Science Cloud (EOSC) aims to provide a virtual environment for Europe's 1.7 million researchers to store, share, and reuse research outputs. It will reduce duplication of efforts and simplify access across borders and disciplines. The EOSC will be guided by FAIR principles to make data findable, accessible, interoperable, and reusable. Its implementation will focus on engaging stakeholders, developing open standards and interoperable services, and addressing skills gaps in data management. The EOSC seeks to build on existing research infrastructures and e-infrastructures through a distributed and community-driven approach.
The Ascent of Open Science and the European Open Science CloudTiziana Ferrari
EOSC-hub receives funding from the European Union’s Horizon 2020 programme to integrate and manage services for the European Open Science Cloud (EOSC). The presentation discusses the need for open science, open data, and interoperable e-infrastructures. It provides examples like the LIGO and VIRGO collaborations sharing data and the WeNMR community using distributed computing resources. The EOSC-hub project aims to provide a single point of access to services across different providers through a marketplace. It has onboarded many services, engaged with users and service providers, and seen increasing usage of thematic, federation, and common services on the platform. The EOSC has the potential to boost support for open science.
This online European Open Science Cloud (EOSC) event was held on 15 December 2021.
You’ll get information about:
- Developments in the EOSC Association
- The work of the new EOSC Advisory Groups and Task Forces
- What’s happening in some of the EOSC implementation projects
- Ways you can become involved in EOSC
Managing training materials beyond individual projectsEOSC-hub project
This document discusses managing training materials from EU-funded projects and organizations beyond individual projects. It notes that training materials are currently managed on project websites or solutions like MOOCs. Researchers want better discipline-specific and cross-discipline support, openly available reusable materials, and cataloguing with rich metadata. Projects want uptake of solutions, branding, and measurable impact. Issues include organizing materials across projects to avoid duplication, improving visibility, and long-term sustainability after projects end. Potential solutions discussed include the Elixir Training eSupport System and linking with communities of practice.
The European Open Science Cloud: just what is it?Carole Goble
Presented at Jisc and CNI leaders conference 2018, 2 July 2018, Oxford, UK (https://www.jisc.ac.uk/events/jisc-and-cni-leaders-conference-02-jul-2018). The European Open Science Cloud. What exactly is it? In principle it is conceived as a virtual environment with open and seamless services for storage, management, analysis and re-use of research data, across borders and scientific disciplines. How? By federating existing scientific data infrastructures, currently dispersed across disciplines and Member States. In practice, what it is depends on the stakeholder. To European Research Infrastructures it’s a coordinated mission to organise and exchange their data, metadata, software and services to be FAIR – Findable, Accessible, Interoperable, Reusable – and to use e-Infrastructures, either EU or commercial. To EU e-Infrastructures offering data storage and cloud services, it’s a funding mission to integrate their services, policies and organisational structures, and to be used by the Research Infrastructures. To agencies it’s a means to promote Open Science, standardisation, cross-disciplinary research and coordinated investment with a dream of a “one stop shop” for researchers. And for Libraries?
Presentation at EMTACL10, http://www.ntnu.no/ub/emtacl/
Guus van den Brekel
Central medical library, UMCG
Virtual Research Networks: towards Research 2.0
In the next few years, the further development of social, educational and research networks, with their extensive collaborative possibilities, will dictate how users search for, manage and exchange information. The network, evolved by technology, is changing user behaviour, and that will affect the future of information services. Many envision a possible leading role for libraries in collaboration and community-building services.
Users are not only heavily using new tools, but are also creating and shaping their own preferred tools.
Today's students are incorporating Web 2.0 skills in daily life, in their social and learning environments.
Tomorrow's research staff will expect to be able to use their preferred tools and resources within their work environment.
Both today's and tomorrow's libraries should support students and staff in the learning and research process by integrating library services and resources into their environments.
Institutional repositories capture, preserve, and provide access to the intellectual output of an institution. They consist of formally organized and managed collections of digital content generated by faculty, staff, and students. Institutional repositories allow for the dissemination of knowledge outside the institution, complement traditional forms of publication, and make works visible to colleagues and potential employers or funders. They contribute to an institution's prestige by managing and preserving relevant information that would otherwise remain scattered or inaccessible.
JLeRN Experiment Slides for CETIS Conference 2012 Session on The Learning Reg...Sarah Currier
1. The JLeRN project has set up two experimental common nodes in the UK to share learning resource data from higher education and cultural sectors with the global Learning Registry.
2. Common nodes allow data to be published, accessed, distributed, processed, and the node's status to be queried. The JLeRN nodes support basic publishing and have developed OAI-PMH feed publishing.
3. Nodes can connect to form networks that share common policies. Networks can connect via gateway nodes to form communities that bridge different networks and policies. The JLeRN nodes currently operate as a single network without signatures required.
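The OAI-PMH feed publishing mentioned in point 2 follows a simple HTTP request pattern. A minimal sketch of building such a request, assuming a hypothetical endpoint (the actual JLeRN node address is not reproduced here):

```python
from urllib.parse import urlencode

# Minimal sketch of an OAI-PMH harvesting request URL. The base endpoint
# used below is hypothetical, for illustration only.
def oai_listrecords_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build a ListRecords request URL as defined by the OAI-PMH protocol."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec is not None:
        params["set"] = set_spec  # optional selective-harvesting set
    return f"{base_url}?{urlencode(params)}"

url = oai_listrecords_url("https://example.org/oai")
print(url)  # https://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
```

Fetching that URL returns an XML envelope of records, typically in Dublin Core (`oai_dc`), which a consuming node can parse and republish.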
Keynote presentation given at the Data Fellows 2023 workshop in Berlin on 22-23 June. Presentation gives examples of good communication to explain data management concepts and how to use games and other forms of interactivity in training events
Managing and sharing data: lessons from the European contextSarah Jones
The document discusses a presentation given by Sarah Jones on managing and sharing data openly in the European context. The presentation covered topics such as research data management (RDM), FAIR data principles, open science, the European Open Science Cloud (EOSC), and how universities can support researchers in practicing open science. It provided overviews and definitions of these topics, discussed challenges to open data sharing, and offered practical advice on making data FAIR and open through activities like choosing a license, selecting a repository, and using appropriate file formats and metadata standards.
The EOSC Association conducted a survey to gather feedback on their Multi-Annual Roadmap (MAR) and received 45 completed responses with 191 partial responses. The main themes from the 534 comments included needing more clarity on terminology, emphasizing national investment roles, and greater focus on business models and funding research software engineers. Minor comments requested removing organization examples, clarifying the voluntary nature of EOSC, and reconsidering visual identity. The analysis will be shared with the board and task forces to inform revisions to the MAR text for republication in mid-May.
This document discusses the challenges facing the European Open Science Cloud (EOSC) and identifies actions that could help address those challenges. Some of the top challenges mentioned are that EOSC is still in the build phase and not yet functioning seamlessly for end users, that it is extremely complex due to its multi-stakeholder, multi-country, and multi-disciplinary nature, and that its governance was only recently established while its formation occurred organically through projects. Key priority actions identified include extensive testing and iteration based on user feedback, releasing small functionalities incrementally, continuing collaborative and consensus-driven work, and establishing an effective stakeholder forum. The document advocates for putting research community needs at the center.
Data Management Planning for researchersSarah Jones
This document provides information about creating a data management plan (DMP) for researchers. It begins by defining what a DMP is - a short plan that outlines what data will be created, how it will be managed and stored, and plans for sharing and preservation. It then discusses the common components of a DMP, including describing the data, standards and methodologies, ethics and intellectual property, data sharing plans, and preservation strategies. The document provides examples of DMP requirements and recommendations from funders. It offers tips for creating a good DMP, including thinking about the needs of future data re-users, consulting stakeholders, grounding plans in reality, and planning for sharing from the outset. Finally, it discusses tools and resources to support data management planning.
1) Europe has invested hugely in the European Open Science Cloud (EOSC) over recent years through various initiatives, reports, and projects.
2) EOSC aims to create a federated environment for open sharing and analysis of research data across borders and disciplines.
3) Sharing sensitive data on EOSC requires properly documenting, licensing, identifying, and anonymizing data while making it findable and accessible on repositories or secure services.
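The anonymisation step in point 3 can be illustrated with a toy sketch: pseudonymising a direct identifier with a salted hash, so records stay linkable across files without exposing the raw value. This is illustrative only; real anonymisation of sensitive data requires a proper disclosure-risk assessment, and the salt and field names below are invented.

```python
import hashlib

# Toy sketch only: pseudonymising a direct identifier before deposit.
# Real anonymisation requires a disclosure-risk assessment; the salt and
# field names here are invented for illustration.
def pseudonymise(records, id_field, salt="keep-this-secret"):
    """Replace a direct identifier with a salted hash so records remain
    linkable across files without exposing the original value."""
    out = []
    for rec in records:
        rec = dict(rec)  # copy, so the caller's data is not mutated
        digest = hashlib.sha256((salt + rec[id_field]).encode()).hexdigest()
        rec[id_field] = digest[:12]  # truncated hex pseudonym
        out.append(rec)
    return out

data = [{"participant_id": "NHS-12345", "age_band": "40-49"}]
safe = pseudonymise(data, "participant_id")
print(safe[0]["participant_id"])  # 12-hex-character pseudonym
```

Note that pseudonymisation alone does not make data anonymous: indirect identifiers (age bands, locations, dates) can still re-identify people and must be assessed separately.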
Presentation given at the DMPonline 10 year anniversary week, reflecting on lessons learned developing the business model. See https://www.dcc.ac.uk/events/dmponline-10th-year-anniversary-celebration-week and #10yearsDMPonline
This document discusses best practices for supporting open science. It recommends adopting existing solutions where possible rather than developing new ones. It also suggests engaging with researchers, incentivizing open practices, allowing for innovation and failure, collaborating with peers, and keeping service delivery options open. The document concludes by inviting attendees to a workshop on delivering research data management services.
This document provides an overview of new features and updates to the DMPTuuli data management planning tool. Key points include: improvements to the user interface and sharing options; integration with ORCID and adding grant IDs; enhanced admin controls and template versioning; offering feedback on plans; and a usage dashboard and API improvements. Future planned features are also outlined such as conditional questions, custom domains, and integrations. Support resources and ways to connect with the developer are highlighted.
The FAIR Working Group provides recommendations on implementing FAIR data principles to foster cross-disciplinary interoperability. Their goals are to:
1. Develop data standards and sharing agreements
2. Upscale best practices for FAIR data and services
3. Create an EOSC Interoperability Framework identifying service requirements
They have deliverables due in 2019-2020 including metrics for assessing FAIR data and certifying services, a Persistent Identifier policy, and the EOSC Interoperability Framework. They are seeking input from stakeholders on relevant activities, what the framework should comprise, and how to engage communities for feedback.
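As a toy illustration of what a FAIR assessment metric might check, a simple checklist score could look like the sketch below. This is not the Working Group's actual metrics; the check names are invented to show the shape of such an assessment.

```python
# Toy illustration only: a simplistic checklist score, not the FAIR Working
# Group's actual metrics. All check names are invented.
FAIR_CHECKS = {
    "findable": ["has_pid", "has_rich_metadata"],
    "accessible": ["retrievable_by_pid"],
    "interoperable": ["uses_community_standard"],
    "reusable": ["has_licence", "has_provenance"],
}

def fair_score(dataset_description):
    """Return the fraction of checklist items a dataset description passes."""
    checks = [c for group in FAIR_CHECKS.values() for c in group]
    passed = sum(1 for c in checks if dataset_description.get(c))
    return passed / len(checks)

example = {"has_pid": True, "retrievable_by_pid": True, "has_licence": True}
print(round(fair_score(example), 2))  # 0.5 (3 of 6 checks pass)
```

Real metrics of this kind are typically evaluated automatically against a dataset's landing-page metadata, which is why persistent identifiers and machine-readable licences matter so much.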
How to Add Chatter in the odoo 17 ERP ModuleCeline George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
Bangladesh Economic Review 2024 [Bangladesh Economic Review 2024 Bangla.pdf]: a complete Bangla e-book (PDF) with computer, tablet and smartphone versions, including a table of contents, bookmark menu and hyperlink menu.
A very important book for all of us. It is a key topic for BCS, bank, university admission and other competitive examinations, and it also contains all of Bangladesh's recent data and statistics.
As a citizen, you need to know this information.
Useful for BCS and bank written examinations, and also very helpful for secondary and higher secondary students.
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
Macroeconomics - Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
This presentation covers the basics of PCOS, its pathology and treatment, along with the Ayurvedic correlation of PCOS and the Ayurvedic line of treatment described in the classics.
Main Java [All of the Base Concepts].docxadhitya5119
This is part 1 of my Java Learning Journey. It contains custom methods, classes, constructors, packages, multithreading, try-catch blocks, finally blocks and more.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
It describes the bony anatomy, including the femoral head, acetabulum and labrum, and also discusses the capsule and ligaments. The muscles that act on the hip joint and its range of motion are outlined. Factors affecting hip joint stability and weight transmission through the joint are summarized.
Executive Directors Chat Leveraging AI for Diversity, Equity, and InclusionTechSoup
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
A review of the growth of the Israel Genealogy Research Association Database Collection over the last 12 months. Our collection has now passed the 3 million mark and is still growing. See which archives have contributed the most, the different types of records we have, and which years have had records added. You can also see what we have planned for the future.
1. www.geant.org
EOSC, Open Science and the role of libraries
Sarah Jones
EOSC Engagement Manager
sarah.jones@geant.org
Twitter: @sarahroams
Libraries as enablers of scientific research, Tblisi
27th April 2023
2. www.geant.org
Today’s Topics
• Introduction to European Open Science Cloud
• EOSC as an initiative
• EOSC Association and governance
• Role of libraries in supporting Open Science
• Ideas for libraries and EOSC
Photo by Mike Swigunski
4. www.geant.org
Large EC initiative to build an OS solution
• Three-party collaboration
• Partnership MOU between EC and EOSC Association (legal entity) to govern/oversee the implementation
• Representation of Member States in Steering Board
• Huge EC investment in infrastructure: €350 million in initial development phase and at least €1 billion co-investment foreseen for the next 7 years
[Diagram: the three parties: European Commission, EOSC Association, Steering Board]
5. www.geant.org
The EOSC platform
• Building a “web of FAIR data and services”
• Federating data and resources from eInfrastructures and Research Infrastructures
• Environment in which data can be brought together with services to perform analyses and address societal challenges
https://eosc-portal.eu
7. www.geant.org
FAIR is central to principles in EOSC
• It is the glue that connects data & services
• FAIR is a requirement to support reuse
• Use community standards
• Share all types of output (openly)
11. www.geant.org
What is the EOSC Association?
• The legal entity which signed the MOU with the EC in the Partnership Agreement
• A membership organisation to represent the voice of the community
• 168 members and 81 observers: funders, universities, service providers, publishers…
The EOSC Association is part of the governance, not the implementation, of EOSC
https://eosc.eu
No Georgian members currently…
12. www.geant.org
Categories of EOSC Association members
• Research funders, e.g. DFG, FCT, Science Europe
• Research performing organisations, e.g. universities, institutes, councils of rectors…
• Research service providers, e.g. NRENs, research infrastructures, data centres, publishers…
• Others, e.g. representative bodies or commercial members (LIBER, EuroCRIS, STM)
https://eosc.eu/general-assembly
13. www.geant.org
Mandated organisations
• Every Member State and Associated Country can mandate one organisation to represent national interests in the General Assembly
• The Association currently has 26 mandated organisations
• These members have more influence in strategic decisions, which require a 2/3 majority vote
• It is suggested that mandated organisations play a bi-directional role:
  • Coordinate discussion and views from the country
  • Inform members on EOSC and help the Association to grow its membership
• Is a legal entity needed in Georgia to represent the community?
15. www.geant.org
EOSC-A is the voice of the community…
Next General Assembly #6, 22-23 May 2023
16. www.geant.org
The Purpose of EOSC-A
Incorporation, 29 July 2020
(1) to provide a single voice for the advocacy and representation of the broader EOSC stakeholder community;
(2) to promote the alignment of European Union research policy and priorities with activities coordinated by the Association;
(3) to enable seamless access to data through interoperable services that address the entire research data life cycle.
18. www.geant.org
EOSC-A Brain-Pool: 13 Task Forces
Implementation of EOSC
• Rules of Participation compliance monitoring
• PID policy and implementation
• Researcher engagement and adoption
Technical challenges on EOSC
• Technical interoperability of data and services
• Infrastructure for quality research software
• AAI Architecture
Metadata and data quality
• Semantic interoperability
• FAIR metrics and data quality
Research careers and curricula
• Data stewardship curricula and career paths
• Research careers, recognition and credit
• Upskilling countries to engage in EOSC
Sustaining EOSC
• Financial sustainability
• Long-term data preservation
Over 400 volunteers
23. www.geant.org
Typical activities libraries are involved in
• defining the institutional strategy
• developing RDM policy
• delivering training courses
• helping researchers to write DMPs
• advising on data sharing and citation
• coordinating data stewards
• setting up data repositories
• ...
24. www.geant.org
Embrace new, emerging job profiles

DATA SCIENTIST
Curriculum covering:
• Open Science & RDM
• Ethical use of data
• Data analysis
• Data visualisation
• Machine learning
• Computational infrastructure
http://www.codata.org/working-groups/research-data-science-summer-schools

DATA STEWARD
Strong programme in the Netherlands. Data stewards have a research background and provide disciplinary support for research data management and sharing.
https://www.rd-alliance.org/groups/professionalising-data-stewardship-ig

RESEARCH SOFTWARE ENGINEER (RSE)
Term coined to promote career development and recognition for those who provide software development expertise to research groups. RSEs have coding skills but also an understanding of the research area.
https://rse.ac.uk
25. www.geant.org
DIY training kit for librarians
• Resources for self-learning in groups
• Raise awareness of RDM
• Try out typical support tasks in exercises
https://zenodo.org/record/6532050#.Y_5PbOzMK3I
28. www.geant.org
What do / could libraries do in EOSC?
• Register their services (e.g. repositories) via the providers hub
• Expose their data to the EOSC cross-search
• Feed information on data and services into regional catalogues, e.g. the NI4OS catalogue
• Advocate for uptake and usage of EOSC by signposting
• Train researchers / help researchers to use EOSC
• Attend events to upskill themselves on EOSC
• …
Check out the training and infographic for libraries from EOSC Future, and the panel session at LIBER in Budapest in July.
29. www.geant.org
A vision for delivering the Open Science Commons (in general)
• EOSC is working at a huge scale: European and international
• Domain infrastructures (e.g. CESSDA, ELIXIR), national infrastructures and institutional RDM are just different levels of the “Commons”
• Every level needs to interact via open APIs as far as possible
• What we all need is a basic framework for Open Science that is pluggable and allows any type of services and data to be compiled depending on the context and user need
See the RDA Global Open Research Commons Interest Group
30. www.geant.org
At the heart of building the data commons are interoperability and standards. Let’s drill into that layer to think about how to implement it technically…
31. www.geant.org
A plug and play OS framework to build any Commons
• We all need a core framework to connect all the elements which need to work together
• This basic open science framework should allow all the systems, tools, workflows, physical elements like storage etc. to be brought together in a plug and play way
• It doesn’t matter if those systems/tools/storage etc. are public/free or commercial. The framework has to work with everything the user needs
• There should be two-way connections between each element and the framework, and between all the elements, so researchers can compile their workflows, i.e. pull in whatever data, analysis tools, storage, compute etc. they need
This is the vision of EOSC
[Diagram: a framework (architecture, platform, basic infrastructure) connecting Tools, Workflows, Storage, Systems, XYZ…]
32. www.geant.org
We are all building commons!
All of these Commons need to interoperate, via an open API layer, a series of protocols, standards, schemas, crosswalks…
34. www.geant.org
Lessons for libraries and for Georgia in general
Focus on making things work in the context you have control over. If it works well locally, then it will plug into EOSC in time, when everything connects.
Focus on providing meaningful support to researchers.
Please note this is my personal opinion, not EOSC Association or GÉANT endorsed… yet!