Metadata En Croûte: How to make metadata more appetizing to decision makers
Fiona Counsell, Taylor & Francis

How do we make what some might think of as boring metadata more appealing? Metadata has a PR problem, and it’s time to wrap it in pastry and bake it for 40-45 minutes until golden brown. How can we motivate organizations and businesses in scholarly communications to improve their metadata? How do we support individuals in making the case for metadata solutions to decision makers in their organizations? How might we elevate the importance of metadata so that publishers, service providers, and libraries are motivated to make the sometimes costly infrastructure changes needed to enhance the completeness, connectedness, openness, and reusability of metadata? ‘Incentives for Improving Metadata’ is one of Metadata 2020’s six projects, and has been described as the ‘vision’ project of the collaboration. Project participants are working to create resources that help organizations across scholarly communications understand the importance of metadata, including helping them identify tangible and appealing operational benefits of infrastructure changes. In this session Fiona will present the resources created to date and engage attendees in considering what additional resources may be helpful in their respective communities.
Laura Wilkinson, Crossref
An interactive session to view and discuss how different Crossref members are doing with metadata completeness. Who fares best in terms of including abstracts, or text-mining links, or ORCID iDs? Crossref membership has extended to libraries and funders and scholars themselves, so we won’t just be looking at the “usual suspects”. We’ll also be asking for feedback and ideas for what checks to put in place for the next phase of Crossref participation reports. Drawing on findings from the Metadata 2020 initiative, we will also offer some insights into the barriers publishers and vendors face when collating and registering richer metadata, and advice for how to overcome them.
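As a rough sketch of the kind of completeness check a participation report performs, the public Crossref REST API exposes works filters such as `has-abstract`, `has-orcid`, and `has-full-text`; dividing a filtered count by a member's total registered output gives a coverage figure. The member ID below is an arbitrary example, and this is a simplification rather than how the real reports are generated:

```python
import json
from urllib.request import urlopen

API = "https://api.crossref.org"

def coverage_url(member_id, filter_name):
    """Build a works query that returns only a match count (rows=0)."""
    return f"{API}/members/{member_id}/works?filter={filter_name}:true&rows=0"

def total_results(url):
    """Fetch a Crossref works query and return message.total-results."""
    with urlopen(url) as resp:
        return json.load(resp)["message"]["total-results"]

def coverage(member_id, filter_name):
    """Fraction of a member's registered works matching the filter."""
    total = total_results(f"{API}/members/{member_id}/works?rows=0")
    if total == 0:
        return 0.0
    return total_results(coverage_url(member_id, filter_name)) / total

if __name__ == "__main__":
    # Member 311 is an arbitrary example ID.
    for f in ("has-abstract", "has-orcid", "has-full-text"):
        print(f, f"{coverage(311, f):.1%}")
```

The same pattern extends to any other filter worth tracking, such as `has-license` or `has-references`.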
Metadata 2020 is a collaboration that aims to advance open metadata for research. It involves over 100 individuals from publisher, library, repository, and funder communities. The groups are defining challenges around inconsistent metadata schemas and formats, out-of-date metadata, and high costs of metadata management. Opportunities for collaboration include developing shared vocabularies and guidelines, educating researchers on the importance of metadata, and creating tools to improve metadata quality and interoperability. The goal is for communities to work together on metadata issues to advance scholarly communication.
This document outlines Yale University's knowledge management strategy and roadmap. It discusses establishing subject matter experts and a community of practice to develop knowledge management. The strategy aims to integrate knowledge into business processes like incident, change, and problem management. Metrics will track the number of incidents linked to knowledge base articles and time between submissions and article creations. The roadmap focuses on people, process, and technology improvements over three years like automating workflows, creating an ITS knowledge portal, and further integrating the website and knowledgebase.
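The two metrics named here reduce to simple computations over incident and article records. A minimal sketch, with invented field names and dates rather than Yale's actual schema:

```python
from datetime import date

incidents = [
    {"id": "INC1", "kb_article": "KB100"},
    {"id": "INC2", "kb_article": None},
    {"id": "INC3", "kb_article": "KB101"},
]

articles = [
    {"id": "KB100", "submitted": date(2024, 1, 2), "published": date(2024, 1, 9)},
    {"id": "KB101", "submitted": date(2024, 2, 1), "published": date(2024, 2, 4)},
]

def kb_link_rate(incidents):
    """Share of incidents resolved with a linked knowledge base article."""
    return sum(1 for i in incidents if i["kb_article"]) / len(incidents)

def mean_days_to_publish(articles):
    """Average days between article submission and publication."""
    days = [(a["published"] - a["submitted"]).days for a in articles]
    return sum(days) / len(days)

print(kb_link_rate(incidents))         # 2 of 3 incidents linked
print(mean_days_to_publish(articles))  # (7 + 3) / 2 = 5.0 days
```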
This document discusses the role of assessment librarians and the importance of assessment in libraries. It defines assessment as evaluating the importance, size, or value of operations in order to improve customer service. An assessment librarian understands libraries, advocates for customers, is passionate about quality service and assessment, and analyzes and interprets data to advise staff on projects and coordinate assessment efforts. Effective assessment requires library leadership, a customer-centered approach, and turning results into actionable changes.
The document discusses a project to review and publicize a database of research publications by academics from a business school. It notes inconsistencies and inaccuracies found in the existing FT45 database. The project aims to publish a webpage highlighting the database and potential services while getting feedback. It also aims to develop the skills of a graduate trainee and strengthen the position of a circulation officer in the school's research network. Risks include the database requiring more time and entries to complete than expected.
This document discusses how to use Raiser's Edge and action tracks to manage prospects through various stages from identification to cultivation, solicitation, and stewardship. Key tools include queries and reports to identify prospects, documenting research findings, tracking prospect ratings and statuses, creating action tracks to engage prospects according to strategic plans, and monitoring progress through action reports and dashboards. Action tracks make the process more efficient by stringing individual actions together, setting dependencies between actions, and automatically triggering follow-ups.
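Raiser's Edge is a proprietary product, so the sketch below is a generic toy model of the mechanism described (actions chained together, dependencies between them, follow-ups surfaced on completion), not Blackbaud's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    depends_on: Optional[str] = None  # action that must be completed first
    done: bool = False

class ActionTrack:
    """Toy model of an action track: a chain of dependent actions."""

    def __init__(self, actions):
        self.actions = {a.name: a for a in actions}

    def complete(self, name):
        """Mark an action done; return the follow-ups it unblocks."""
        act = self.actions[name]
        if act.depends_on is not None and not self.actions[act.depends_on].done:
            raise ValueError(f"{name} blocked: {act.depends_on} not yet complete")
        act.done = True
        return [a.name for a in self.actions.values()
                if a.depends_on == name and not a.done]

track = ActionTrack([
    Action("research profile"),
    Action("intro call", depends_on="research profile"),
    Action("solicitation visit", depends_on="intro call"),
])
print(track.complete("research profile"))  # unblocks the intro call
```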
The document discusses the importance of assessment in libraries. It defines assessment as evaluating the importance, size, or value of operations in order to make data-driven decisions and improve customer service. A culture of assessment relies on analyzing facts and research to deliver optimal services. Reasons to assess include learning user needs, investigating new services, allocating resources, and accountability. Effective assessment requires leadership, customer-focused staff, and collecting meaningful data. The document also provides examples of assessment tools and positions such as the assessment librarian role.
Eva Mendez presents the latest developments for the Metadata 2020 collaboration at APE 2018. Updates include a summary of community group challenges and opportunities, and projects that will be launched in 2018.
Metadata 2020: Harnessing PID-power for the greater good.
This presentation outlines the key metadata challenges for each scholarly communications community as identified by Metadata 2020; and introduces new areas of focus for 2018.
This document discusses community approaches to open data at scale. It describes the Metadata 2020 collaboration, which aims to promote richer, connected, reusable open metadata. It outlines several projects undertaken by Metadata 2020 working groups to address challenges around metadata quality, standards, and incentives. The document also summarizes two talks on improving metadata pipelines for SHARE and the evolution of metadata curation at Dryad data repository. It discusses Dryad's integration of manuscript and data submission as well as efforts to enhance interoperability, data citation, and the proposed Data Curation Network model.
If You Tag it, Will They Come? Metadata Quality and Repository Management
Sarah Currier
Presentation to Metadata Perspectives 2009, a conference held in Vienna, Austria in November 2009.
When we build collections of scholarly works, learning materials, or other educational "stuff", we want people to be able to find it. This raises a number of problems, including ensuring that resources are tagged with adequate metadata. In 2004 a pioneering paper on this issue noted:
"At its best, “accurate, consistent, sufficient, and thus reliable” (Greenberg & Robertson, 2002) metadata is a powerful tool that enables the user to discover and retrieve relevant materials quickly and easily and to assess whether they may be suitable for reuse. At worst, poor quality metadata can mean that a resource is essentially invisible within the repository and remains unused." (Currier et al, 2004).
Have the five years since the above-quoted paper was published borne out its prediction: that simply expecting resource authors to create their own metadata at upload would lead to metadata of insufficient quality? Have repository managers been able to persuade funders that including professional metadata augmentation is worth the money? What has been the impact of recent Web developments allowing easier exposure, searching and sharing of resources? How is metadata being treated within the emerging domain of open educational resources? And what does all this mean for repository managers wanting to increase the discoverability of their resources, and to implement workflows for creation of good quality metadata?
Currier, S. et al. (2004) Quality assurance for digital learning object repositories: issues for the metadata creation process, ALT-J, Research in Learning Technology, Vol. 12, No. 1, March 2004.
http://repository.alt.ac.uk/616/1/ALT_J_Vol12_No1_2004_Quality%20assurance%20for%20digital%20.pdf
Greenberg, J. & Robertson, W. (2002) Semantic web construction: an inquiry of authors’ views on collaborative metadata generation, Proceedings of the International Conference on Dublin Core and Metadata for e-Communities 2002, 45–52.
http://dcpapers.dublincore.org/ojs/pubs/article/viewArticle/693
The document discusses optimizing content findability. It emphasizes the importance of governance, organization, user involvement, and metadata to improve search and findability. Successful organizations allocate resources to analyze search usage and improve information architecture through taxonomy and metadata. User testing, feedback loops, and search analytics are also recommended to enhance findability.
This document summarizes a presentation on FAIRsharing, a registry of interlinked standards, repositories, and policies that aims to increase guidance on finding and using research data resources. It discusses how FAIRsharing tracks the evolution of recommended resources, finds discrepancies between explicit and implicit recommendations, and works with projects to develop FAIR evaluation tools and guidelines. The goal is to help researchers, publishers, and others discover, select, and implement FAIR data standards and policies to accelerate discovery.
This presentation was provided by Chris Erdmann of Library Carpentries and by Judy Ruttenberg of ARL during the NISO virtual conference, Open Data Projects, held on Wednesday, June 13, 2018.
Metadata mapping and vocabulary: consistency for all in scholarly communicati...
CILIP MDG
This document summarizes several projects and initiatives from Metadata2020, a collaboration that aims to improve metadata practices in scholarly communication. It outlines challenges and opportunities around metadata mapping, defining common terminology, and establishing best practices. Specific projects discussed include developing recommendations for shared metadata elements and mappings across schemas, creating a glossary of metadata terms, and defining principles for using metadata across the research workflow to facilitate interoperability. The document encourages participation and promotion of Metadata2020's efforts to improve metadata consistency.
The document summarizes Susanna-Assunta Sansone's presentation on enabling FAIR (Findable, Accessible, Interoperable, Reusable) digital resources. It discusses the driving forces behind FAIR including reproducibility crises, new data types, and changing publishing. It then outlines community efforts to develop standards, policies, and tools to improve metadata and data sharing according to FAIR principles. These include domain-specific standards, the FAIRsharing registry, metrics to assess FAIRness, and ongoing work to provide FAIR guidance and services.
Knowledge Management in Healthcare Analytics
Gregory Nelson
The promise of actionable analytics in healthcare poses an inherent challenge as we seek to accelerate the time it takes to go from question to insight to action. The velocity of change, the demand for bigger data, the allure of advanced algorithms, the need for deeper insights, and the cost of inaction make knowledge capture and reuse an all too elusive goal.
In an evolving environment, healthcare organizations need to find ways to make greater use of prior investments in analytics products by reusing the commonalities of proven designs, metadata, business rules, captured learnings, and collaborative insights and applying them to future analytics products. By doing so in a strategic manner, they will be able to create rapid and efficient analytics processes and better manage time to value and reuse.
In this presentation, authors from two very different health systems with two very different patient populations will share their perspectives of the value of knowledge management and discuss the role of analytics in driving towards a learning health system. The authors will highlight opportunities and challenges using examples across clinical, financial, and operational domains.
How to Optimize Your Metadata and Taxonomy
IXIASOFT
1. The document discusses how to optimize metadata and taxonomy by creating a content strategy plan, determining key metrics, applying metadata to content, and communicating results to stakeholders.
2. It outlines the key steps: create a content strategy plan, determine metrics to measure goals, apply metadata to content using a metadata schema, and communicate results using reports and queries.
3. Applying metadata according to the strategy helps users find content and measures strategy success, while communicating results builds trust and credibility with stakeholders.
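The apply-and-measure steps above can be sketched as a completeness check of content against a metadata schema; the field names here are invented for illustration and are not an IXIASOFT schema:

```python
# Required metadata fields for every topic (illustrative schema).
SCHEMA = {"title", "audience", "product", "lifecycle-stage"}

topics = [
    {"title": "Install guide", "audience": "admin", "product": "X"},
    {"title": "API overview", "audience": "developer",
     "product": "X", "lifecycle-stage": "published"},
]

def missing_fields(topic, schema=SCHEMA):
    """Schema fields the topic has not yet been tagged with."""
    return sorted(schema - topic.keys())

def coverage_report(topics, schema=SCHEMA):
    """Per-topic gaps plus an overall completeness score to report."""
    gaps = {t["title"]: missing_fields(t, schema) for t in topics}
    tagged = sum(len(schema) - len(g) for g in gaps.values())
    return gaps, tagged / (len(schema) * len(topics))

gaps, score = coverage_report(topics)
print(gaps)   # {'Install guide': ['lifecycle-stage'], 'API overview': []}
print(score)  # 7 of 8 required tags present -> 0.875
```

A per-topic gap list drives the tagging work, while the single score is the kind of trend metric that can be reported back to stakeholders.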
This presentation was provided by Kristi Holmes of Northwestern University during the NISO hot topic virtual conference "Effective Data Management," which was held on September 29, 2021.
INSERM Workshop 246 - Management and reuse of health data: methodological issues: https://ateliersinserm.dakini.fr/en/workshop.246.management.and.reuse.of.health.data.methodological.issues-66-22.php
Presented at http://mcbios-maqc.org. The FAIR Principles have propelled a global debate, across all disciplines, about better RDM and transparent, reproducible data. FAIR has de facto become a global norm for good RDM, and a prerequisite for data science, since its endorsement by global and intergovernmental leaders. Funding bodies are consolidating FAIR into their funding agreements; publishers have united behind FAIR as a way to remain at the forefront of open research; and in the private sector FAIR is adopted and enshrined in policy in major biopharmas, libraries, and unions. FAIR is changing the culture of data science, but work is needed to turn the principles into reality. I will use the work of the FAIRplus project as an exemplar to illustrate challenges and progress.
Applying a User-Centered Design Approach to Improve Data Use in Decision Making
MEASURE Evaluation
This document summarizes the application of a user-centered design approach to improve data use in decision making. Key activities included conducting immersion interviews with data users, holding design workshops to understand barriers and generate ideas, and prototyping solutions. Some prototypes developed included a digital portal for accessing data and policies, a social media platform for communication, and data use scorecards for facilities. The process identified technical, behavioral, and organizational barriers to data use and provided lessons on engaging stakeholders and testing prototypes.
Conformed Dimensions of Data Quality – An Organized Approach to Data Quality ...
DATAVERSITY
Are you looking to measure Data Quality in a more organized way? Look no further, use the Conformed Dimensions of Data Quality to organize your efforts, improve communication with stakeholders and track improvement over time. In this webinar, Information Quality practitioner Dan Myers will present the Conformed Dimensions of Data Quality framework along with the complete results of the 3rd Annual Dimensions of Data Quality survey. This presentation will provide the first view of the 2017 results, and all attendees will receive the associated whitepaper free.
In this webinar you will learn:
Why organizations use the Dimensions of Data Quality
Why there are so many options, and what he recommends you use
3rd Annual Survey data about how frequently organizations use the dimensions and specifically which dimensions are most used
Industry trends in adoption and more resources on the topic
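As a sketch of how dimensions like these are typically operationalized in code (the three measures below are common examples, not the Conformed Dimensions framework's own definitions):

```python
import re

records = [
    {"id": 1, "email": "a@example.org"},
    {"id": 2, "email": ""},
    {"id": 2, "email": "not-an-email"},
]

EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def completeness(records, field):
    """Share of records with a non-empty value for the field."""
    return sum(bool(r.get(field)) for r in records) / len(records)

def validity(records, field, pattern):
    """Share of non-empty values matching the expected format."""
    vals = [r[field] for r in records if r.get(field)]
    return sum(bool(pattern.match(v)) for v in vals) / len(vals) if vals else 1.0

def uniqueness(records, field):
    """Share of records whose key value is not duplicated."""
    vals = [r[field] for r in records]
    return len(set(vals)) / len(vals)

print(completeness(records, "email"))      # 2 of 3 emails non-empty
print(validity(records, "email", EMAIL))   # 1 of 2 non-empty emails valid
print(uniqueness(records, "id"))           # ids 1, 2, 2 -> 2 of 3 unique
```

Scoring each dimension separately, rather than one blended "quality" number, is what lets teams track improvement per dimension over time.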
KM Impact Challenge - Sharing findings of synthesis report
kmimpactchallenge
The document provides lessons learned from 47 case stories on monitoring and evaluation systems for knowledge management projects. It discusses keeping systems simple, being realistic about time requirements, developing systems as part of project planning, creating shared visions and expectations, focusing on relevant and actionable indicators, investing in facilitation skills, identifying appropriate data collection methods, managing and analyzing qualitative data, focusing on users, and selecting indicators that balance contextualization with aggregation.
Introduction to the workshop Services to support FAIR data - Sarah Jones
OpenAIRE
The document summarizes a series of three workshops aimed at discussing services to support FAIR data. The first workshop took place in Prague on April 12, 2019 and focused on service providers and research infrastructures. The second workshop was in Vienna on April 24, 2019 and targeted research support staff and researchers. The third workshop will be in Porto on September 17, 2019 for service users and providers. The workshops seek to explore existing services and gaps to better support FAIR data practices and interoperability between services and infrastructures. A white paper on recommendations will be produced for the EOSC working group on FAIR.
Turning FAIR into Reality: Briefing on the EC’s report on FAIR data
dri_ireland
DRI Director Natalie Harrower, a member of the European Commission's Expert Group on FAIR (Findable, Accessible, Interoperable and Re-usable) data, delivered a lunchtime briefing on the recently published 'Turning FAIR into Reality' report on Tuesday 26 February in the Royal Irish Academy, Dublin.
In 2016 the FAIR Data Principles were developed to support the position that effective research data management is ‘not a goal in itself but rather is the key conduit leading to knowledge discovery and innovation’. The new publication is both a report and an action plan for turning FAIR into reality. It offers a survey and analysis of what is needed to implement FAIR and it provides a set of concrete recommendations and actions for stakeholders in Europe and beyond.
The briefing provided an overview of the contents of the report, which include the principles of FAIR, as well as the elements required to implement FAIR data.
This session will demystify (generative) AI by exploring its workings as an advanced statistical modelling tool (suitable for any level of technical knowledge). Not only will this session explain the technological underpinnings of AI, it will also address concerns and (long-term) requirements around ethical and practical usage of AI. This includes data preparation and cleaning, data ownership, and the value of data generated, but not owned, by libraries. It will also discuss the potential for (hypothetical) use cases of AI in collections environments and making collections data AI-ready, providing examples of AI capabilities and applications beyond chatbots.
Cath Dishman, Cenyu Shen, Katherine Stephan
Although scholarly communications has become more open, problems with predatory and problematic publishers remain. There are commercial providers of lists, start-up or renegade Internet lists of the “good” and the “bad”, and the researchers, publishers, and assessors who try to understand what being on (or off) a list means for themselves, their careers, and their institutions. Still, these problems persist and leave many asking: where is the list?
Eva Mendez presents the latest developments for the Metadata 2020 collaboration at APE 2018. Updates include a summary of community group challenges and opportunities, and projects that will be launched in 2018.
Metadata 2020: Harnessing PID-power for the greater good.
This presentation outlines the key metadata challenges for each scholarly communications community as identified by Metadata 2020; and introduces new areas of focus for 2018.
This document discusses community approaches to open data at scale. It describes the Metadata 2020 collaboration, which aims to promote richer, connected, reusable open metadata. It outlines several projects undertaken by Metadata 2020 working groups to address challenges around metadata quality, standards, and incentives. The document also summarizes two talks on improving metadata pipelines for SHARE and the evolution of metadata curation at Dryad data repository. It discusses Dryad's integration of manuscript and data submission as well as efforts to enhance interoperability, data citation, and the proposed Data Curation Network model.
If You Tag it, Will They Come? Metadata Quality and Repository ManagementSarah Currier
Presentation to Metadata Perspectives 2009, a conference held in Vienna, Austria in November 2009.
When we build collections of scholarly works, learning materials, or other educational "stuff", we want people to be able to find it. This raises a number of problems, including ensuring that resources are tagged with adequate metadata. In 2004 a pioneering paper on this issue noted:
"At its best, “accurate, consistent, sufficient, and thus reliable” (Greenberg & Robertson, 2002) metadata is a powerful tool that enables the user to discover and retrieve relevant materials quickly and easily and to assess whether they may be suitable for reuse. At worst, poor quality metadata can mean that a resource is essentially invisible within the repository and remains unused." (Currier et al, 2004).
Have the five years since the above-quoted paper was published borne out its prediction: that simply expecting resource authors to create their own metadata at upload would lead to metadata of insufficient quality? Have repository managers been able to persuade funders that including professional metadata augmentation is worth the money? What has been the impact of recent Web developments allowing easier exposure, searching and sharing of resources? How is metadata being treated within the emerging domain of open educational resources? And what does all this mean for repository managers wanting to increase the discoverability of their resources, and to implement workflows for creation of good quality metadata?
Currier, S. et al (2004) Quality assurance for digital learning object repositories: issues for the metadata creation process, ALT-J, Research in Learning Technology, Vol. 12, No. 1, March 2004
http://repository.alt.ac.uk/616/1/ALT_J_Vol12_No1_2004_Quality%20assurance%20for%20digital%20.pdf
Greenberg, J. & Robertson, W. (2003) Semantic web construction: an inquiry of authors’ views on collaborative metadata generation, Proceedings of the International Conference on Dublin Core and Metadata for e-Communities 2002, 45–52.
http://dcpapers.dublincore.org/ojs/pubs/article/viewArticle/693
The document discusses optimizing content findability. It emphasizes the importance of governance, organization, user involvement, and metadata to improve search and findability. Successful organizations allocate resources to analyze search usage and improve information architecture through taxonomy and metadata. User testing, feedback loops, and search analytics are also recommended to enhance findability.
This document summarizes a presentation on FAIRsharing, a registry of interlinked standards, repositories, and policies that aims to increase guidance on finding and using research data resources. It discusses how FAIRsharing tracks the evolution of recommended resources, finds discrepancies between explicit and implicit recommendations, and works with projects to develop FAIR evaluation tools and guidelines. The goal is to help researchers, publishers, and others discover, select, and implement FAIR data standards and policies to accelerate discovery.
This presentation was provided by Chris Erdmann of Library Carpentries and by Judy Ruttenberg of ARL during the NISO virtual conference, Open Data Projects, held on Wednesday, June 13, 2018.
Metadata mapping and vocabulary: consistency for all in scholarly communicati...CILIP MDG
This document summarizes several projects and initiatives from Metadata2020, a collaboration that aims to improve metadata practices in scholarly communication. It outlines challenges and opportunities around metadata mapping, defining common terminology, and establishing best practices. Specific projects discussed include developing recommendations for shared metadata elements and mappings across schemas, creating a glossary of metadata terms, and defining principles for using metadata across the research workflow to facilitate interoperability. The document encourages participation and promotion of Metadata2020's efforts to improve metadata consistency.
The document summarizes Susanna-Assunta Sansone's presentation on enabling FAIR (Findable, Accessible, Interoperable, Reusable) digital resources. It discusses the driving forces behind FAIR including reproducibility crises, new data types, and changing publishing. It then outlines community efforts to develop standards, policies, and tools to improve metadata and data sharing according to FAIR principles. These include domain-specific standards, the FAIRsharing registry, metrics to assess FAIRness, and ongoing work to provide FAIR guidance and services.
Knowledge Management in Healthcare AnalyticsGregory Nelson
The promise of actionable analytics in healthcare poses an inherent challenge as we seek to accelerate the time it takes to go from question to insight to action. The velocity of change, the demand for bigger data, the allure of advanced algorithms, the need for deeper insights, and the cost of inaction make knowledge capture and reuse an all too allusive goal.
In an evolving environment, healthcare organizations need to find ways to make greater use of prior investments in analytics products by reusing the commonalities of proven designs, metadata, business rules, captured learnings, and collaborative insights and applying them to future analytics products. By doing so in a strategic manner, they will be able to create rapid and efficient analytics processes and better manage time to value and reuse.
In this presentation, authors from two very different health systems with two very different patient populations will share their perspectives of the value of knowledge management and discuss the role of analytics in driving towards a learning health system. The authors will highlight opportunities and challenges using examples across clinical, financial, and operational domains.
How to Optimize Your Metadata and TaxonomyIXIASOFT
1. The document discusses how to optimize metadata and taxonomy by creating a content strategy plan, determining key metrics, applying metadata to content, and communicating results to stakeholders.
2. It outlines the key steps: create a content strategy plan, determine metrics to measure goals, apply metadata to content using a metadata schema, and communicate results using reports and queries.
3. Applying metadata according to the strategy helps users find content and measures strategy success, while communicating results builds trust and credibility with stakeholders.
This presentation was provided by Kristi Holmes of Northwestern University during the NISO hot topic virtual conference "Effective Data Management," which was held on September 29, 2021.
INSERM Workshop 246 - Management and reuse of health data: methodological issues: https://ateliersinserm.dakini.fr/en/workshop.246.management.and.reuse.of.health.data.methodological.issues-66-22.php
Presented at http://mcbios-maqc.org. The FAIR Principles have propelled the global debate in all disciplines about better RDM and transparent, reproducible data worldwide. FAIR has de facto become a global norm for good RDM, a prerequisite for data science, since their endorsement by global and intergovernmental leaders. Funding bodies are consolidating FAIR into their funding agreements; publishers have united behind FAIR as a way to remain at the forefront of open research; and in the private sector FAIR is adopted and enshrined in policy in major biopharmas, libraries, and unions. FAIR is changing the culture of data science, but work is needed to turn the principles into reality. I will use the work of the FAIRplus project as an exemplar to illustrate challenges and progress.
Applying a User-Centered Design Approach to Improve Data Use in Decision Making (MEASURE Evaluation)
This document summarizes the application of a user-centered design approach to improve data use in decision making. Key activities included conducting immersion interviews with data users, holding design workshops to understand barriers and generate ideas, and prototyping solutions. Some prototypes developed included a digital portal for accessing data and policies, a social media platform for communication, and data use scorecards for facilities. The process identified technical, behavioral, and organizational barriers to data use and provided lessons on engaging stakeholders and testing prototypes.
Conformed Dimensions of Data Quality – An Organized Approach to Data Quality ... (DATAVERSITY)
Are you looking to measure Data Quality in a more organized way? Look no further, use the Conformed Dimensions of Data Quality to organize your efforts, improve communication with stakeholders and track improvement over time. In this webinar, Information Quality practitioner Dan Myers will present the Conformed Dimensions of Data Quality framework along with the complete results of the 3rd Annual Dimensions of Data Quality survey. This presentation will provide the first view of the 2017 results, and all attendees will receive the associated whitepaper free.
In this webinar you will learn:
Why organizations use the Dimensions of Data Quality
Why there are so many options, and what he recommends you use
3rd Annual Survey data about how frequently organizations use the dimensions and specifically which dimensions are most used
Industry trends in adoption and more resources on the topic
KM Impact Challenge - Sharing findings of synthesis report (kmimpactchallenge)
The document provides lessons learned from 47 case stories on monitoring and evaluation systems for knowledge management projects. It discusses keeping systems simple, being realistic about time requirements, developing systems as part of project planning, creating shared visions and expectations, focusing on relevant and actionable indicators, investing in facilitation skills, identifying appropriate data collection methods, managing and analyzing qualitative data, focusing on users, and selecting indicators that balance contextualization with aggregation.
Introduction to the workshop Services to support FAIR data - Sarah Jones (OpenAIRE)
The document summarizes a series of three workshops aimed at discussing services to support FAIR data. The first workshop took place in Prague on April 12, 2019 and focused on service providers and research infrastructures. The second workshop was in Vienna on April 24, 2019 and targeted research support staff and researchers. The third workshop will be in Porto on September 17, 2019 for service users and providers. The workshops seek to explore existing services and gaps to better support FAIR data practices and interoperability between services and infrastructures. A white paper on recommendations will be produced for the EOSC working group on FAIR.
Turning FAIR into Reality: Briefing on the EC’s report on FAIR data (dri_ireland)
DRI Director Natalie Harrower, a member of the European Commission's Expert Group on FAIR (Findable, Accessible, Interoperable and Re-usable) data, delivered a lunchtime briefing on the recently published 'Turning FAIR into Reality' report on Tuesday 26 February in the Royal Irish Academy, Dublin.
In 2016 the FAIR Data Principles were developed to support the position that effective research data management is ‘not a goal in itself but rather is the key conduit leading to knowledge discovery and innovation’. The new publication is both a report and an action plan for turning FAIR into reality. It offers a survey and analysis of what is needed to implement FAIR and it provides a set of concrete recommendations and actions for stakeholders in Europe and beyond.
The briefing provided an overview of the contents of the report, which include the principles of FAIR, as well as the elements required to implement FAIR data.
Similar to Metadata En Croûte: How to make metadata more appetizing to decision makers
This session will demystify (generative) AI by exploring its workings as an advanced statistical modelling tool (suitable for any level of technical knowledge). Not only will this session explain the technological underpinnings of AI, it will also address concerns and (long-term) requirements around ethical and practical usage of AI. This includes data preparation and cleaning, data ownership, and the value of data-generated - but not owned - by libraries. It will also discuss the potentials for (hypothetical) use cases of AI in collections environments and making collections data AI-ready; providing examples of AI capabilities and applications beyond chatbots.
Christina Dinh Nguyen, University of Toronto Mississauga Library
In the world of digital literacies, liaison and instructional librarians are increasingly coming to terms with a new term: algorithmic literacy. No matter the liaison or instruction subjects – computer science, sociology, language and literature, chemistry, physics, economics, or other – students are grappling with assignments that demand a critical understanding, or even use, of algorithms. Over the course of this session, we’ll discuss the term ‘algorithmic literacies,’ explore how it fits into other digital literacies, and see why it, as a curriculum, might belong at your library. We’ll also look at some examples of practical pedagogical methods you can implement right away, depending on what types of AL lessons you want to teach, and who your patrons are. Lastly, we’ll discuss how librarians should view themselves as co-learners when working with AL skills. This session seeks to bring together participants from across the different libraries, with diverse missions/visions/mandates, to explore ways we can all benefit from teaching AL. If time permits, we may discuss how text and data librarians (functional specialists) can support the development of this curriculum.
David Pride, The Open University
In this paper, we present CORE-GPT, a novel question-answering platform that combines GPT-based language models and more than 32 million full-text open access scientific articles from CORE. We first demonstrate that GPT3.5 and GPT4 cannot be relied upon to provide references or citations for generated text. We then introduce CORE-GPT, which delivers evidence-based answers to questions, along with citations and links to the cited papers, greatly increasing the trustworthiness of the answers and reducing the risk of hallucinations.
Cath Dishman, Cenyu Shen, Katherine Stephan
Although scholarly communications has become more open, problems with predatory and problematic publishers remain. There are commercial providers of lists, start-up/renegade Internet lists of good/bad, and the researchers, publishers and assessors that try to understand and process what being on/off a list means for themselves, their careers and their institutions. Still, these problems persist and leave many asking: where is the list?
This plenary panel will discuss the problems of “predatory” publishing and what, if anything, publishers, our community and researchers can do to try and help minimise their prevalence and impact.
Beth Montague-Hellen, Francis Crick Institute, Katie Fraser, University of Nottingham
Open Access is a foundational topic in Scholarly Communications. However, when information professionals and publishers talk about its future, it is nearly always Gold open access we discuss. Green was seen as the big solution for providing access to those who couldn’t afford it. However, publishers have protested that Green destroys their business models. How true is this, and are we even all talking the same language when we talk about Green?
Chris Banks, Imperial College London, Caren Milloy, Jisc,
Transitional agreements were developed in response to funder policy and institutional demand to constrain costs and facilitate funder compliance. They have since become the dominant model by which UK research outputs are made open access. In January 2023, Jisc instigated a critical review of TAs and the OA landscape to provide an evidence base to inform a conversation on the desired future state of research dissemination. This session will discuss the key findings of the review and its impact on a sector-wide consultation and concrete actions in the UK and beyond.
Michael Levine-Clark, University of Denver, Jason Price, SCELC Library Consortium
As transformative agreements emerge as a new standard, it is critical for libraries, consortia, publishers, and vendors to have consistent and comprehensive data – yet data around publication profiles, authorship, and readership has been shown to be highly variable in availability and accuracy. Building on prior research around frameworks for assessing the combined value of open publishing and comprehensive read access that these deals provide, we will address multi-dimensional perspectives to the challenges that the industry faces with the dissemination, collection, and analysis of data about authorship, readership, and value.
Hylke Koers, STM Solutions
Get Full Text Research (GetFTR) launched in 2020 with the objective of streamlining discovery and access of scholarly content in the many tools that researchers use today, such as Dimensions, Semantic Scholar, Mendeley, and many others. It works equally well for open access content as it does for subscription-based content, providing researchers with recognizable buttons and indicators to get them to the most up-to-date version of content with minimal effort. Currently, around 30,000 OA articles are accessed every day via GetFTR links.
Gareth Cole, Loughborough University, Adrian Clark, Figshare
Researchers face more pressure to share their research data than ever before, owing to a rise in funder policies and momentum towards greater openness across the research landscape. Although policies for data sharing are in place, librarians undertake engagement work to ensure repository uptake and compliance.
We will discuss a particular strategy implemented at Loughborough University that involved the application of conceptual messaging frameworks to engagement activities in order to promote and encourage use of our Figshare-powered repository. We will showcase the rationale behind the adoption of messaging frameworks for library outreach and some practical examples.
Mark Lester, Cardiff Metropolitan University
This talk will outline how a completely accidental occurrence led to brand new avenues for open research advocacy and reasons for being. This advocacy has occurred within student communities such as trainee teachers, student psychologists and (especially) those soon losing access to subscription-based library content. Alongside these new forms of advocacy, these ethical examples of AI use have begun to form a cornerstone of directly connecting the work of the library to new technology.
Simon Bell, Bristol University Press
The UN SDG Publishers Compact, launched in 2020, was set up to inspire action among publishers to accelerate progress to achieve the Sustainable Development Goals by 2030, asking signatories to develop sustainable practices, act as champions and publish books and journals that will “inform, develop and inspire action in that direction”.
This Lightning Talk will discuss how our new Bristol University Press Digital has been developed as part of our mission to contribute a meaningful and impactful response to this call to action as well as the global social challenges we face.
Using thematic tagging to create uniquely curated themed eBook collections around the Global Social Challenges, Bristol University Press Digital responds directly to the need to provide the scholarly community with access to a comprehensive range of SDG-focused content, while minimising the time and resource required at the institution end to collate content and maintain collection relevance to rapidly evolving themes.
Jenni Adams, University of Sheffield, Ric Campbell, University of Sheffield
Academic researchers are becoming increasingly aware of the need to make data and software FAIR in order to support the sharing and reuse of non-publication outputs. Currently there is still a lack of concise and practical guidance on how to achieve this in the context of specific data types and disciplines.
This presentation details recent and ongoing work at the University of Sheffield to bridge this gap. It will explore the development of a FAIR resource with specialist guidance for a range of data types and will examine the planned development of this project during the period 2023-25
TASHA MELLINS-COHEN, COUNTER & Mellins-Cohen Consulting; JOANNA BALL, DOAJ; YVONNE CAMPFENS, OA Switchboard; ADAM DER, Max Planck Digital Library
Community-led organizations like DOAJ (Directory of Open Access Journals), COUNTER (the standard for usage metrics) and OA Switchboard (information exchange for OA publications) are committed to providing reliable, not-for-profit services and standards essential for a well-functioning global research ecosystem. These organizations operate behind the scenes, with low budgets and limited staffing – no salespeople, marketing teams, travel budgets, or in-house technology support. They collaborate with one another and with bigger infrastructure bodies like Crossref and ORCID, creating the foundations on which much scholarly infrastructure relies.
These organizations deliver value through open infrastructure, data and standards, and naturally services and tools have been built by commercial and not-for-profit groups that capitalize on their open, interoperable data and services – many of which you are likely to recognize and may use on a regular basis.
Hear from the Directors of COUNTER, DOAJ and OA Switchboard, as well as a library leader, on the role of these organizations, the challenges they face and why support from the community is essential to their sustainability.
CAMILLE LEMIEUX
Springer Nature
What is the current state of diversity, equity, and inclusion in the scholarly publishing community? It's time to take a thorough look at the 2023 global Workplace Equity (WE) Survey results. The C4DISC coalition conducted the WE Survey to capture perceptions, experiences, and demographics of colleagues working at publishers, associations, libraries, and many more types of organizations in the global community. Four key themes emerged from the 2023 results, which will be compared to the findings from the first WE Survey conducted in 2018. Recommendations for actions organisations can consider within their contexts will be proposed and discussed.
Rob Johnson, Research Consulting
Angela Cochran, American Society of Clinical Oncology
Gaynor Redvers-Mutton, Biochemical Society
Since 2015, the number of self-published learned societies in the UK has decreased by over a third, with the remaining societies experiencing real-terms revenue declines. All around the world, society publishers are struggling with increased competition from commercial publishers and the rise of open access business models that reward quantity over quality. We will delve into the distinctive position of societies in research, examine the challenges confronting UK and US learned society publishers, and explore actionable steps for libraries and policymakers to support the continued relevance of learned society publishers in the evolving scholarly landscape.
Simon Bell, Clare Hooper, Katharine Horton, Ian Morgan
Over the last few years we have witnessed a seismic shift in the scholarly ecosystem. Three years on from the outset of the COVID pandemic and the establishment of the UN SDG Publishers Compact, this discussion-led presentation will look at how four UK university presses have adopted a consultative and collaborative approach on projects to support their institutional missions and engage with the wider scholarly community, while building on a commitment to make a meaningful difference to society.
This panel discussion will combine the perspectives of four UK based university presses, all with distinct identities and varied publishing programs drawn from humanities, arts and social sciences, yet with a shared recognition and value of the importance to collaborate and co-operate on a shared vision to support accessibility and inclusivity within the wider scholarly community and maintain a rich bibliodiversity.
While research support teams are generally small and specialist in nature, increased demand for their services has been observed across the sector. This is particularly true for teaching-intensive institutions. As a pilot to expand research support across ARU library, the library graduate trainee was seconded to the research services team for a month. This dialogue between the former trainee and manager will discuss what the experience and outcomes of the secondment were from different perspectives. The conversation will also explore the exposure Library and Information Studies students have to research services throughout their degree.
TIM FELLOWS & EMILY WILD, Jisc
Octopus.ac is a UKRI funded research publishing model, designed to promote best practice. Intended to sit alongside journals, Octopus provides a space for researcher collaboration, recording work in detail, and receiving feedback from others, allowing journals to focus on narrative.
The platform removes existing barriers to publishing. It’s an entirely free, open space for researchers, without editorial and pre-publication peer review processes. The only requirement for authors is a valid ORCID iD. Without barriers, Octopus must provide feedback mechanisms to ensure the community can self-moderate. During this session, we’ll explore Octopus’ aims to foster a collaborative environment and incentivise quality.
David Parker, Publisher and Founder, Lived Places Publishing
Dr. Kadian Pow, Lecturer in Sociology and Black Studies & LPP Author, Birmingham City University
Natasha Edmonds, Director, Publisher and Industry Strategy, Clarivate
Library patrons want to search for and locate authors by particular identity markers, such as gender identification, country of origin, sexual orientation, nature of disability, and the many intersectional points that allow an author to express a point-of-view. Artificial Intelligence, skilled web researchers, and data scientists in general struggle to achieve accuracy on single identity markers, such as gender. And what right does anybody have to affix identity metadata to an author other than the authors themselves? And what of the risks in disseminating author identity metadata in electronic distribution platforms and in library catalog systems? Can a "fully informed" author even imagine all the possible misuses of their identity metadata?
5. Metadata: Tech or business concern?
Metadata technology provides raw ingredients... contributing to delicious dishes of powerful and effective business needs: workflows, micro-payments, compliance reporting, business development.
Five White Plates With Different Kinds of Dishes by Pixabay, CC0, from Pexels
Spices Avocado and Ingredients on Table by mali maeder, CC0, from Pexels
6. What is Metadata 2020?
A collaboration that advocates research output metadata that is richer, connected, and reusable/open, to advance scholarly pursuits for the benefit of society.
● RICHER: fuels discoverability & innovation
● CONNECTED: bridges gaps between systems & communities
● REUSABLE / OPEN: eliminates duplication of effort
7. Community-identified challenges
● RESEARCHERS: Metadata entry takes time, and must be entered multiple times
● PUBLISHERS: Establishing streamlined, efficient workflows for metadata is challenging - siloed expertise & unclear prioritization
● REPOSITORIES: Low adoption of metadata efforts creates a tension between quantity & quality of metadata
● LIBRARIANS: Metadata culture is often focused on technical details rather than the bigger system-level picture
● SERVICE PROVIDERS: Interoperability is challenging: inconsistent metadata vocabulary and community standards
8. ● Communities have similar problems and similar solutions available if they collaborate
● Efforts have been made to address challenges within each community, but few efforts have been truly cross-community
● We hope to increase effectiveness and efficiency and avoid duplication of work
9. Projects
● Having identified core concerns for multiple communities, we formed 6 closely related projects in March 2018
● The projects were designed to address the concerns of the community groups
● Projects include participants from different communities
10. Components of a great dish
Planning | Preparing | Presentation
Metadata Mapping and Evaluation
● Metadata recommendations & element mappings
● Metadata evaluation & guidance
Best Practice, Principles, and Definitions for Metadata
● Defining the terms we use about metadata
● Best practices & principles
Researcher Communications & Incentives for Improving Metadata
● Researcher communications
● Incentives for improving metadata quality
Photos by Stokpic, rawpixel.com, and Daria Shevtsova, from Pexels
12. Metadata Recommendations & Element Mappings
Group Lead: Jim Swainston, Emerald Group Publishing
Purpose: To converge communities and publishers towards a shared set of recommended metadata concepts, with related mappings between those recommended concepts and elements in important dialects.
Outputs
● Schema index
● Schema mapping
● Flow diagram
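The element-mapping idea above can be sketched in a few lines. The Dublin Core source elements below are real, but the internal target names and the sample record are invented for illustration; a production crosswalk would cover many more elements and dialects.

```python
# Hypothetical crosswalk between two metadata dialects. The "internal"
# target element names are illustrative assumptions, not a real schema.
DC_TO_INTERNAL = {
    "dc:title": "article_title",
    "dc:creator": "authors",
    "dc:date": "publication_date",
    "dc:identifier": "doi",
    "dc:publisher": "publisher_name",
}

def crosswalk(record: dict, mapping: dict) -> dict:
    """Translate a record from one dialect to another, keeping track
    of source elements that have no known mapping."""
    translated, unmapped = {}, []
    for element, value in record.items():
        target = mapping.get(element)
        if target is not None:
            translated[target] = value
        else:
            unmapped.append(element)
    return {"record": translated, "unmapped": unmapped}

dc_record = {
    "dc:title": "Metadata En Croute",
    "dc:creator": ["Counsell, F."],
    "dc:identifier": "10.1234/example",
    "dc:rights": "CC BY 4.0",
}
result = crosswalk(dc_record, DC_TO_INTERNAL)
print(result["record"])
print(result["unmapped"])  # ["dc:rights"] is flagged for manual review
```

Collecting the unmapped elements, rather than silently dropping them, is what turns a simple rename into a usable mapping tool: the leftovers show exactly where the shared set of recommended concepts still has gaps.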
13. Metadata Evaluation and Guidance
Group Lead: Ted Habermann, Metadata Game Changers
Purpose: To identify and compare existing metadata evaluation tools and mechanisms for connecting the results of those evaluations to clear, cross-community guidance.
Outputs
● Index of evaluation tools
● Element-level best practice notes
● Best practices index
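As a minimal sketch of what an element-level evaluation tool does, the snippet below scores a record's completeness against a list of recommended elements. The element list is an illustrative assumption, not an output of this project:

```python
# Illustrative list of recommended elements; a real evaluation tool
# would draw these from community recommendations, not hard-code them.
RECOMMENDED = ["title", "authors", "doi", "abstract", "license", "funding", "orcid_ids"]

def completeness(record: dict) -> tuple[float, list]:
    """Return the fraction of recommended elements that are present and
    non-empty, plus the list of missing elements for guidance."""
    missing = [e for e in RECOMMENDED if not record.get(e)]
    score = 1 - len(missing) / len(RECOMMENDED)
    return score, missing

record = {"title": "...", "authors": ["..."], "doi": "10.1234/x", "abstract": ""}
score, missing = completeness(record)
print(f"completeness: {score:.0%}, missing: {missing}")
```

Returning the missing elements alongside the score is the "guidance" half of the project's purpose: a bare percentage tells a depositor they have a problem, while the list tells them what to fix.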
15. Defining the Terms We Use About Metadata
Group Lead: Scott Plutchak, University of Alabama at Birmingham (retired)
Purpose: In order to communicate effectively about anything, a common language must be acknowledged, tacitly or purposefully. In the metadata space, there is not agreement on what words like property, term, concept, schema, and title refer to. This project will develop a glossary of words associated with metadata, both for core concepts and disciplinary areas.
Outputs
● Global metadata glossary
16. Shared Best Practices and Principles
Group Leads: Howard Ratner, CHORUS; and Jennifer Kemp, Crossref
Purpose: To build a set of high-level best practices for using metadata across the scholarly communication cycle, in order to facilitate interoperability and easier exchange of information and data across the stakeholders in the process.
Outputs
● Links to best practices & guidelines
● Metadata principles
● Metadata practices / sentiments (principles preamble)
18. Researcher Communications
Group Leads: Alice Meadows, ORCID; Michelle Urberg, ProQuest
Purpose: Exploring ways to align efforts between communities who aim to increase the impact and consistency of communication with researchers about metadata.
Outputs
● Literature review
● Survey results
19. Incentives for Improving Metadata Quality
Group Lead: Fiona Counsell, Taylor & Francis
Purpose: To highlight downstream applications and value of metadata for all parts of the community, telling real stories as evidence of how better metadata will meet their goals.
Outputs
● Metadata personas
● Big benefits
20. The Metadata 2020 Incentives Pyramid (top to base)
● Advancing Research
● Impact | Innovation
● Discoverability | Accessibility
● Reducing Friction | Integrity & Trust
21. Metadata Big Benefits
Discoverability
● Discoverability of research maximises dissemination to create impact
● High-quality metadata = topic content discovery
● Metadata links diverse content & outputs > Connections! Discoveries!
Accessibility
● High-quality metadata provides accessibility to research results
● High-quality metadata utilising standards enables:
○ Curation and custodianship
○ Long-term preservation
○ Machine & human readability
22. Metadata Big Benefits
Reducing Friction
● Metadata standards enable system interoperability
● Interoperability enables efficiency in people and processes
● Greater efficiency leads to higher productivity
● Standards and interoperable systems reduce administrative burden
Integrity & Trust
● Communities will preserve, protect and enhance trust in research
● Research transparency is key to building credibility and integrity
● Provenance metadata > resources & people involved; chain of custody
● Metadata enables reproducibility of research data
23. Metadata Big Benefits
Impact
● Communities/organisations measure impact differently
● All want to position themselves to stay ahead of technology, opportunities & competitors
● Investment = benefits + increased leadership/service reputation
Innovation
● Benefits lead to greater innovation
● New services and business models for existing & start-up orgs
● Catalyst for innovation within research itself
○ New research result trends and connections
○ Increased trust and trust indicators across scientific communities
○ Research method innovation via large-scale text and data mining
24. Metadata Principles
For metadata to support the community, it should be
COMPATIBLE: a guide to content for machines and people
So, metadata must be open, interoperable, parsable, machine
actionable, human readable as possible.
COMPLETE: reflect the content, components and relationships as
published
So, metadata must be as complete and comprehensive as possible.
CREDIBLE: enable content discoverability and longevity
So, metadata must be of clear provenance, trustworthy and accurate.
CURATED: reflect updates and new elements
So, metadata must be maintained over time.
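The COMPLETE principle above can be checked programmatically. As a minimal sketch only (the field names, the split between required and recommended fields, and the scoring are hypothetical illustrations, not part of any Metadata 2020 or Crossref specification), a completeness report over a metadata record might look like:

```python
# Sketch of a metadata completeness check.
# Field names below are hypothetical examples, not a formal schema.

REQUIRED_FIELDS = ["title", "doi", "authors", "publication_date"]
RECOMMENDED_FIELDS = ["abstract", "orcid_ids", "license", "references"]

def completeness_report(record: dict) -> dict:
    """Report which required and recommended fields a record is missing."""
    missing_required = [f for f in REQUIRED_FIELDS if not record.get(f)]
    missing_recommended = [f for f in RECOMMENDED_FIELDS if not record.get(f)]
    total = len(REQUIRED_FIELDS) + len(RECOMMENDED_FIELDS)
    filled = total - len(missing_required) - len(missing_recommended)
    return {
        "missing_required": missing_required,
        "missing_recommended": missing_recommended,
        "score": filled / total,
    }

record = {
    "title": "An Example Article",
    "doi": "10.1234/example",
    "authors": ["A. Researcher"],
    "publication_date": "2019-04-08",
    "abstract": "",  # present but empty counts as missing
}
report = completeness_report(record)
print(report["missing_required"])     # []
print(report["missing_recommended"])  # ['abstract', 'orcid_ids', 'license', 'references']
```

A report like this is the kind of check that participation-report tooling can surface to members, flagging which optional fields (abstracts, ORCID iDs, licences) a record still lacks.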
25. Personas
● Creators
Those who provide descriptive information (metadata) about research
and scholarly outputs
● Curators
Those who classify, normalize and standardize this descriptive
information to increase its value as a resource
● Custodians
Those who store and maintain this descriptive information and make it
available for Consumers
● Consumers
Those who knowingly or unknowingly use the descriptive information to
find, discover, connect and cite research and scholarly outputs
26. Adopt a persona
● What innovations do you wish were available for this
role?
● What impact do you think would come from
better/easier fulfilment of this role?
● What incentives would make it more desirable to
perform this role?
Creators | Curators | Custodians | Consumers
Record your thoughts
http://bit.ly/M2020_UKSGPresentationNOTES
27. How well served are you?
Rate each attribute on two scales of 1-5:
● Importance - how important is the attribute? (1 = least, 5 = most)
● Well served? - how well do current tools and processes serve you? (1 = not at all, 5 = very)
Attributes:
● Metadata Credibility/Accuracy - how correct and understandable the information I provide/use is
● Metadata Completeness - how complete the information is compared to the available fields
● Metadata Compatibility - how compatible the information I provide/use is with metadata found about other outputs, including ones submitted by others
● Metadata Curation/Maintainability - how the metadata that I provide/use is maintained over time
Record your thoughts
http://bit.ly/M2020_UKSGPresentationNOTES
28. Can you help?
● Over 140 individuals are involved in Metadata 2020
● Contribute to Metadata 2020 projects!
Email info@metadata2020.org for details
● Help promote our efforts to the wider community through
your organizations, and social media
● Respond to the survey, and circulate it to your
colleagues/researchers
● Find us as @Metadata2020 on Twitter, Facebook, LinkedIn,
and at metadata2020.org
Beginnings:
Research was conducted in 2017-19 through interviews with people across all communities.
This work confirmed a need for better understanding of the importance of metadata across the multiple communities in scholarly communications.
“Metadata is the means to the end, not the goal. We need to demonstrate the importance of the interconnected whole.” - Metadata 2020 interviewee
The danger of standards: Metadata 2020 is NOT about standards!
“Standards are like toothbrushes; everybody likes the idea of them but everybody wants to use their own.” - Anon
Comic: HOW STANDARDS PROLIFERATE:
SITUATION: There are 14 competing standards
“14?! Ridiculous! We need to develop one universal standard that covers everyone’s use cases.”
Soon… SITUATION: There are 15 competing standards source xkcd.com/927 (CC BY-NC 2.5)
Based on the interviews, we heard about challenges as perceived by those in different parts of the community.
Discoverability
Researchers and the broad scholarly communications community want to drive global and comprehensive dissemination of research results. Discoverability of research maximises dissemination to create impact.
High-quality metadata ensures that researchers, practitioners and policy makers discover topic content. Quality metadata links diverse content and outputs, enabling connections and amplified research discoveries.
Accessibility
The diverse communities in scholarly communications rely on high-quality metadata to make research results accessible. Curation and custodianship, long-term preservation, and both machine and human readability all rely on high-quality metadata utilising standards.
Reducing Friction
Machine readable metadata standards enable interoperability between systems. Interoperability enables efficiency in people and processes for outputs. Greater efficiency leads to higher productivity for all.
‘Capture once’ and automated processes depend on standards and interoperable systems, and are key to reducing administrative burden.
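'Capture once' reuse depends on mapping fields between systems rather than re-keying them. As a hedged sketch (both field vocabularies here are invented for illustration and are not real metadata standards), a crosswalk between an internal schema and a partner system's schema can be expressed as a simple mapping:

```python
# Sketch of a 'capture once' crosswalk between two hypothetical
# metadata schemas; neither field vocabulary is a real standard.

CROSSWALK = {  # internal field -> partner system's field
    "title": "dc_title",
    "doi": "identifier",
    "publication_date": "date_issued",
}

def to_partner_schema(record: dict) -> dict:
    """Re-express a record in the partner schema without re-entering data."""
    return {CROSSWALK[k]: v for k, v in record.items() if k in CROSSWALK}

captured_once = {
    "title": "An Example Article",
    "doi": "10.1234/example",
    "publication_date": "2019-04-08",
    "internal_note": "not shared",  # fields outside the crosswalk are dropped
}
print(to_partner_schema(captured_once))
# {'dc_title': 'An Example Article', 'identifier': '10.1234/example',
#  'date_issued': '2019-04-08'}
```

The point of the sketch is that once metadata is captured in a machine-readable form against a known schema, downstream systems can transform it automatically; the administrative burden of re-entry falls away.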
Integrity & Trust
Communities are strongly motivated to preserve, protect and enhance integrity and trust in the research process. Research transparency is key to building credibility and integrity.
Provenance metadata exposes the resources used and people involved, and the chain of custody of information. Metadata enables reproducibility of research data and an understanding of how to use and validate it.
Impact
Each community and organisation will define and measure impact slightly differently depending on its mission. It might be commercial advantage, operational efficiency, cost savings or simply making better decisions. However, all scholarly communications organisations share the same desire to position themselves for the future and not get left behind by technology, opportunities or competitors.
Through investing in quality metadata organisations can take advantage of the benefits above and build their reputation for leadership and community service.
Innovation
All of the above benefits of improving metadata quality can lead to greater innovation within organisations and scholarly communications itself. Quality metadata gives potential for new services and business models for both existing organisations and new start-ups.
Quality metadata can also be a catalyst for innovation within research itself. It enables hitherto unrecognised trends and connections between research results, and provides the evidence of trust and trust indicators across different scientific communities. Metadata can enable innovation in research method by facilitating large scale text and data mining.
All of which leads to the ultimate benefit of improving metadata quality…