Jo Lambert, Jisc, and Paul Needham, Cranfield University
The success of COUNTER in supporting the adoption of a standard for measuring e-resource usage over the past 15 years is apparent. The prevalence of global OA policies and mandates, and the role of institutional repositories within this context, prompts demand for more granular metrics; it also raises the profile of item-level usage data sharing and research data metrics. Reliable, authoritative measures are key. This burgeoning interest is complemented by a number of initiatives exploring the measurement and tracking of usage of a broad range of objects outside traditional publisher platforms. Drawing on examples such as OpenAIRE, IRUSdata-UK, Crossref's distributed usage logging and DOI event tracker projects, COAR Next Generation Repositories, and IRUS-UK, this session provides an update on progress in this area and discusses some challenges and current approaches to tackling them.
This presentation was provided by Paul Needham of Cranfield University and Johan Bollen of Indiana University, during the NISO webinar "Measuring Use, Assessing Success, Part Two: Count Me In: Measuring Individual Item Usage," which was held on September 15, 2010.
The document summarizes the PIRUS 2 project which aims to develop a global standard for measuring online usage of individual journal articles. It discusses how PIRUS 2 builds on PIRUS 1 by developing COUNTER-compliant usage reports at the article level. The project has gathered usage statistics from several publishers and repositories and loaded them into a prototype central clearing house demonstrator. Next steps include expanding participation and further developing the user interface and SUSHI server to re-expose article-level usage statistics.
The document discusses an "information sharing pipeline" which would allow real-time sharing of identity information between institutions and applications. It describes existing standards like WebSub, ActivityPub and Linked Data Notifications that could enable this. The presenters argue for adopting these open standards to allow decentralized data ownership and exchange, rather than relying on centralized systems. Challenges include political and intellectual property issues, but benefits include breaking down data silos and enabling innovation. The document outlines the key components of various standards and evaluates their potential for building an information sharing pipeline.
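As a concrete illustration of how such a pipeline could exchange events, here is a minimal sketch of a Linked Data Notification payload built in Python. The actor, object, and inbox URLs are invented placeholders, not real endpoints, and this shows only the message shape, not a full LDN sender.

```python
import json

def build_ldn_announcement(actor_id, object_id, target_inbox):
    """Build a minimal Linked Data Notification payload (JSON-LD).

    The IRIs passed in are illustrative placeholders, not real endpoints.
    """
    notification = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Announce",
        "actor": actor_id,
        "object": object_id,
        "target": target_inbox,
    }
    # An LDN sender would POST this to the target's inbox with
    # Content-Type: application/ld+json
    return json.dumps(notification, indent=2)

payload = build_ldn_announcement(
    "https://repository.example.org",           # hypothetical repository
    "https://repository.example.org/record/1",  # hypothetical item
    "https://inbox.example.org/",               # hypothetical LDN inbox
)
print(payload)
```

Because the payload is plain JSON-LD over HTTP, any repository or application that exposes an inbox can receive it, which is what makes the decentralized pipeline possible.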
The PIRUS project developed a new COUNTER standard for recording, reporting, and consolidating article-level usage statistics from institutional repositories and publishers. Key drivers included the growth of articles in repositories without usage standards and the need for reliable cross-source usage data. The draft PIRUS Code of Practice specifies how to collect, process, and report article usage data in a consistent way. It also proposes a Central Clearing House to consolidate global usage statistics by article for authors and funders. Public feedback on the draft Code is requested through April 2013.
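The kind of consistent processing such a Code of Practice specifies can be sketched in a few lines. This toy example aggregates invented download events into per-article monthly counts, applying a COUNTER-style double-click filter; the DOIs, sessions, and the 30-second window here are illustrative, not normative.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative raw log: (article DOI, session id, timestamp of full-text request).
# All values are made up for the sketch.
events = [
    ("10.5555/demo.1", "sess-a", "2013-04-01T09:00:00"),
    ("10.5555/demo.1", "sess-a", "2013-04-01T09:00:10"),  # double-click, filtered
    ("10.5555/demo.1", "sess-b", "2013-04-02T11:30:00"),
    ("10.5555/demo.2", "sess-a", "2013-04-15T14:00:00"),
]

def monthly_article_report(events, window_seconds=30):
    """Count full-text requests per article per month, collapsing repeat
    requests by the same session within `window_seconds` (a COUNTER-style
    double-click filter)."""
    counts = defaultdict(int)
    last_seen = {}
    for doi, session, ts in sorted(events, key=lambda e: e[2]):
        t = datetime.fromisoformat(ts)
        key = (doi, session)
        if key in last_seen and (t - last_seen[key]).total_seconds() <= window_seconds:
            last_seen[key] = t  # still counts as activity for the window
            continue
        last_seen[key] = t
        counts[(doi, t.strftime("%Y-%m"))] += 1
    return dict(counts)

report = monthly_article_report(events)
print(report)
# {('10.5555/demo.1', '2013-04'): 2, ('10.5555/demo.2', '2013-04'): 1}
```

The point of a shared Code of Practice is that every repository and publisher applies the same filtering and aggregation rules, so a Central Clearing House can safely sum the resulting per-article counts.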
A user journey in OpenAIRE services through the lens of repository managers (II – OpenAIRE dashboard for content providers, usage statistics and the catch-all broker service). OpenAIRE-connect & OpenAIRE Advance workshop at the Open Repositories Conference, June 10, 2019, Hamburg.
The document discusses next generation repositories that aim to improve discovery, interoperability, and functionality of existing repository systems. It identifies key priorities like exposing identifiers, enabling batch and navigation discovery, supporting user interactions through annotation and commenting, and collecting usage activities. Technologies like ResourceSync and Signposting are highlighted to enhance areas like notification and metadata exposure. The goal is a global network of interoperable repositories that empower open scholarship.
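Signposting conveys relations like these as typed links in HTTP Link headers. The sketch below parses one such header with a minimal stdlib-only parser; the URIs are placeholders, and a production client should use a full RFC 8288 implementation rather than this simplified one.

```python
import re

# Example Signposting-style Link header a repository landing page might emit;
# the URIs are placeholders.
link_header = (
    '<https://doi.org/10.5555/demo.1>; rel="cite-as", '
    '<https://repo.example.org/record/1.pdf>; rel="item"; type="application/pdf", '
    '<https://repo.example.org/record/1.json>; rel="describedby"'
)

def parse_link_header(value):
    """Parse an HTTP Link header into {rel: [target, ...]}.

    Handles well-formed headers only; a full RFC 8288 parser should be
    used in production code.
    """
    rels = {}
    for target, params in re.findall(r'<([^>]+)>((?:;\s*[^,<]+)*)', value):
        m = re.search(r'rel="([^"]+)"', params)
        if m:
            # rel may carry multiple whitespace-separated values
            for rel in m.group(1).split():
                rels.setdefault(rel, []).append(target)
    return rels

links = parse_link_header(link_header)
print(links["cite-as"])  # ['https://doi.org/10.5555/demo.1']
```

A crawler or aggregator can follow `cite-as` to the persistent identifier, `item` to the content file, and `describedby` to the metadata record, which is exactly the machine-navigable exposure the next-generation-repositories vision calls for.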
This presentation by Judith Coffey Russell, Dean of University Libraries, University of Florida and Alicia Wise, Director of Universal Access, Elsevier describes expanding access to publications by University of Florida authors through the university's institutional repository using ScienceDirect supplied data and links. See the webcast at https://www.brighttalk.com/webcast/9995/125071.
This presentation was provided by Kathleen Shearer of COAR, during the NISO Event "Open Access: The Role and Impact of Preprint Servers," held November 14 - 15, 2019.
This presentation was provided by Todd Digby and Robert Phillips of the University of Florida during the NISO Virtual Conference held on Feb 15, 2017, entitled Institutional Repositories: Ensuring Yours is Populated, Useful and Thriving.
Online Journal Management using Open Journal Systems (OJS), by Ina Smith
This document provides an overview of using Open Journal Systems (OJS) for online journal management. OJS is an open source journal management and publishing system that allows journals to accept submissions, peer review, edit and publish articles online. It has benefits such as being locally controlled, providing online submission and management tools, and building capacity for journals with fewer resources. The document discusses implementation of OJS, training, and continued support available through organizations like ASSAf and PKP. It also covers topics like registering with indexes, rights management, analytics and measuring impact.
This document provides an overview of using Open Journal Systems (OJS) for online journal management. OJS is an open source journal management and publishing system that allows journals to accept submissions, peer review, edit and publish articles online. It has benefits such as being locally controlled, providing online submission and management tools, and building capacity for journals with fewer resources. The document discusses implementing and customizing OJS, ensuring academic integrity of journals, registering with indexes, and measuring journal impact.
OpenAIRE provide dashboard #OpenAIREweek2020, by Pedro Príncipe
OpenAIRE provide session at the OpenAIRE week 2020 - A user journey in OpenAIRE provide - services and the interoperability guidelines, by Pedro Principe
COUNTER has three new developments:
1) Draft Release 4 of the COUNTER Code of Practice is available for public comment to improve usage reporting for all e-resources.
2) A draft Code of Practice for a new Journal Usage Factor measure is under review to provide broader journal impact data.
3) The PIRUS project report proposes a standard for recording and reporting article-level usage globally from repositories and publishers.
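The Journal Usage Factor in item 2 can be illustrated with a toy calculation. Assuming a median-based measure over per-article usage counts (the draft proposed a median rather than a mean; the figures below are invented):

```python
from statistics import median

# Invented full-text download counts, one entry per article in a journal
downloads_per_article = [3, 7, 12, 0, 5, 41, 6, 9]

def usage_factor(counts):
    """Median usage per article: a sketch of the Journal Usage Factor idea.
    A median is less sensitive than a mean to a few heavily used articles."""
    return median(counts)

print(usage_factor(downloads_per_article))  # 6.5
```

Note how the single outlier (41) barely moves the result, whereas a mean would be pulled up sharply; that robustness is the usual argument for a median-based usage measure.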
The document summarizes an update on the NISO Open Discovery Initiative standards. It provides an overview of the ODI, which defines recommendations for data exchange between libraries, content providers, and discovery service vendors. The ODI aims to help libraries assess content provider participation in discovery services and ensure fair and unbiased indexing. It also outlines the roles and responsibilities of each party to ensure transparency and conformance with ODI practices. Recent updates to the ODI recommended practice in 2020 focused on metadata elements, fair linking, open access indicators, and statistical reporting.
The arrival and enormous growth of digital content has fundamentally changed the way content is made available to library users. In recent years, libraries have been acquiring more and more electronic resources (e-resources) because of perceived benefits such as easy access to information and comprehensiveness. Due to this influx, the collection, acquisition, and maintenance of e-resources have become complicated issues, forcing libraries to devise strategies to manage and deliver them conveniently. "Management of E-resources", or Electronic Resource Management (ERM), has therefore become a challenge for library professionals that needs to be addressed through research and practice. To meet these challenges, library professionals and content providers have developed the Electronic Resource Management System (ERMS) to manage e-resources in a more systematic way.
Congresso Sociedade Brasileira de Computação CSBC2016 Porto Alegre (Brazil)
Workshop on Cloud Networks & Cloudscape Brazil
José Luiz Ribeiro Filho, Director of Services and Solutions of the Brazilian National Education and Research Network (RNP), Brazil
Cloud Federation & Open Science Cloud at cross-regional level
Implementing web scale discovery services: special reference to Indian Libraries, by Nikesh Narayanan
Web scale discovery services are becoming the widely adopted information retrieval solution in libraries across the world, connecting patrons with the relevant information they seek. In line with this global trend, resource discovery solution implementation is also gathering momentum in Indian libraries.
Considering the Indian Libraries scenario, this paper attempts to provide an overview of Library Web Scale Discovery solutions, its need in Indian Libraries, important parameters to be considered for evaluation of Discovery Services, essential factors to be considered prior to implementation, stages of implementation and finally some thoughts on post implementation analysis for measuring the success.
Voxxed Athens 2018 - Eventing, Serverless, and the Extensible Enterprise, by Voxxed Athens
The document discusses event-driven application architectures on Microsoft Azure. It provides an overview of Azure messaging and eventing services like Event Hubs, Service Bus, Event Grid, and IoT Hub. It also discusses how serverless architectures on Azure using functions can enable reactive and event-driven applications. Building and device management scenarios are provided as examples of how sensor data from IoT devices could be processed and reacted to based on events.
OpenAIRE Open Innovation call: Next Generation Repositories, by OpenAIRE
1) Current institutional repositories have issues with usability, interoperability, and acting primarily as silos for individual institutions' data.
2) The vision for next generation repositories is to position them as part of a globally networked infrastructure for scholarly communication, with the resources themselves, rather than the repositories, becoming the focus of services.
3) Key areas discussed for next generation repositories include improved resource discovery and content transfer using ResourceSync and Signposting, generating open usage metrics through a usage hub, and enabling annotation of content through web annotation protocols.
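The annotation capability in item 3 builds on the W3C Web Annotation model. The sketch below constructs a minimal annotation of that shape; the item URI and ORCID are placeholders.

```python
import json

def make_annotation(target_uri, comment_text, creator):
    """Minimal W3C Web Annotation (a comment on a repository item).
    URIs and names passed in are illustrative placeholders."""
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "motivation": "commenting",
        "creator": creator,
        "body": {
            "type": "TextualBody",
            "value": comment_text,
            "format": "text/plain",
        },
        "target": target_uri,
    }

anno = make_annotation(
    "https://repo.example.org/record/1",       # hypothetical repository item
    "Figure 2 has an updated dataset.",
    "https://orcid.org/0000-0000-0000-0000",   # placeholder ORCID iD
)
print(json.dumps(anno, indent=2))
```

Because the annotation targets the resource's URI rather than a repository page, it follows the same resource-centric principle as the rest of the next-generation-repositories vision.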
As more resources are indexed online and as more researchers begin their quest in a digital environment, unique local collections and institutional repositories play an ever more important role. The development of standards for these materials and ensuring their long-term preservation is crucial. Please join the Standards Committee and the Holdings Committee to learn more about RDA for Non-MARC testers. Discover how the PIRUS2 project (Publisher and Institutional Repository Usage Statistics) is enabling the recording and reporting of articles hosted by aggregators or in repositories. Learn how preservation standards can ensure the long-term protection of digital collections.
Presentation by Todd Carpenter given at the American Library Association Conference on June 25, 2017 about the Resource Access in the 21st Century (RA21) project. The RA21 project is focused on improving the access control systems for digital content subscribed to by libraries.
News from Elsevier: citations, metrics, collaboration (Novinky u Elsevier: Citace, metriky, spolupráce), by KnihovnaUTB
The document discusses new features and updates from Elsevier, including Mendeley, Scopus, and ScienceDirect. It summarizes:
Mendeley now offers institutional editions with more storage space, teams, collaborators, and analytics dashboards. A new certification program provides Mendeley Premium upgrades for librarians.
Scopus is re-evaluating journal content to ensure high quality. It is expanding cited references back to 1996 and books to provide better coverage. New metrics and APIs will integrate article-level data and citations into Scopus.
ScienceDirect is working with institutional repositories through new APIs to retrieve metadata, check access entitlements, and retrieve full-text content in order to better support sharing.
Presentation from ALA Midwinter 2014 on Elsevier's new Text and Data Mining Policy, by Chris Shillum
- Elsevier has introduced a new policy that allows academic researchers to text and data mine subscribed content on ScienceDirect for non-commercial purposes through the ScienceDirect APIs. Researchers can share text-mining outputs publicly if they contain snippets of up to 200 characters and are attributed back to the original content.
- Text and data mining involves using natural language processing and analytical methods to extract structured information and discover patterns from unstructured text sources. It is becoming an essential tool in fields like biology and neuroscience.
- Elsevier piloted their text and data mining policy with 30 academic customers to understand use cases and challenges. Most requests fell into answering specific research questions or building shared resources. Researchers faced technical, functional, and legal challenges.
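The 200-character snippet rule described above is easy to enforce mechanically. A minimal sketch, assuming a simple truncate-and-attribute workflow (the DOI is a placeholder):

```python
def shareable_snippet(text, source_doi, limit=200):
    """Truncate an extracted passage to the policy's snippet limit and
    attach attribution. The 200-character figure comes from the policy
    described above; the DOI is a placeholder."""
    snippet = text if len(text) <= limit else text[:limit - 1] + "…"
    return {"snippet": snippet, "source": f"https://doi.org/{source_doi}"}

out = shareable_snippet("A" * 500, "10.5555/demo.1")
print(len(out["snippet"]))  # 200
```

A real text-mining pipeline would apply a check like this at publication time, so that shared outputs stay within the licensed snippet length while the attribution link preserves the route back to the full article.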
UiPath Test Automation using UiPath Test Suite series, part 5, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Building Production Ready Search Pipelines with Spark and Milvus, by Zilliz
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data into vector representations and push the vectors to Milvus for search serving.
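The shape of that pipeline can be sketched without the heavy dependencies. In the stdlib-only toy below, a bag-of-words vector stands in for Spark's embedding step and a brute-force cosine search stands in for a Milvus collection query; it shows only the flow, not real Spark or Milvus APIs.

```python
import math
from collections import Counter

# Toy document corpus standing in for data ingested by a Spark ETL job
docs = {
    "doc1": "spark processes unstructured data at scale",
    "doc2": "milvus serves vector similarity search",
    "doc3": "etl pipelines feed the serving stack",
}

def embed(text):
    """Toy bag-of-words vector; in the real pipeline this step would run
    in Spark with an actual embedding model."""
    return Counter(text.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Brute-force scan standing in for a Milvus vector search
index = {doc_id: embed(text) for doc_id, text in docs.items()}
query = embed("vector search with milvus")
best = max(index, key=lambda d: cosine(query, index[d]))
print(best)  # doc2
```

In the production version, the `embed` step runs distributed in Spark and the vectors are bulk-inserted into a Milvus collection, whose approximate-nearest-neighbor index replaces the linear scan shown here.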
Similar to CrossRef Distributed Usage Logging Pilot
Driving Business Innovation: Latest Generative AI Advancements & Success Story, by Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Threats to mobile devices are increasingly prevalent and growing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect personal devices and information.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
1. CrossRef Distributed Usage Logging Pilot
SSP Fall Seminar
Presented by Victoria Rao
September 16, 2015
2. https://library.uwinnipeg.ca/scholarly-communication/index.html
4. Distributed Usage Reality
Researchers are increasingly using “alternative” (non-publisher) platforms to store, access and share the literature:
• Institutional and subject repositories
• Aggregator platforms (EBSCOhost, IngentaConnect)
• Researcher-oriented social-networking sites (e.g. Academia.edu, ResearchGate, Mendeley)
• Reading environments and tools (e.g. ReadCube, Utopia Documents)
• …
5. CrossRef DET (DOI Event Tracking)
• CrossRef DET: a common “event” repository, scaling the existing Lagotto infrastructure to cover all DOI-based events – in scope of the pilot
• Focus on capturing all kinds of user–content interactions using a standardized message envelope (easily customizable for various types of events) – data collection
• Data propagation and distribution
DET Pilot White Paper
6. CrossRef DET and Distributed Usage Logging
Focus on:
• DET – capturing any and all types of user–content interactions
• DUL – COUNTER usage events occurring outside the publisher platforms, processed via the publisher’s COUNTER-compliant usage reporting streams
Organized as:
• Two technical groups (DET and DUL)
• Two pilots to demonstrate technical feasibility, identify supported use cases and standardize event exchange
DUL Technical Group:
Beverly Jamison (American Psychological Association)
Chris Shillum (Elsevier)
Christian Kohl (de Gruyter)
David Sommer (COUNTER)
Genevieve Early (Taylor and Francis)
Harald Wirsching (Springer)
John Carroll (Nature Publishing Group)
Maciej Rymar (Mendeley)
Nicko Goncharoff (Digital Science)
Oliver Pesch (EBSCO/COUNTER)
Paul Needham (Cranfield University/IRUS-UK)
Sarah Price (University of Birmingham)
Victoria Rao (Elsevier)
Wiley and MyScienceWork have joined the initiative as well.
7. Elsevier Sharing and Hosting Policies
• Elsevier supports the STM Article Sharing Principles and wants to work in partnership with organizations aggregating and making available versions of articles published by researchers with Elsevier. The hosting policy complements the sharing policy, which outlines how authors can share their research, and agreements with subscribing institutions about how licensed material can be shared.
• We believe that we all have a shared responsibility to work together to ensure researchers can share research quickly, easily, and responsibly. This requires active partnering to ensure the coherence and integrity of the scientific record, to promote responsible sharing in a way that respects the needs of all stakeholders, and to enable impact and usage measurement in a distributed environment.
• Hosting platforms should develop and share COUNTER-compliant usage statistics so that researchers and publishers have a full picture of how articles are shared and used.
https://www.elsevier.com/about/company-information/policies/sharing
https://www.elsevier.com/about/company-information/policies/hosting
8. COUNTER Codes of Practice
http://www.projectcounter.org/about.html
• COUNTER (Counting Online Usage of Networked Electronic Resources) is an international initiative serving librarians, publishers and intermediaries by setting standards that facilitate the recording and reporting of online usage statistics in a consistent, credible and compatible way.
http://www.niso.org/workrooms/sushi/
• NISO is the National Information Standards Organization of the United States. COUNTER has worked with NISO on SUSHI (Standardized Usage Statistics Harvesting Initiative) to develop a protocol to facilitate the automated harvesting and consolidation of usage statistics from different vendors. This protocol is now available and may be found on the NISO/SUSHI website above.
• Hosting platforms must collect and report COUNTER-compliant usage statistics to the appropriate publishers.
• Entitlements gap: a hosting platform may not be aware of the end-user entitlements on the publisher’s side.
9. Distributed Usage Gap – the problem
Usage on non-publisher platforms is often legitimate, i.e. it comes from researchers who have access to the content via institutional subscription agreements. However, because the usage does not occur on the publishers’ own platforms, it cannot be captured in the COUNTER-compliant usage reports sent to subscribing customers, meaning that:
• Publishers are not able to demonstrate to their customers the true value of their subscription holdings, and are not able to provide authors with a full picture of the usage of their articles.
• Institutions are not able to make a full and accurate assessment of the usage of the content they subscribe to when making purchasing decisions.
10. DUL – Capturing Distributed Usage
1. Researchers read articles on the site of their choice (institutional repository, social networking site, reading environment).
2. Sites log usage via the DUL API, including DOI, IP address and institutional ID.
3. CrossRef orchestrates usage event logging to the publisher’s usage logging API.
4. Publishers include third-party site usage in COUNTER reports sent to customers.
Prerequisites: publishers register their usage logging API URLs with CrossRef; COUNTER certifies sites and issues a logging token.
[Diagram: usage events flow from non-publisher platforms via CrossRef to Publishers A–C, and on to institutions as COUNTER reports.]
11. CrossRef DUL – who is involved
Role of COUNTER:
• Define semantics of usage logging messages
• Validate and issue credentials to participants in the scheme
• Define Code of Practice and oversee compliance auditing process
Role of CrossRef:
• Define syntax of usage logging messages
• Build and operate technical infrastructure
• Define technical API specs
• Provide training and documentation on technical integration
Role of Platform Vendors:
• Integrate with DUL API
• Leverage CrossRef framework to discover DUL API endpoints
• Send usage events via API to publishers
• Adhere to COUNTER-defined Code of Practice
Role of Publishers:
• Implement DUL API
• Register DUL API endpoint with CrossRef
• Receive usage events from hosting/sharing platforms
• Incorporate DUL into existing COUNTER-compliant usage reporting stream
12. CrossRef Distributed Usage Logging Group Aims
• Define a way for DOIs to advertise endpoints to which event data may be submitted, including a mechanism to specify the payload schemas that the endpoint accepts.
• Pilot the end-to-end transmission of COUNTER usage events from platforms providing direct access to full text to the publishers responsible for that full text, using the above mechanism.
• Work out the “rules of the game” for the COUNTER use cases, including message semantics, responsibility for anti-gaming mechanisms, etc.
13. DUL Pilot scope and use case
Use case: a single usage event message in the CrossRef DOI envelope format is submitted by a 3rd-party/social platform to the publisher’s (the DOI owner’s) distributed usage logging API (private event exchange).
Scope: one-to-one private event exchange between hosting platforms and publishers. CrossRef DET facilitates DUL API endpoint discovery given the resource DOI.
14. What happens after usage event submission?
Non-publisher platforms submit DUL usage events (DOI, format, account; user info, e.g. IP address) to the publisher via the API. The publisher then:
1. Authenticates the user
2. Logs the usage event
3. Produces the COUNTER usage report
Customer-specific usage reports are delivered to customers through the publisher’s COUNTER-compliant usage reporting stream.
15. CrossRef DUL Pilot – message format
Mandatory HTTP request header:
Content-Type: application/vnd.crossref.det-envelope+json; charset=UTF-8;v=1.0.0
POST payload:
{
  "uuid": "",
  "message-type": "",
  "source-token": "",
  "message": {
    "doi": "", "content-type": "", "user-ip": "", "event-time": "", "session-id": "", "user-agent": ""
  }
}
where:
• uuid is a message identifier
• source-token is an identifier of the platform where the usage occurred
• doi is the DOI of the article (same as the [doi] parameter on the URI)
• content-type is the article format, such as application/pdf, text/html, text/xml, etc.
• user-ip is the end-user IP address
• event-time is the usage event timestamp in ISO 8601 format
• session-id is a user session identifier (or equivalent)
• user-agent is the name of the application used to access the article
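A platform can assemble and validate this envelope before submission. The helper below is a minimal sketch, not part of the pilot specification: the function name and the validation approach are assumptions, while the field names and the "counter-download" message type come from the slides.

```python
# Sketch: build a DUL event envelope matching the pilot's message format
# and refuse to build one with required fields missing. Illustrative only.
import json
import uuid

DUL_CONTENT_TYPE = "application/vnd.crossref.det-envelope+json; charset=UTF-8;v=1.0.0"
REQUIRED = ("doi", "content-type", "user-ip", "event-time", "session-id", "user-agent")

def build_envelope(source_token, message, message_type="counter-download"):
    """Assemble a DUL envelope; raise ValueError if required fields are absent."""
    missing = [f for f in REQUIRED if f not in message]
    if missing:
        raise ValueError("missing required fields: %s" % ", ".join(missing))
    return {
        "uuid": str(uuid.uuid4()),          # fresh message identifier
        "message-type": message_type,
        "source-token": source_token,       # platform identifier
        "message": {f: message[f] for f in REQUIRED},
    }
```

The envelope serializes cleanly with json.dumps and is sent with the mandatory Content-Type header shown above.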
16. CrossRef DUL Pilot – example API request / response
Example URI:
https://api.elsevier.com/content/usage/doi/10.1016/S0014-5793(01)03313-0?apiKey=dc55dd54dd2e5b85bb32441101581fa7&httpAccept=text/xml
Mandatory HTTP request header:
Content-Type: application/vnd.crossref.det-envelope+json; charset=UTF-8;v=1.0.0
Example POST payload:
{
  "uuid": "e583eca0-fdf4-45ff-8c8e-2c3ce1196ea7",
  "message-type": "counter-download",
  "source-token": "Platform_Name",
  "message": {
    "doi": "10.1016/S0014-5793(01)03313-0",
    "content-type": "application/pdf",
    "user-ip": "127.1.1.1",
    "event-time": "20150603",
    "session-id": "1234",
    "user-agent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0"
  }
}
Expected successful HTTP response:
Status Code: 201 Created
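In code, a request like the example above can be built with Python's standard library. This sketch only constructs the request object; actually sending it (via urllib.request.urlopen) would require a valid endpoint and API key issued for the pilot, so the example endpoint below is a placeholder.

```python
# Sketch: construct a DUL submission as an HTTP POST with the mandatory
# Content-Type header. The endpoint URL passed in is whatever DUL API
# URL the publisher has registered; a 201 Created response means success.
import json
import urllib.request

DUL_CONTENT_TYPE = "application/vnd.crossref.det-envelope+json; charset=UTF-8;v=1.0.0"

def make_dul_request(endpoint, payload):
    """Build (but do not send) the POST request for one DUL envelope."""
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": DUL_CONTENT_TYPE},
        method="POST",
    )
```

A caller would then pass the request to urllib.request.urlopen and treat HTTP status 201 as a successfully logged event.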
17. CrossRef DUL Pilot – sample distributed usage report
Journal | Publisher | Platform | Article DOI | … | Format | Jun-15
Cardiovascular Pathology | Elsevier | Platform_VictoriaTest | 10.1016/j.carpath.2012.02.012 | … | PDF | 2
Cardiovascular Pathology | Elsevier | Platform_VictoriaTest | 10.1016/j.carpath.2012.05.004 | … | PDF | 2
Cardiovascular Pathology | Elsevier | Platform_VictoriaTest | 10.1016/j.carpath.2014.02.001 | … | PDF | 2
Cardiovascular Pathology | Elsevier | Platform_VictoriaTest | 10.1016/j.carpath.2014.02.002 | … | PDF | 2
Cardiovascular Pathology | Elsevier | Platform_VictoriaTest | 10.1016/j.carpath.2014.02.004 | … | PDF | 2
Cardiovascular Pathology | Elsevier | Platform_VictoriaTest | 10.1016/j.carpath.2014.03.008 | … | PDF | 2
18. COUNTER Proposal: Distributed usage events
Activities in an (individual user) private library (similar to a private storage library):
• Add to library (e.g. from hard drive or directly from a publisher website)
• Open/read the article
• Annotate the article
Activities of a user in a closed (by invitation only) group:
• Add to library (‘consume’ for later use from another user in the group)
• Open/read the article (‘consume’ for direct use from another user in the group)
• Annotate the article
• Share the article with other users in the group (‘upload’ the article)
Activities of a user on a public website (e.g. in a public group, or a public profile on a publicly indexable website):
• Download the article from a public website
• Open/read the article on a public website
• Annotate the article
• Upload an article onto a public website
COUNTER proposal v2, prepared by Sonja Lendi
19. COUNTER proposal: Example Usage Reports
Method 1.1 (1 line per sharing platform):
Journal | Publisher | Platform | Journal DOI | Proprietary Identifier | Print ISSN | Online ISSN | Reporting Period Total | Reporting Period HTML | Reporting Period PDF | Jan-2015 | Feb-2015
Total | | | | | | | 254,465 | 117,137 | 137,277 | 122,130 | 132,335
Academic Pediatrics | Elsevier | ScienceDirect | | ACAP | 1876-2859 | 1876-2867 | 121 | 100 | 21 | 83 | 38
Academic Radiology | Elsevier | ScienceDirect | | XACRA | 1076-6332 | | 140 | 68 | 72 | 89 | 51
Academic Radiology | Elsevier | Mendeley | | XACRA | 1076-6332 | | 12 | 0 | 12 | 5 | 7
Academic Radiology | Elsevier | Readcube | | XACRA | 1076-6332 | | 9 | 0 | 9 | 6 | 3
Academic Radiology | Elsevier | Institutional Repository A | | XACRA | 1076-6332 | | 3 | 0 | 3 | 1 | 2
Accident Analysis & Prevention | Elsevier | ScienceDirect | | AAP | 0001-4575 | | 106 | 68 | 38 | 38 | 68
Accident and Emergency Nursing | Elsevier | ScienceDirect | | 0 | 0965-2302 | | 19 | 14 | 5 | 11 | 8
Accounting Forum | Elsevier | ScienceDirect | | ACCFOR | 0155-9982 | | 64 | 37 | 27 | 8 | 56
Accounting Forum | Elsevier | Mendeley | | ACCFOR | 0155-9982 | | 89 | 0 | 89 | 45 | 44
Accounting, Organizations and Society | Elsevier | ScienceDirect | | AOS | 0361-3682 | | 108 | 19 | 89 | 14 | 94
Accounting, Organizations and Society | Elsevier | Mendeley | | AOS | 0361-3682 | | 130 | 0 | 130 | 11 | 119
Accounting, Organizations and Society | Elsevier | Readcube | | AOS | 0361-3682 | | 112 | 0 | 112 | 50 | 62
Accounting, Organizations and Society | Elsevier | Institutional Repository A | | AOS | 0361-3682 | | 5 | 0 | 5 | 4 | 1
Accounting, Organizations and Society | Elsevier | Institutional Repository B | | AOS | 0361-3682 | | 8 | 0 | 8 | 4 | 4
Method 1.2 (1 line for all sharing platforms):
Journal | Publisher | Platform | Journal DOI | Proprietary Identifier | Print ISSN | Online ISSN | Reporting Period Total | Reporting Period HTML | Reporting Period PDF | Jan-2015 | Feb-2015
Total | | | | | | | 254,465 | 117,137 | 137,277 | 122,130 | 132,335
Academic Pediatrics | Elsevier | ScienceDirect | | ACAP | 1876-2859 | 1876-2867 | 121 | 100 | 21 | 83 | 38
Academic Radiology | Elsevier | ScienceDirect | | XACRA | 1076-6332 | | 140 | 68 | 72 | 89 | 51
Academic Radiology | Elsevier | Non-publisher platforms | | XACRA | 1076-6332 | | 12 | 0 | 12 | 5 | 7
Accident Analysis & Prevention | Elsevier | ScienceDirect | | AAP | 0001-4575 | | 106 | 68 | 38 | 38 | 68
Accident and Emergency Nursing | Elsevier | ScienceDirect | | 0 | 0965-2302 | | 19 | 14 | 5 | 11 | 8
Accounting Forum | Elsevier | ScienceDirect | | ACCFOR | 0155-9982 | | 64 | 37 | 27 | 8 | 56
Accounting Forum | Elsevier | Non-publisher platforms | | ACCFOR | 0155-9982 | | 89 | 0 | 89 | 45 | 44
Accounting, Organizations and Society | Elsevier | ScienceDirect | | AOS | 0361-3682 | | 108 | 19 | 89 | 14 | 94
Accounting, Organizations and Society | Elsevier | Non-publisher platforms | | AOS | 0361-3682 | | 130 | 0 | 130 | 11 | 119
COUNTER proposal v2, prepared by Sonja Lendi
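The difference between the two methods is purely one of aggregation: Method 1.1 keeps one line per platform, Method 1.2 folds every non-publisher platform into a single combined line. A minimal sketch of that fold, with illustrative data and names (the set of "publisher platforms" is an assumption):

```python
# Sketch: collapse per-platform report lines (Method 1.1 shape) into
# combined "Non-publisher platforms" lines (Method 1.2 shape).
from collections import defaultdict

# Assumption for illustration: which platforms count as the publisher's own
PUBLISHER_PLATFORMS = {"ScienceDirect"}

def method_1_2(rows):
    """rows: (journal, platform, count) triples, one per Method 1.1 line.
    Returns a dict keyed by (journal, platform-or-combined-label)."""
    combined = defaultdict(int)
    for journal, platform, count in rows:
        label = (platform if platform in PUBLISHER_PLATFORMS
                 else "Non-publisher platforms")
        combined[(journal, label)] += count
    return dict(combined)
```

The publisher-platform lines pass through unchanged, while Mendeley, ReadCube and repository counts for the same journal sum into one line.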
20. DUL Technical Group – progress so far
• December 12, 2014 – Distributed Usage Logging (DUL) technical group is formed.
• January 15, 2015 – first meeting of the DUL technical group sets aims and forms two subgroups with a focus on technical feasibility and policy aspects.
• February 27, 2015 – DUL technical group meeting; the two subgroups concluded their activities, resulting in a proposal of the DOI event envelope with the DUL message specification and DUL use cases. The DOI envelope specification and use cases are added to the CrossRef DET white paper.
• March 14, 2015 – pilot implementation of the Elsevier DUL API is available, along with a technical documentation guide.
• April 30, 2015 – Mendeley implemented a proof of concept using the Elsevier DUL API and provided feedback.
• May 15, 2015 – DUL technical group meeting; the group proposed refinement of the required elements in the DUL message to ensure COUNTER compliance when processing usage events.
• June 5, 2015 – next pilot iteration of the Elsevier DUL API is available, incorporating feedback from the May 15 meeting to include additional parameters in the DUL message.
• July 2, 2015 – DUL technical group meeting; introduction of two new members joining the DUL initiative, MyScienceWork and Wiley. A sample DUL usage report via the Elsevier DUL API is presented and discussed. Further refinements are proposed in an attempt to standardize the DUL message format and usage reporting.
21. Next steps
• Identify and document new use cases as more publishers and 3rd-party platforms join the initiative.
• Define usage event types (e.g. “raw-download”).
• Discuss usage reporting needs and corresponding formats, taking into account user privacy considerations and COUNTER compliance.
• Collaborate with COUNTER on use cases and report formats.
• Propose and pilot usage event message authentication and anti-gaming mechanisms.
• Pilot the CrossRef DOI pingback/linkback mechanism for DUL endpoint discovery (when supported by the CrossRef DOI infrastructure) and demonstrate end-to-end functionality.
Citations are no longer the only measure of research impact, although they remain the most prominent one.
Publisher platforms are no longer the only place where research works get disseminated, read, used and cited.
Scientific social and sharing platforms, as well as mainstream and social media, are increasingly used to disseminate/share, discuss, comment, mention, like, link, tweet, etc. – “events”.
Authors would like to know the collective impact of their research works, factoring in distributed “events” often scattered all over the internet.
Enable COUNTER-compliant usage reporting by the publisher, while providing information about the user (e.g. IP address).
The publisher will be able to attribute usage to a specific account and incorporate off-platform usage as part of COUNTER-compliant usage reporting.
Distributed event sources; data collection at DET/Lagotto; data propagation and distribution to event subscribers.
Do not re-invent COUNTER-compliant usage reporting streams.
Build on the CrossRef infrastructure to create a framework which allows usage information to flow from the point of usage (the alternative platforms) to the publishers, from where the data can be aggregated and incorporated into existing COUNTER usage reporting streams.
Authentication allows usage to be attributed to a specific customer as resolved by the publisher’s A&E system.