Delivered at the Society for Scholarly Publishing 2018 meeting as a pre-meeting seminar. What scholarly publishers need to know when changing platform vendors and migrating journal and book content. Presented by Allison Belan (Duke University Press) and Karie Kilpatrick (American Physiological Society)
Introduction to Crossref - Crossref LIVE South Africa (Crossref)
Vanessa Fairhurst gives an introduction and overview of Crossref at the Crossref LIVE local events in Pretoria and Cape Town, South Africa. 17th and 19th April 2018.
Research Software Engineering Inside and Outside the Library (Patrick McCann)
The importance of software to research is growing, which is reflected in the emergence of the Research Software Engineer (RSE) role and moves to recognise software as a research output. The Research Computing team at the University of St Andrews sits within the Digital Research division of the Library and seeks to support research in two principal ways. Firstly, the team are available as a development resource to researchers across the University; secondly, they are leading initiatives to understand and support better the breadth and depth of research software engineering activities across the University.
Crossref is a not-for-profit organization with over 10,000 member organizations that registers, links, and distributes metadata for scholarly research outputs. It maintains a database of over 99 million content items, including journals, books, conference proceedings, and other materials. Crossref provides services to help publishers, reference managers, and other groups discover, cite, and assess scholarly research. Members join Crossref to help discover and track where their content is located online and to participate in collaborative initiatives around metadata and technology.
The Content Platform Migrations Working Group was formed to address the increasing number of platform migrations in the publishing and library communities. With publishers migrating every 5-10 years, content platform vendors migrating 5-10 times per year, and librarians experiencing over 10 migrations per year, there is a lack of coordination and communication between stakeholders. The working group aims to develop recommended practices to standardize migration processes and improve communications before, during, and after migrations through stakeholder interviews, guidelines, checklists, and a communications plan to minimize disruptions to users.
Andrew Simpson - Making sense for researchers: finding a practical approach a... (sherif user group)
The University of Portsmouth took a practical approach to implementing open access policies. They established a research outputs manager position split between the library and research office. Interviews with researchers found common misunderstandings about open access that informed training sessions. A research portal was created for submissions. Expanding support staff and developing reporting tools helped manage the growing open access requirements.
NISO (a non-profit standards organization) is working on several projects related to scholarly information including recommended practices around access and license indicators, open discovery initiatives, journal transfers between publishers, and altmetrics standards. The presentation provides an overview of NISO's mission and processes for developing standards as well as details on the specific projects. Membership in working groups for each project involves representatives from libraries, publishers, and other organizations.
Data-Informed Decision Making for Digital Resources (Christine Madsen)
This session will provide three case studies of assessment and evaluation programs in libraries--one past, one current, and one future. The cases use three different modes of data gathering and analysis and show the power of understanding user needs and how well your organization is meeting them.
Data-Informed Decision Making for Libraries - Athenaeum21 (Megan Hurst)
Athenaeum21 presents three case studies of assessment and evaluation programs in libraries--one past, one current, and one future. The cases use three different modes of data gathering and analysis to show the power of understanding user needs and how well your organization is meeting them.
Crossref's newest member of the content family is preprints. At Crossref, preprints have custom support to make sure that links to these publications persist over time, that they are connected to the full history of the shared research results, and that the citation record is clear and up to date. But that's not the whole story, and we have three guest speakers lined up to share their thoughts and expertise on the role of preprints in research: Martyn Rittman from Preprints, operated by MDPI; Richard Sever from bioRxiv; and Jessica Polka from ASAPbio.
The benefits and challenges of open access: lessons from practice - Helen Bla... (Jisc)
Led by Helen Blanchett, subject specialist, scholarly communications, Jisc.
With contribution from Andrew Simpson, associate university librarian (procurement and metadata and systems), Portsmouth University.
In this session you’ll hear about the benefits and challenges of open access.
Connect more in London, 28 June 2016
Trend Spotting Workshop. A practical guide to making sense of large information sources. Workshop run with Gemma Long (QAA) at etc.venues Maple House, Birmingham, 23rd February 2017.
Publishing Partnerships: Why, When, and How Collaboration Sometimes Trumps Co... (cuyeki)
This document discusses publishing partnerships between Choice and Bowker regarding the Resources for College Libraries (RCL) service. It provides background on the RCL genealogy and need for a new version. Choice and Bowker formed a partnership with Bowker providing technological resources and marketing capabilities, while Choice maintained editorial independence. The results were a shorter development time, greater resources, and sustainable ongoing development of RCL to meet the changing needs of academic libraries.
February 18, 2015 NISO Virtual Conference: Scientific Data Management: Caring for Your Institution and its Intellectual Wealth
Learning to Curate Research Data
Jennifer Doty, Research Data Librarian, Emory Center for Digital Scholarship, Emory University, Robert W. Woodruff Library
Taming print journal collections...to boldly weed where no one has weeded before (NASIG)
Over the past few years, Bucknell University’s Bertrand Library has made many changes to evolve our services, physical library space, and collections in response to changing expectations and needs of our researchers. Our Collection Development team was charged with a task to develop a plan that would holistically examine our print journal collection and forecast what would be required for a single-effort de-accessioning project, aiming to weed our print journal collection by 50% or more. I will present our planning process, criteria, and grand reconceptualization for the space.
Accompanying handout: http://www.slideshare.net/NASIG/taming-print-journal-collections-handout
Presenter:
Kathryn Dalius
Serials Specialist, Bucknell University
This document discusses the Public Knowledge Project's (PKP) transition from open source software to community-sourced software. It outlines PKP's governance structure, financial support, development partners and recent software releases. Specifically, it describes the new Open Monograph Press software and plans for usability assessments of the Open Journal Systems and Open Monograph Press software conducted by the California Digital Library and University of British Columbia.
Presented at the OCLC Research Library Partnership meeting by Senior Program Officer, Karen Smith-Yoshimura and hosted by the University of Sydney in Sydney, NSW Australia, 17 February 2017. This meeting provided an opportunity for Research Library Partners to touch base with each other on issues of common concern and explore possible areas of future engagement with the OCLC Research Library Partnership and OCLC Research.
Charleston 2021 - Hit the ground running - Best practices for navigating cont... (Matthew Ragucci)
The document summarizes a presentation on navigating content platform migrations. It includes perspectives from a publisher (Wiley), librarian (North Carolina State University), platform provider (Silverchair Information Systems), and an overview of the NISO Content Platform Migration Working Group. The publisher discusses lessons learned from migrations, including the importance of communication plans and URL redirects. The librarian emphasizes the need for timely updates and checklists. The platform provider notes most migrations take 6-12 months and there are always unknowns. The NISO group aims to standardize migration processes and improve communications through recommended best practices and checklists.
Despite extensive preparation by publishers, vendors, and librarians, content platform migrations are rarely seamless. Given the complexities involved, a problem-free migration is the exception rather than the norm. The NISO Content Platform Migration Working Group was formed to address these challenges and aims to establish recommended practices and checklists to standardize and improve platform migration processes for all stakeholders involved with online content platforms.
In this session, a librarian and a publisher will share their perspectives on content platform migrations, and the Working Group Co-chairs will describe the group’s efforts to-date and expected outcomes. Our publisher-side speaker will describe issues they must consider when their content migrates, such as providing continuous access, persistent linking, communicating with stakeholders, and working with vendors. Our librarian speaker will describe their experience and steps they take during migrations, such as receiving notifications about migrations, identifying affected e-resources, updating local systems to ensure continuous access, and communicating with their front-line staff and patrons.
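Several of these abstracts highlight persistent linking and URL redirects as core migration concerns. A minimal sketch of the underlying idea, mapping legacy article URLs to their new locations so old bookmarks and DOI resolutions keep working (the paths and function names below are illustrative, not from any specific platform):

```python
# Hypothetical legacy-to-new URL map built during a platform migration.
LEGACY_TO_NEW = {
    "/journals/abc/article/123": "/doi/10.0000/example.123",
    "/journals/abc/article/124": "/doi/10.0000/example.124",
}

def redirect_for(legacy_path: str):
    """Return a (status, location) pair for an incoming request:
    a 301 permanent redirect to the new URL when the legacy path is
    known, or a 404 with no location when it is not."""
    new_path = LEGACY_TO_NEW.get(legacy_path)
    if new_path is not None:
        return (301, new_path)
    return (404, None)
```

In practice this table is generated from the old and new platforms' content inventories and served by the web tier (e.g. as permanent redirects), so that links registered with Crossref continue to resolve after the move.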
Walk this way: Online content platform migration experiences and collaboration (NASIG)
In this session, a librarian and a publisher share their perspectives on content platform migrations, and the Working Group Co-chairs will describe the group’s efforts to-date and expected outcomes. Our publisher-side speaker will describe issues they must consider when their content migrates, such as providing continuous access, persistent linking, communicating with stakeholders, and working with vendors. Our librarian speaker will describe their experience and steps they take during migrations, such as receiving notifications about migrations, identifying affected e-resources, updating local systems to ensure continuous access, and communicating with their front-line staff and patrons.
About the Webinar
The "single search box" approach of web search engines like Google and Bing have forced libraries and system developers to rethink their whole approach to end-user searching for library and publisher resources and electronic content. Discovery systems are continuing to evolve from simple keyword search systems, to more elaborate indexed discovery, to new forms of usage-based discovery and beyond. Because discovery of content is such a critical component of library services, understanding in what potential ways these systems will develop is critical for library staff, either when selecting a system, or seeking ways to improve its service. NISO launched a research study in early 2014 on the status of discovery systems, their potential future development directions, and the systems interoperability needs of these services.
This webinar will cover some of the latest developments of library discovery systems as well as discuss the findings of the NISO research study, and the implications of those results.
Agenda
Introduction
Todd Carpenter, Executive Director, NISO
Differential Discovery: Effect of Discovery on Online Journal Usage
John McDonald, Associate Dean, Collections, University of Southern California Libraries
Jason Price, Program Manager, Statewide California Electronic Library Consortium (SCELC)
A Single Search Box is Definitely Not Enough
Steve Guttman, Senior Director of Product Management, ProQuest
Library Resource Discovery: Next Steps
Marshall Breeding, Library Consultant, librarytechnology.org
This document summarizes an information session about City Research Online (CRO), the institutional repository at City University London. CRO uses Symplectic Elements for research information management and Eprints for an open access repository, and provides services like archiving theses and working papers. The session discussed open access policies and infrastructure, lessons learned like automating metadata and differentiating systems, and future plans like research data management and author profiling services. Attendees were encouraged to ask questions about CRO's role in advocating for open access at City University London.
This document summarizes a workshop on open science and open data for librarians. The workshop covered introducing open science and open data, how data can inform the library profession and support research, tools and applications for working with data, and developing a data strategy for libraries. It discussed stakeholders in research data, why librarians are important data partners, the role of librarians in advocating for open data and managing repositories. The workshop also covered data skills needed by librarians and introducing trusted data repositories.
The Canadian Linked Data Initiative: Charting a Path to a Linked Data Future (NASIG)
As libraries prepare to shift away from MARC to a linked data framework, new convergences in the metadata production activities of our libraries' technical services units, special collections, and digital libraries are becoming possible. In September 2015, the Canadian Linked Data Initiative (CLDI) was formed to leverage the existing collaboration between the Technical Services departments of Canada’s top 5 research libraries and the Library and Archives of Canada. Working cooperatively, our objective is to provide a path to linked data readiness for our institutions and leadership for the adoption of linked data by libraries across Canada. To achieve this goal, partner libraries are working across departments and institutions to create new workflows and tools and adapt to a new conceptual understanding of descriptive metadata. This presentation is a preliminary report on the progress made in five key areas of interest: digital collections, education and training, MARC record enhancement, evaluation of linked data tools and vendor supplied metadata. Building on existing initiatives, the CLDI is investigating the potential of integrating linked data elements into digitized collections, as well as MARC-based bibliographic and authority records, with the aim of fostering new and interesting pathways for resource discovery. To strengthen and expand the professional knowledge of staff, partner institutions are collaborating in the production of educational and training materials related to linked data principles and practices. The evaluation and potential development of linked data tools is another area of concentration. Finally, with the goal of changing workflows upstream, the CLDI is working to engage publishers and vendors in the linked data conversation. 
In addition to reporting on the work undertaken in the first year of the project, this presentation will also cover lessons learned and outline some of the new opportunities gained from working on a collaborative project that spans across multiple boundaries.
Marlene van Ballegooie, Metadata Librarian,
University of Toronto
Juliya Borie, University of Toronto Libraries
Andrew Senior, Coordinator,
E-Resources and Serials, McGill University
Librarians and faculty members now have the opportunity, through open access publishing, to work together to make faculty-produced scholarly content available to the entire academic community, not just to those scholars or institutions privileged enough to afford it. The University of South Florida Libraries have been working with bepress’ Digital Commons platform to create a substantial institutional repository that includes open access journals, conference proceedings, and data sets, among other materials. Publication of open access journals at USF officially began in 2008 with the launch of Numeracy from the National Numeracy Network. Library staff members are currently involved in a variety of activities, including negotiating memoranda of understanding, loading backfiles, registering DOIs with CrossRef, designing layout, doing final publication steps, and assisting with technical issues. In 2011, our institutional repository, Scholar Commons @ USF, went live, allowing the library to pull fragmented collections previously hosted on other platforms into a single system with improved discoverability. This session will discuss some of these efforts, what is involved, how we have retrained existing and new staff, and plans for future directions.
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
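The abstract above lists "What is Vector Search?" as its first topic. Independent of MongoDB Atlas specifics, the core idea can be sketched as ranking items by the similarity of their embedding vectors; a minimal illustration using cosine similarity (the data and function names here are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors:
    1.0 means identical direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, corpus):
    """Return the key of the corpus vector most similar to the query."""
    return max(corpus, key=lambda k: cosine_similarity(query, corpus[k]))
```

Production systems like Atlas Vector Search add approximate nearest-neighbor indexes so this comparison scales to millions of vectors, but the ranking principle is the same.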
In addition to reporting on the work undertaken in the first year of the project, this presentation will also cover lessons learned and outline some of the new opportunities gained from working on a collaborative project that spans across multiple boundaries.
Marlene van Ballegooie, Metadata Librarian,
University of Toronto
Juliya Borie, University of Toronto Libraries
Andrew Senior, Coordinator,
E-Resources and Serials, McGill University
Librarians and faculty members now have the opportunity, through open access publishing, to work together to make faculty-produced scholarly content available to the entire academic community, not just to those scholars or institutions privileged enough to afford it. The University of South Florida Libraries have been working with bepress’ Digital Commons platform to create a substantial institutional repository that includes open access journals, conference proceedings, and data sets, among other materials. Publication of open access journals at USF officially began in 2008 with the launch of Numeracy from the National Numeracy Network. Library staff members are currently involved in a variety of activities, including negotiating memorandum of understandings, loading backfiles, registering DOIs with CrossRef, designing layout, doing final publication steps, and assisting with technical issues. In 2011, our institutional repository, Scholar Commons @ USF, went live, allowing the library to pull fragmented collections previously hosted on other platforms into a single system with improved discoverability. This session will discuss some of these efforts, what is involved, how we have retrained existing and new staff, and plans for future directions.
Similar to Thinking the Unthinkable; or, How to Prepare for a Platform Migration (20)
6. Introductions
Karie Kirkpatrick
• American Physiological Society (APS)
• Digital Publications Manager
• Previously at MIT Press
Allison Belan
• Duke University Press (DUP)
• Associate Director, Digital Strategy and Systems
Both coordinated their organization's RFP and selection process and served as project leads for the migration.
7. Where We’re Coming From
Duke University Press
• Publisher
• Humanities & Social Science
• Multiple publication formats
• Books (2500)
• Journals (52)
American Physiological Society
• Society
• Biological Sciences
• Single publication format
• Journals (14)
10. Plan as Far Ahead as Possible
• Procuring resources (budgeting, hiring staff,
consulting services)
• Obtaining back content and organizing archive
• Conducting website survey
• Compiling list of requirements and “wishlist” items
• Completing RFP – allow for at least 9 months
• Obtaining master schedule
• Organizing internal staff teams
11. APS RFP Timeline, 2016
JAN–JULY
• Draft RFP
• Select Vendor Pool
JUNE–JULY
• Hire Consultant
• Complete Final Version
JULY
• Distribute RFP
JULY–SEPT
• Answer Questions
• Receive Completed Proposals
SEPTEMBER
• Evaluation
• Selection of Shortlist
OCTOBER
• Vendor Presentations
OCT–NOV
• Vendor Scoring
• Follow-Up Questions
• Selection
NOV–DEC
• Contract Negotiations
12. DUP's No RFP Route
• Go under NDA with each potential respondent
• Make all key business strategy and planning documentation available
• On-site presentations of platform's capabilities in light of material
June
• Build document set
July
• Contact vendors
• Execute NDAs
Aug–Sept
• On-site presentations
October
• Finalist list
• Follow-up
Nov
• Select preferred vendor
• Validate requirements
Dec
• Contract negotiations
13. Wait!
Did you ask your customers?
• Librarians
• Members
Some concerns:
• Accessibility: WCAG 2.0 AA, VPATs
• Trusted access routes (members, alums)
• Text and data mining
• Institutional branding
• Integrations with preservation, discovery services
• GDPR (!?!)
Know these before you send RFP
Consider setting up advisory boards
15. Breaking It Down
1. Site Design & Build
• Global
• Journal
• Book
• Society
• Products
2. Content Migration
• Content analysis, design
• Archive delivery
• Conversion
• Load
• QA
• “Gap” content
3. Data Migration
• Products
• Customers: institutional, individual
• Usage: publisher, COUNTER
• Legacy site: alerts, accounts, purchases
4. Partner Communications
• Library
• Editor
• Member
• Individual
• Alert subscribers
• Compositors, data vendors
Each subproject has its own Team Lead; all four Team Leads report to the Project Lead.
16. Vendor Team
Conversion Vendor
• Platform vendor? Conversion house?
• Project management?
Platform Vendor
• Roles
• Project methodology
• Governance
Retiring Platform Vendor
• Timelines
• Data, content transfer
• Obligations (both ways)
Composition & Data Vendors
• Pre-launch BAU
• New specifications
• Post-launch transition
18. Scheduling Method
• Set your launch date
• Identify the least flexible process
• Set the schedule based on that
• All other processes align to that schedule
• Accept that effort must fit within a time box
• Accept risk to accuracy, feature-set
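The backward-scheduling method above can be sketched programmatically: anchor everything to the launch date, then subtract lead times for the least flexible process so that every other workstream can align to the resulting dates. A minimal sketch in Python; the milestone names echo the DUP content track, but the durations are illustrative assumptions, not actual figures from either migration:

```python
from datetime import date, timedelta

# Anchor everything to the launch date and work backward through the
# least flexible process; all other workstreams align to these dates.
launch = date(2017, 11, 13)

# (milestone, days of lead time required before the next milestone).
# Durations here are illustrative assumptions only.
milestones = [
    ("Reload corrected content", 7),
    ("Resupply corrections", 7),
    ("Review corrections", 14),
    ("QA of conversion", 30),
    ("Backfile loading", 30),
    ("Validation and programming", 45),
    ("Key conversion delivery", 30),
]

deadline = launch
schedule = []
for name, lead_days in milestones:
    deadline = deadline - timedelta(days=lead_days)
    schedule.append((deadline, name))

# Print in forward (chronological) order.
for when, name in reversed(schedule):
    print(f"{when.isoformat()}  {name}")
```

If any computed date lands in the past, the time box is already blown and either scope or lead times must shrink, which is exactly the accept-the-risk trade-off the slide describes.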
20. DUP Schedule
[Timeline chart: content and site tracks running in parallel, converging on the Nov 13 launch.]
CONTENT track (conversion vendor → platform vendor), July 17 – Nov 17:
• June 26: key conversion
• Aug 8: validation, programming
• Sept 9: backfile loading, QA of conversion
• Oct 27: corrections, programming
• Nov 3: review corrections
• Nov 10: resupply
• Nov 17: reload
SITE track (platform vendor sprints 143–148):
• Sprint reviews: 6/28–7/7, 7/19–7/24, 8/9–8/14, 8/30–9/5, 9/20–9/25 (beta)
• Milestones: ready for content load, ready for data load
• Nov 13: launch
21. Hidden Shoals
Things your old platform “just handled” (black boxes):
• Shibboleth
• Abuse monitoring configuration
• PubMed Central and other deposits
• DOI deposits
• Business rules built into platform-side processing
• URL construction and patterns
• Redirects
Things you know something about, but not enough:
• Identity and access integration approaches, pros/cons
• Your data hygiene (or lack thereof)
• Deep business rules: known but hard to document
• Lost institutional knowledge
• Metadata surprises
• Library discovery ecosystem
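Redirects and URL patterns are one black box you can probe empirically before and after cutover. Below is a minimal spot-check sketch using only Python's standard library; the URLs are placeholders, and a real check would run against a sampled list of your own legacy URLs:

```python
from urllib.request import urlopen, Request
from urllib.error import URLError, HTTPError

def landed_on_new_site(final_url, expected_prefix):
    """True if the URL we ended up on lives on the new platform."""
    return bool(final_url) and final_url.startswith(expected_prefix)

def check_redirect(old_url, expected_prefix, timeout=10):
    """Follow redirects from old_url and report where we actually landed.

    urlopen follows 3xx responses automatically, so resp.geturl() is the
    post-redirect destination. Returns (ok, final_url_or_error_text).
    """
    try:
        with urlopen(Request(old_url, method="HEAD"), timeout=timeout) as resp:
            final = resp.geturl()
            return landed_on_new_site(final, expected_prefix), final
    except (HTTPError, URLError) as exc:
        return False, str(exc)

# Usage (placeholder URLs):
#   ok, landed = check_redirect(
#       "https://old.example.org/doi/10.0000/fake.123",
#       "https://new.example.org/")
#   print("OK" if ok else "FAIL", landed)
```

Running a script like this on a sample of old article, issue, and journal-home URLs surfaces broken redirect patterns before librarians and readers find them.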
23. Your Work Is Far from Over
Further testing and more testing
(Re)Training and cross-training
Verifying integrations and deposits are working properly
Monitoring new content submissions
Fix catch-up
Driving traffic and analyzing usage
Re-engaging users
Planning site improvements
Internal “post-mortem”
Supporting, communicating with customers
24. Lessons Learned
• There is no such thing as starting too soon
• Know your blind spots
• Stick to deadlines at every stage, or testing will suffer
• Start early on custom integrations
• Deep dive on customer, product, access data & interfaces
• Expect your back content to be a mess
• Be ready for content fixes and consequences
• Involve librarians in the process
• Google makes the rules – listen to them!
• Be prepared for traffic changes
• Provide strong leadership
5 MIN (end 8:45) ALLISON
I’d like to set some basic definitions for terms you’ll hear today.
First, though, I’d like to know more about who is in the room
By show of hands:
Who works for a publisher or society?
Keep your hands up if your org has done a platform migration in the past 5 years
Who is in the vendor or services sector?
Who is a librarian, or works with library services?
DEFINITIONS . . .
Finally: During this seminar, we’ll be discussing our experiences. We won’t be assessing the relative merits or demerits of any given platform or publisher. Our aim is to give you actionable insights into managing your own migration, either real or hypothetical.
END BY 8:45
10 MINS (8:45 – 8:55) ALLISON
Let’s dive in
3 MINS ALLISON
Karie:
Digital Publications Manager at American Physiological Society
4 years at APS
Prior to that, 10 years in production and technology at MIT Press
Allison:
Associate Director for Digital Strategy and Systems
Been with DUP for 14 years – 8 as journal production manager, 6 in this role
***ANIMATION***
We each served as the leader of our organization’s migration effort.
KEY PIECE OF ADVICE:
Make 1 person the clear leader of the project
They must be
Trusted
Empowered
Given the resources they need
If possible, dedicated (ACB ~80%)
3 MINS ALLISON
So you know where we are coming from, here are some key characteristics of each organization that shaped our migrations.
Publisher vs Society:
different constituencies
For APS, there are members and journal editors who need to be served and supported.
DUP: our greater focus is on the institutional customer. Our relationships with our journals' editors are a bit more indirect.
STM vs H&SS:
Journal platforms were built originally to support STM content characteristics.
For H&SS, there are places where the platform's existing content model doesn't support some common H&SS content attributes, or where the UX doesn't fully support the way H&SS scholars publish or read scholarship.
Multi-format vs Single Format
1 MIN ALLISON
END 8:55
10 MINS (8:55 – 9:05) KARIE
3 MIN KARIE
Staff: APS beefed up their Digital Publications Department (me, DP Coordinator, Web Specialist)
Back content: APS did not have an internal archive; obtained XML, PDFs, images along with supplemental files (chose to include ahead-of-print AND final content)
Web survey: sent out to public via social media, to APS members, and editorial board members
Requirements and wishlist items: helped to drive the entire process (RFP, contract negotiations, migration)
RFP: APS had distributed one about 5 years prior; distributed different RFPs to prospective vendors and current vendor; hired consultant to help project manage and negotiate
Contract signing: Appendix included requirements and wishlist items (denoted which were required for launch and which could be post-launch activities)
Internal organization: use the master schedule to determine who should be included in the migration at various stages
2 MIN KARIE
2 MIN ALLISON
For a couple of years prior to platform search, DUP had been engaged in researching and building an integrated digital strategy to support the Press’s strategic goals.
Had a lot of critical documents on hand describing this
[Explain slide]
***ANIMATION***
From engagement with potential vendors to contract was 6 months – but work began long before that.
3 MIN KARIE
If there are big things like this that have to be cooked into the platform to work, you have to get the commitments at the RFP stage.
Very tough to make them happen later as single-customer-driven development or customization.
END 9:05
20 MINUTES (9:05 – 9:25) ALLISON
4 MIN ALLISON
Big project—break it down into smaller projects
We divided into 4 (I wish we’d had a 5th – Content Workflows)
Project Lead
Team leads on each subproject, with people on those teams.
4 MIN ALLISON
You will need to coordinate multiple vendors
Content conversion: know whether you are responsible for delivering to the platform's content spec, or whether the platform will manage the conversion or transformation itself
Don’t forget the retiring platform vendor: good planning and clear expectations will make this work. (Tip: figure out how to support your own asset and data export needs as much as possible)
3 MIN ALLISON
Here is a basic method for scheduling something like this.
2 MIN ALLISON
Point here: schedules get complicated FAST
Must have a good way of showing it, sharing it, managing it.
3 MIN ALLISON
For us, the site build, tied to the vendor’s sprint schedule was the least flexible.
So, I got that laid out on a timeline that ends at LAUNCH
Then, added in the key milestones for the other major subprojects: content migration and data migration
***ANIMATION***
Then I developed the next most complex timeline to hit those key intersection milestones
Etc
4 MIN ALLISON
No project plan survives contact with the enemy – the unknowns that become known
END 9:25
10 MIN (9:25 – 9:35)
5 MIN KARIE
Testing: you can never do enough
Training: time to fill in the gaps (pre-launch training can be a bit too basic) – and change is hard (esp if you’ve been working with the same system for over 20 years)
Integrations and deposits: even if your vendor tells you they’ve completed it and it’s working, it’s best to find out for yourself
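One way to "find out for yourself" on the DOI side is to query the Crossref REST API directly and confirm that the resolution URL registered for each of your DOIs points at the new platform. A minimal sketch; the `resource.primary.URL` field, the mailto address, and the example prefix are assumptions to adapt to your own records:

```python
import json
from urllib.request import urlopen, Request

CROSSREF_WORKS = "https://api.crossref.org/works/"

def registered_url(doi, mailto="ops@example.org"):
    """Fetch the resolution URL Crossref has on file for a DOI.

    Uses the public Crossref REST API; identifying yourself via a
    mailto in the User-Agent is the 'polite pool' convention.
    Returns None if the field is absent from the response.
    """
    req = Request(CROSSREF_WORKS + doi,
                  headers={"User-Agent": f"migration-check (mailto:{mailto})"})
    with urlopen(req, timeout=30) as resp:
        msg = json.load(resp)["message"]
    return msg.get("resource", {}).get("primary", {}).get("URL")

def points_at_new_platform(url, new_prefix):
    """True if the registered URL already targets the new site."""
    return bool(url) and url.startswith(new_prefix)

# Usage (placeholder values):
#   url = registered_url("10.0000/fake.123")
#   print("OK" if points_at_new_platform(url, "https://new.example.org/") else "FAIL", url)
```

Spot-checking a sample this way catches both stale resolution URLs (deposits the vendor never updated) and DOIs that were missed entirely.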
Content submissions: it is likely that your compositor and other third-party vendors (e.g., manuscript submission system) had to adjust their technology and procedures to conform to the new platform; make sure these processes are running smoothly; extra content QA
Fix catchup: you will be faced with lists of fixes to apply to your content (or other areas) that are minor enough, and time-consuming enough, that they can wait
Driving traffic: have a campaign in place for promoting your site (signup for content alerts, incentives, teasers)
Re-engaging users: You may have lost all of your alert subscribers
Analyzing usage: like testing, you cannot do too much analyzing; this will help you find potential breaks or missing links
Site improvements: you will quickly discover what isn’t working the way you thought it would; stakeholders/users will recommend better functionality; generating roadmap to help drive future innovation