Presentation for the Modes Users Association Annual Meeting, 25 September 2013: an overview of recording Revisiting Museum Collections units of information within the Modes system.
I conducted a survey in 2013 to determine whether libraries were tracking their perpetual access entitlements, but it did not thoroughly explore how librarians were managing this process. This program will build on that survey and explore specific techniques for tracking perpetual access. The session will focus on commonly used systems as identified in my survey, including integrated library systems, electronic resource management systems, OpenURL knowledgebases, and spreadsheets. The program will discuss what information should be tracked, how best to leverage different sorts of systems, and how to address challenges identified in previous research. These recommendations will be developed through correspondence and interviews with other professionals, as well as the existing literature and best practices.
Chris Bulock
Electronic Resources Librarian, Southern Illinois University Edwardsville
Chris Bulock is currently the Electronic Resources Librarian at Southern Illinois University Edwardsville. He has been a NASIG member since 2011, and is on the Electronic Communications Committee. Chris is the chair of the Commercial Products Committee of the Consortium for Academic and Research Libraries in Illinois.
This document discusses IRUS-UK, a national aggregation service that collects and standardizes usage statistics from UK institutional repositories. It processes raw download data into COUNTER-compliant reports available through an online portal. IRUS-UK helps repositories report statistics to managers and researchers, benchmark performance, and support advocacy efforts. The benefits to OpenAIRE of collaborating with IRUS-UK include international collaboration, contributing to COUNTER standard development, and accessing open access content trends over time through statistics piping to the OpenAIRE dashboard.
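The pipeline described above — raw download events rolled up into standardized monthly counts — can be sketched as follows. The event format and report shape here are illustrative assumptions, not IRUS-UK's actual schema or the full COUNTER report layout:

```python
from collections import Counter

def monthly_report(events):
    """Roll raw download events up into per-item, per-month counts.

    Each event is a (item_id, iso_date) pair, e.g. ("oai:repo:123", "2017-06-16").
    Returns {(item_id, "YYYY-MM"): count}, the shape of one row in a
    COUNTER-style item report.
    """
    counts = Counter()
    for item_id, date in events:
        month = date[:7]  # "YYYY-MM"
        counts[(item_id, month)] += 1
    return dict(counts)

events = [
    ("oai:repo:123", "2017-06-01"),
    ("oai:repo:123", "2017-06-16"),
    ("oai:repo:456", "2017-07-02"),
]
report = monthly_report(events)  # {("oai:repo:123", "2017-06"): 2, ...}
```

A real aggregator would additionally filter robot traffic and double-clicks before counting, as the COUNTER Code of Practice requires.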
UCL Museums & Collections undertook a comprehensive review of its nearly 380,000 objects to assess their current state and ensure ongoing support. A methodology was developed that assigns each collection grades of A-E against criteria including collections care, documentation, and use. A pilot on the zoology collection helped refine the process. The review identified collection strengths and weaknesses, informed strategic planning, and will enable prioritizing care for areas needing work. It also established a framework for ongoing assessment and continuous improvement.
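A grading scheme like the one described can be sketched in a few lines; the criteria names and the "worst grade flags priority" rule are illustrative assumptions, not UCL's published methodology:

```python
# A = best, E = worst; each collection is graded against several criteria.
GRADES = "ABCDE"

def worst_grade(assessment):
    """Return the poorest grade across criteria for one collection,
    a simple way to flag where care is needed first."""
    return max(assessment.values(), key=GRADES.index)

# Hypothetical grades for a pilot collection:
zoology = {"collections care": "B", "documentation": "D", "use": "C"}
priority = worst_grade(zoology)  # "D" -> documentation needs attention first
```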
Towards a national archives network - Nick Kingsley (The National Archives), RDTF-Discovery
The document discusses the history and current state of archival networks in the UK that aim to provide a single access point for catalog descriptions of archives from different institutions. It notes that while various networks have been established, they are often not comprehensive or sustainable due to separate funding streams. The National Archives is exploring using linked open data approaches to better connect descriptive elements across networks and repositories.
The document discusses the objectives, purposes, and functions of a library catalogue. It defines a library catalogue as a list of print and non-print materials accessible from a particular library. The main purposes of a library catalogue are to serve as a guide to the library's collection and to aid users in locating materials. An effective catalogue should enable users to find materials by author, title, subject, and other access points. The cataloging process involves preparing bibliographic records that describe materials and provide standardized subject headings and classifications.
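The access points named above (author, title, subject) are what a catalogue index actually maps to records. A minimal sketch of that mapping, with invented record data:

```python
def build_index(records, access_points=("author", "title", "subject")):
    """Build a simple catalogue index: (access point, value) -> record ids."""
    index = {}
    for rec in records:
        for ap in access_points:
            for value in rec.get(ap, []):
                index.setdefault((ap, value.lower()), []).append(rec["id"])
    return index

records = [
    {"id": "b1", "author": ["Ranganathan, S. R."],
     "title": ["The Five Laws of Library Science"], "subject": ["Library science"]},
    {"id": "b2", "author": ["Cutter, C. A."],
     "title": ["Rules for a Dictionary Catalog"], "subject": ["Library science"]},
]
index = build_index(records)
# A subject lookup now finds every matching record:
hits = index[("subject", "library science")]  # ["b1", "b2"]
```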
How Libraries Use Publisher Metadata - Crossref Community Webinar, Crossref
The document provides an overview of how libraries use publisher-provided metadata in library discovery systems. It discusses how libraries obtain MARC records and direct linking metadata from publishers and suppliers to incorporate content into library discovery services. It also describes how OpenURL linking and link resolvers allow libraries to provide access to publisher content through library discovery interfaces and services. Accurate metadata is essential for successful linking to full-text content.
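An OpenURL is essentially a key-value description of a cited item that the link resolver turns into a full-text link. A minimal sketch under stated assumptions — the resolver endpoint is a placeholder, and only a few of the Z39.88 journal keys are shown:

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, metadata):
    """Construct a simple OpenURL query from citation metadata.

    `resolver_base` is the institution's link-resolver endpoint (hypothetical
    here); the rft.* keys follow the ANSI/NISO Z39.88 journal format.
    """
    params = {"url_ver": "Z39.88-2004", **metadata}
    return resolver_base + "?" + urlencode(params)

url = build_openurl(
    "https://resolver.example.edu/openurl",  # placeholder endpoint
    {"rft.jtitle": "Serials Review", "rft.volume": "39", "rft.spage": "97"},
)
```

The resolver's knowledgebase then matches these citation elements against the library's holdings to decide which full-text target, if any, to offer.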
Presented at the International Internet Preservation Consortium (IIPC) Web Archiving Week, University of London, 16 June 2017.
Web archiving has become imperative to ensure that our digital heritage does not disappear forever, yet many institutions have not begun this work. In addition, archived websites are not easily discoverable, which severely limits their use. To address this challenge, OCLC Research has established the OCLC Research Library Partnership Web Archiving Metadata Working Group to develop a data dictionary that will be compatible with library and archives standards. Three reports from this project, published in July 2017, cover metadata best practices guidelines, user needs and behaviors, and an evaluation of web archiving tools.
More information: oc.lc/wam
Contact: Jackie Dooley, dooleyj@oclc.org
Striking the Balance - public access and commercial reuse of digital content, Collections Trust
Presentation to the Association of Cultural Enterprises Picture Library Symposium on the subject of how UK museums are striking the balance between public access to and commercial reuse of digital cultural content.
FAIR data requires FAIR ontologies and standards. There has been an explosion in the number of ontologies, but they are difficult to identify and manage due to a lack of consistent metadata. Existing ontology metadata practices were reviewed, finding that developers use various metadata vocabularies inconsistently and that important ontology-specific metadata is underused. Ontology repositories help make ontologies more FAIR by providing interfaces for publishing, accessing, and reusing ontologies and their metadata. The presentation focuses on the NCBO BioPortal as an example ontology repository and how its technologies have been adopted by other repositories. Improving ontology metadata will help comprehension of the ontology landscape; a new AgroPortal metadata model was created to better describe ontologies and their metadata.
LoCloud Vocabulary Services: Thesaurus management introduction, Walter Koch a..., locloud
This presentation provides an introduction to thesaurus management in the LoCloud Vocabulary Services, given during the LoCloud training workshops. It covers controlled vocabularies, thesauri for information retrieval and interoperability, SKOS, multilingual vocabulary issues, and the federated model adopted for thesaurus management within the LoCloud service, which is based on TemaTres. The presentation includes a list of the vocabularies that have been integrated within the LoCloud service. There is also a walk-through of MediaThread and how this was used in the vocabulary management training offered in the workshop.
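The broader/narrower structure that SKOS gives a thesaurus can be sketched with a toy data model; the concept ids and labels below are invented, and a real service would use URIs and an RDF store rather than dictionaries:

```python
# Minimal sketch of SKOS-style relations: each concept has a preferred
# label (skos:prefLabel) and zero or more broader concepts (skos:broader).
thesaurus = {
    "c1": {"prefLabel": "ceramics", "broader": []},
    "c2": {"prefLabel": "earthenware", "broader": ["c1"]},
    "c3": {"prefLabel": "slipware", "broader": ["c2"]},
}

def broader_chain(concept_id, t):
    """Walk skos:broader links from a concept up to its top concept,
    the traversal behind 'broaden your search' features."""
    chain = []
    current = concept_id
    while t[current]["broader"]:
        current = t[current]["broader"][0]
        chain.append(t[current]["prefLabel"])
    return chain

path = broader_chain("c3", thesaurus)  # ["earthenware", "ceramics"]
```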
Opening up the archives: from basement to browser, Amanda Hill
The document summarizes the current state of archive gateways in the UK that provide access to archival descriptions and collections. It describes several existing networks like Archives Hub, AIM25, and A2A that aggregate finding aids from different repositories. Archives Hub aims to be a single point of access for archives in educational institutions. It has grown significantly since starting as a pilot in 1999 and now includes descriptions from over 150 repositories, though some collections only have brief level descriptions while others include item-level details. Future plans include transitioning to a more distributed model where repositories can host their own data and moving to new protocols to expose the data.
The document discusses automated cataloguing systems and their advantages. It provides requirements for an automated catalogue module including supporting standard formats like MARC, generating lists and statistics, and enabling record downloads. Key concepts are defined, such as bibliographic records, fields, and tags. Outputs of automated systems include the OPAC, reports, and information products. Automated cataloguing reduces clerical work and supports data interchange and information services.
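The record/field/tag concepts mentioned above can be illustrated with a toy MARC-style structure (245 is the title statement tag, 100 the main author; the record content is invented, and real MARC fields also carry indicators and repeatable subfields):

```python
# Sketch of a bibliographic record keyed by three-digit MARC-style tags,
# each holding subfield codes -> values.
record = {
    "100": {"a": "Shakespeare, William"},           # main entry: personal name
    "245": {"a": "Hamlet", "c": "by William Shakespeare"},  # title statement
    "650": {"a": "Tragedy"},                        # subject heading
}

def field(rec, tag, subfield="a"):
    """Read one subfield from a record, roughly as an OPAC display would."""
    return rec.get(tag, {}).get(subfield, "")

title = field(record, "245")   # "Hamlet"
author = field(record, "100")  # "Shakespeare, William"
```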
SPECTRUM, the UK and international collections management standard, is used by more than 7,000 museums worldwide. The Collections Trust used this seminar to launch SPECTRUM 4.0.
We reviewed progress with the translation and localisation of SPECTRUM in other countries, and attempted to set out a 5-year roadmap for the future development needs of the community.
This seminar was designed for non-technical people working with collections in museums, archives and libraries in the UK and Europe. We also welcomed participation from Collections Management software vendors and from people interested in translation/localisation.
This document provides a summary of a framework for managing vocabularies as presented at the TDWG Vocabulary Management Task Group meeting. It discusses the status of TDWG ontologies, requirements for a vocabulary management framework, and Semantic MediaWiki as a potential platform for collaborative vocabulary development. Key points include:
- Vocabularies are a core component of the TDWG technical architecture and provide shared understanding of terms, but development and governance has been challenging.
- A framework is needed to standardize the process for minting terms, releasing finalized concept vocabularies, and reusing terms in other schemas and ontologies to promote interoperability.
- Semantic MediaWiki is proposed as a platform for collaborative vocabulary development.
Maja Žumer: Library catalogues of the future: realising the old vision with n..., ÚISK FF UK
The document discusses the future of library catalogs and metadata, noting that catalogs need to change to meet new user needs and expectations by making data more intuitive to explore, exposing relationships between works and other entities, and fully utilizing the quality of library metadata. It also reviews the history and conceptual models for bibliographic data like FRBR, FRAD, and FRSAD, which aim to present bibliographic information in a more user-oriented way. Libraries will need new systems built on these conceptual models to improve user tasks like finding, identifying, selecting, and exploring materials.
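The FRBR group-1 entities named above — a Work realised through Expressions, embodied in Manifestations — can be sketched as a small data model. The class shapes and the Hamlet data are illustrative, and FRBR's Item level is omitted for brevity:

```python
from dataclasses import dataclass, field

@dataclass
class Manifestation:
    fmt: str      # e.g. "print", "ebook"
    year: int

@dataclass
class Expression:
    language: str
    manifestations: list = field(default_factory=list)

@dataclass
class Work:
    title: str
    expressions: list = field(default_factory=list)

work = Work("Hamlet", [
    Expression("en", [Manifestation("print", 1604), Manifestation("ebook", 2010)]),
    Expression("de", [Manifestation("print", 1766)]),
])
# Grouping by Work lets a catalogue show one result with all versions beneath it.
n_versions = sum(len(e.manifestations) for e in work.expressions)  # 3
```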
The document discusses supplemental materials that accompany journal articles. It notes that supplemental materials are growing in size and importance, and include various types of multimedia, datasets, computer programs, and additional text/figures. However, there are open questions around how to classify, cite, preserve, and provide access to supplemental materials over time. The document then outlines efforts by the NISO/NFAIS working group to address these issues through the development of best practices and technical recommendations for managing supplemental materials.
ROHub is a reference platform that provides a holistic solution for managing research objects (ROs) through their entire lifecycle. It allows users to create, store, publish, discover, and reuse ROs. ROs include any research outcomes or related resources packaged with metadata to make them FAIR (Findable, Accessible, Interoperable, Reusable). ROHub is integrated into the European Open Science Cloud to enable open sharing of ROs across scientific communities.
Library review: improving back-of-house processes through richer integrations..., Talis
The document discusses improving back-of-house processes at Nottingham Trent University Library through richer integrations between their library management system Aspire and other systems. It describes developing an API for Aspire to share data with other applications to help automate acquisitions decisions. This could include integrating Aspire data with the library's reading list system, student enrollment data, circulation history, usage statistics, and supplier catalogues to determine acquisition needs based on factors like student enrollment and usage. The goal would be to route acquisition decisions through formulas using this additional data to make the process more automated and informed. It asks for comments on the feasibility and responsibility for developing such integrations.
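Routing acquisition decisions through a formula, as described above, might look like the sketch below. The signals, weights, and scaling constants are all illustrative assumptions, not the formula proposed in the talk:

```python
def acquisition_score(enrolment, loans, downloads, on_reading_list,
                      w_enrol=0.4, w_use=0.4, w_list=0.2):
    """Combine demand signals into a single acquisition score.

    Enrolment and usage are scaled by hypothetical reference values (100
    students, 50 uses) so the weighted terms are comparable.
    """
    usage = loans + downloads
    score = (w_enrol * enrolment / 100
             + w_use * usage / 50
             + w_list * (1 if on_reading_list else 0))
    return round(score, 2)

# A heavily used reading-list title for a large module scores highest:
s = acquisition_score(enrolment=200, loans=30, downloads=45, on_reading_list=True)
```

The design point is that each data source (reading lists, enrolment, circulation, usage statistics) contributes one term, so new integrations can be added without restructuring the decision logic.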
An institutional repository is a digital archive that collects, preserves, and disseminates the research output of an institution. It provides open access to scholarly articles, theses, data sets, and other materials. Repositories help increase the visibility and impact of an institution's research and satisfy funder mandates for open access. They benefit researchers, institutions, libraries, and the global research community by providing free access to scholarly works. Content in a repository can include faculty research, student theses and projects, and other materials. Maintaining a repository requires developing policies, building infrastructure, and gaining institutional support.
An institutional repository is a digital archive for collecting, preserving, and disseminating the research output of an institution. It aims to increase visibility and access to scholarship. Repositories help manage intellectual property and preserve content over the long term. They support the institution's mission by providing open access to research and learning materials.
The document summarizes activities from an ontology hackathon focused on creating VOCREF, an ontology cataloguing characteristics of ontologies and vocabularies relevant to their suitability for semantic web and big data applications. Key activities included:
1) Establishing a GitHub repository for the VOCREF project and initial ontology development using OWL2.
2) Developing a top-level VOCREF ontology module and beginning additional content modules to catalogue relevant characteristics.
3) Reviewing existing ontologies and evaluations for potential reuse in VOCREF and beginning the process of integrating relevant concepts.
Libraries as Knowledge Infrastructure of the 21st century: the role of Librar..., LIBER Europe
Libraries as Knowledge Infrastructure of the 21st century: the role of Libraries in the future of Research and Higher Education. A presentation by Dr. Paul Ayris (LIBER President) to the European Commission.
TrustArc Webinar - 2024 Global Privacy Survey, TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Introduction of Cybersecurity with OSS at Code Europe 2024, Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
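The CVE-checking workflow sketched in the talk — compare locked dependency versions against a published advisory list, as tools like bundler-audit do — can be illustrated as follows. The advisory data and matching rule here are simplified assumptions (real advisories specify version ranges, not single versions):

```python
def vulnerable(dependencies, advisories):
    """Flag dependencies whose pinned version appears in an advisory list.

    `dependencies` maps package name -> version, as a lockfile records it;
    `advisories` maps (name, version) -> CVE id. Both are illustrative.
    """
    return {name: advisories[(name, ver)]
            for name, ver in dependencies.items()
            if (name, ver) in advisories}

deps = {"rack": "2.2.3", "nokogiri": "1.13.3"}
advisories = {("nokogiri", "1.13.3"): "CVE-2022-24836"}  # example advisory entry
flags = vulnerable(deps, advisories)  # {"nokogiri": "CVE-2022-24836"}
```

In practice the fix is then to bump the flagged dependency to a patched version and regenerate the lockfile with the package manager.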
Similar to Revisiting Museum Collections with Modes
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
2. Revisiting Museum Collections
“Revisiting Collections
provides a framework for
embedding new understanding
and perspectives on objects
directly within the museum’s
collection management
system, ensuring that it forms
part of the story about the
collections that is recorded
and made accessible to all.”
3. Revisiting Museum Collections
with Modes Complete
RMC units of information
1. Description and history
2. Associations and references
3. Use of the object
4. Owner’s contributions
5. Viewers’ contributions
6. Record & attribution information
4. Revisiting Museum Collections
with Modes Complete
• RMC units of information are mapped to
SPECTRUM, and all SPECTRUM units of
information can be represented by Modes
elements.
• RMC units of information are not mandatory.
• Choose the units which are appropriate to the
current project.
5. Revisiting Museum Collections
with Modes Complete
• RMC units of information can be added to
records in the main catalogue
OR
• RMC data can be recorded in a separate file,
linked to the main catalogue.
6. Revisiting Museum Collections
with Modes Complete
1. Description and history
A record of the origins and use of the object,
perhaps deduced from examination of the
object. Modes records are usually rich in this
type of information.
Use the Association, Description,
FieldCollection and ConditionCheck groups.
Use Origin to record the name of the place
of origin of the Material (RMC Material
source).
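As a rough sketch, the groups named above might sit together in a Modes XML record along these lines. Only the group names (Description, Association, Origin) come from the slide; the child elements and nesting shown here are assumptions for illustration, not a definitive Modes template:

```xml
<Description>
  <!-- Free-text account of the object's origins and use,
       perhaps deduced from examining the object -->
  <Text>Willow basket, probably made locally for market use.</Text>
</Description>
<Association>
  <!-- People, places and activities connected with the object -->
  <Text>Carried to the weekly market until the 1950s.</Text>
</Association>
<Origin>
  <!-- Name of the place of origin of the material (RMC Material source) -->
  <Place>Somerset</Place>
</Origin>
```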
7. Revisiting Museum Collections
with Modes Complete
2. Associations and references
A record of the people, places, activities and
other objects connected with the object. Modes
records are usually rich in this type of
information.
Use the Association and References groups.
Use ObjectIdentity within Related Object to
record the record number related to the object
being documented (RMC Related object number).
If the related record is in a different file, use
Filename to record the name of the file.
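A cross-reference to a related record might then look something like this. RelatedObject, ObjectIdentity and Filename are named on the slide; the Number element, the nesting and the file name are illustrative assumptions:

```xml
<RelatedObject>
  <ObjectIdentity>
    <!-- Record number of the related object (RMC Related object number) -->
    <Number>1998.24.7</Number>
  </ObjectIdentity>
  <!-- Only needed when the related record is in a different file -->
  <Filename>socialhistory</Filename>
</RelatedObject>
```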
8. Revisiting Museum Collections
with Modes Complete
3. Use of the object
A record of the selection and use of the
object for exhibition, research, or other
activities within the museum.
Use either the ObjectUse or the Exhibition
group, depending on the nature of the
event.
Use the Commentary group to record
exhibition labels or other display text
(RMC Label text).
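For example, an exhibition event and its display text might be recorded side by side. Exhibition and Commentary are the group names given on the slide; their contents and nesting here are hypothetical:

```xml
<Exhibition>
  <!-- Selection and use of the object for display -->
  <Title>Rural Life Gallery</Title>
</Exhibition>
<Commentary>
  <!-- Display text shown alongside the object (RMC Label text) -->
  <Text>This basket was woven from local willow for everyday market use.</Text>
</Commentary>
```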
9. Revisiting Museum Collections
with Modes Complete
4. Owner’s contributions
Knowledge about the object contributed by a
former owner of this, or a similar, object.
Use the Evidence group to record an owner’s
contributions including
RMC Owner’s personal experience and
RMC Owner’s personal response.
The owner and ownership dates can be
recorded in the same Evidence group, or
separately in an Ownership group.
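Taking the second option, an owner's contribution and the ownership details might sit in parallel groups, for example (Evidence and Ownership are the group names from the slide; everything inside them is illustrative):

```xml
<Evidence>
  <!-- RMC Owner's personal experience / Owner's personal response -->
  <Person>Former owner</Person>
  <Text>Bought the basket new in 1962 and used it daily at market.</Text>
</Evidence>
<Ownership>
  <!-- Owner and ownership dates, recorded separately -->
  <Person>Former owner</Person>
  <Date>1962-1984</Date>
</Ownership>
```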
10. Revisiting Museum Collections
with Modes Complete
5. Viewers’ contributions
Responses to the object contributed by
visitors, researchers or others interacting
with the object within the museum.
Use the Evidence group to record the
viewer’s contributions including
RMC Viewer’s personal experience and
RMC Viewer’s personal response. If there is
more than one viewer, repeat the
Evidence group to represent each person’s
individual contributions. Collective or group
contributions can be recorded as a single
Evidence group.
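Repeating the group for each individual viewer might look like this (only the Evidence group name comes from the slide; the child elements are assumed for illustration):

```xml
<Evidence>
  <!-- First viewer (RMC Viewer's personal experience / response) -->
  <Person>A. Visitor</Person>
  <Text>Remembers using a similar basket as a child.</Text>
</Evidence>
<Evidence>
  <!-- Second viewer, a separate repeat of the group -->
  <Person>B. Researcher</Person>
  <Text>Notes that the weave is typical of the region.</Text>
</Evidence>
```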
11. Revisiting Museum Collections
with Modes Complete
6. Record & attribution information
Information about the status of the
record being created or amended.
Use the RecordType group to describe the
level of the record (collection, group, or
item level record).
Use the RecordProgress group to record
stages in the amendment of the record.
Authority may also be used within any other
group to record its amendment.
Use Recorder for the main authorship of
the record.
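Put together, the record-level metadata might be held along these lines. RecordType, RecordProgress and Recorder are the group names given on the slide; their contents and nesting are illustrative assumptions only:

```xml
<RecordType>item</RecordType>
<RecordProgress>
  <!-- One entry per stage in the record's amendment history -->
  <Date>2013-09-25</Date>
  <Keyword>amended</Keyword>
</RecordProgress>
<Recorder>
  <!-- Main authorship of the record -->
  <Name>J. Curator</Name>
</Recorder>
```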
12. Revisiting Museum Collections
with Modes Complete
Find out more:
1. Get the RMC toolkit from Collections Link
http://www.collectionslink.org.uk/
2. Get the User Guide from Modes
http://www.modes.org.uk/members/user-guides
- full mapping of RMC / SPECTRUM units of information to Modes
elements.
3. Contact Modes for model templates and element groups.