Presentation given at Cilip ARLG/MmIT day conference on "Research(er) Workflows in the Real World" on 9 Dec 2019 at the British Library Conference Centre. Conference summary at: https://mmitblog.wordpress.com/2020/01/20/researcher-workflows-in-the-real-world-a-guest-review-from-our-bursary-winner/
Leeds Met Open Search - towards an integrated solution for research and OER (Nick Sheppard)
The document discusses the development of an integrated search solution at Leeds Met for both open access research outputs and open educational resources (OERs). It describes how the university adapted its existing repository to provide an Open Search interface for both research and OER materials. Key features of the Open Search implementation include advanced search and browsing capabilities, identifying materials by content type, and differentially formatting research results. Ongoing work focuses on areas like search engine optimization, differentiating research by type, and improving the RSS feeds.
Or2019 DSpace 7 Enhanced submission & workflow (4Science)
The last two years have been very intense for the DSpace community. A great effort has gone into finalizing the DSpace 7.0 release, which introduces many changes from previous releases, particularly in its UI technology.
As part of the activities related to the creation of DSpace 7, particularly innovative is the submission and workflow process that can be associated with the different collections.
The presentation will provide a deep dive into the new Enhanced Submission and Workflow features of DSpace 7, including how to configure, customize, and use them, and how they differ from DSpace 6 and earlier.
The document discusses Khalifa University's implementation of ORCID identifiers to capture faculty publications, avoid name ambiguity, and easily link publications to citation profiles. Key tasks completed include starting the implementation in September, creating an intranet page for faculty sign-up, and conducting training sessions. Ongoing tasks involve connecting more faculty IDs, harvesting data for the institutional repository using an ORCID plugin, and adding features to the dashboard. Future plans are to show ORCID links for authors, push repository data to faculty profiles, and automate collecting data for faculty pages using ORCID.
ORCID identifiers in research workflows - ACM (B. Rous) (ORCID, Inc)
The document discusses ACM's integration of ORCID identifiers into its research workflows. The objectives are to get ACM authors to obtain ORCID IDs and grant ACM permission to access their profiles, allow ORCID registrants to claim ACM profiles, and associate ORCID IDs with ACM author profiles. This will eliminate the need to normalize author names and allow ACM to push publications to and pull data from ORCID records. Points of integration include submission systems and author profile pages. Status updates show new ACM-ORCID registration and sign-in pages. Next steps are to further develop the integration and move it from testing to production.
SPEVO13 - Dev213 - Document Assembly Deep Dive Part 2 (John F. Holliday)
This session picks up from part 1 by extending the development strategy to include using XPath expressions embedded within a MS Word document template.
This document provides an overview of how to search the chemical literature database SciFinder. It discusses accessing SciFinder and registering an account. The document outlines the goals of learning basic SciFinder searching, obtaining full-text articles, and exporting records to reference management systems. It also describes the various reference databases contained within SciFinder and how the online version compares to print indexes. Finally, it provides instructions on using interlibrary loan and exporting references to RefWorks.
Plastic Pollution Presentation By AnkitMishra (Ankit Mishra)
This document discusses plastic pollution and management. It provides an introduction to plastics, their composition, types and uses. It then discusses the disadvantages of plastics like releasing pollutants and absorbing toxic chemicals. Statistics about global plastic production and consumption are presented. The document outlines how plastic waste impacts the environment and marine life, forming garbage patches in oceans. It stresses the need for better plastic waste management and measures to curb pollution.
Using the Archivists' Toolkit: Hands-on practice and related tools (Audra Eagle Yun)
The document provides an overview of the Archivists' Toolkit (AT), a free and open-source archival management application. It discusses the key functions of AT including recording accessions, describing archival materials and digital objects, managing locations, and exporting records. The document then demonstrates how to get started with AT by setting up a repository record and user accounts. It guides the user through activities like adding locations, creating accession and resource records, and linking them together. Finally, it discusses how to export finding aids from AT in EAD format and submit them to online archives like the Online Archive of California.
Google AutoML, AWS SageMaker and other ML tools automate some but not all steps in machine learning workflows. Learn about problem formulation, data engineering, monitoring, and fairness assessment.
Data visualization is undoubtedly a key component of our industry. The path data travels from its creation until it takes shape in a chart is sometimes obscure and overlooked, as it tends to live on the engineering side (when volume is relevant), an area Data Scientists tend to visit but the typical Web/Marketing Data Analyst does not. Nowadays there are many options for taming that journey and making the best of it, and they don't require extensive engineering knowledge. Small or big data, let's see what "Store, Extract, Transform, Load, Visualize" is all about.
This document discusses using SQL Developer for reporting. It covers creating different types of reports in SQL Developer like canned reports, user-defined reports, parent/child reports, and drill down reports. Advanced reporting options like charts, HTML rendering, and command line report generation are also covered. The presenter provides examples of complex user-defined reports that use HTML, JavaScript, and are kicked off via the command line.
After this presentation you will know how to:
- sell Drupal 8 to the business in a large enterprise
- plan migration of code and content
- technically migrate a lot of custom code and data
- automate migration process
- test migration and regression
- overcome migration challenges, based on a JYSK case
https://drupalcampkyiv.org/node/55
Talend Open Studio for Data Integration is an open-source ETL tool, which means small companies or businesses can use it to extract, transform, and load their data into databases or a wide range of file formats (Talend supports many file formats and database vendors).
IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B... (Mark Rittman)
Mark Rittman from Rittman Mead presented on Oracle Big Data Discovery. He discussed how many organizations are running big data initiatives involving loading large amounts of raw data into data lakes for analysis. Oracle Big Data Discovery provides a visual interface for exploring, analyzing, and transforming this raw data. It allows users to understand relationships in the data, perform enrichments, and prepare the data for use in tools like Oracle Business Intelligence.
MongoDB Certification Study Group - May 2016 (Norberto Leite)
Study group session reviewing the certification exam: the material covered, exam structure, and technical requirements. Both the DBA and Developer tracks are covered, to ensure individuals' technical expertise on subject-matter topics specific to MongoDB.
Leveraging Oracle's Clinical Development Analytics to Boost Productivity and ... (Perficient)
This presentation discusses Oracle Clinical Development Analytics (CDA), a clinical data warehouse and reporting solution. CDA combines data from clinical trial management systems and electronic data capture systems. It includes pre-built interactive dashboards and reports. The presentation covers how CDA can be used by clinical operations and data management, how to create new reports and dashboards, CDA's architecture, ways to extend CDA, and services provided by BioPharm to support CDA implementation and customization.
In 2008, Harald van Breederode and Joel Goodman wrote a white paper titled "Performing an Oracle DBA 1.0 to Oracle DBA 2.0 Upgrade" in which they suggested DBAs needed to add storage and OS skills to remain relevant in a shifting technical landscape. The role of today's DBA has broadened considerably and with that comes a new set of abilities and concepts to be learned and mastered.
DBA 2.0 was written prior to the release of Oracle 11g and 12c, so the Oracle DBA 3.0 upgrade adds Cloud and virtualization to the DBA's repertoire. Their inclusion also demands that DBAs be able to better manage the security and compliance challenges that come with hybrid and Cloud environments, adapt to continuous deployment cycles, and handle heterogeneous and commingled data stores.
Most significantly DBA 3.0 signals an emergence of the DBA from a mostly utilitarian and anonymous role to one that is more in the limelight. The growing emphasis and influence of data and data-driven decision making means that the DBA must be a partner and driving force in the business and not simply a custodian of the data.
Learn what it will take to build or upgrade your skill set to Oracle DBA 3.0, and how to encourage and mentor a new generation of data professionals into the field.
Making your user happy – how to create a perfect profile (LetsConnect)
User profiles are one of the most important parts of IBM Connections and your social business.
IBM Connections features a set of scripts that enable you to create basic profiles based on your corporate LDAP directory. Because IBM leverages the power of Tivoli Directory Integrator for this task, you can customize it and pull data from almost any system: HR data from SAP, photos from a relational database, skill sets from a Domino database, and much more.
This document discusses Oracle Enterprise Manager 12c patch management. It provides an overview of patching with OEM 12c, including roles, the software library, and My Oracle Support integration. Patch management and control with OEM 12c is also reviewed, explaining how patch plans are used to consolidate patches and map them to deployment steps. Additional tasks, such as creating and reviewing patch plans and capturing lessons learned from them, are also summarized.
The document discusses the identity management system at the University of Edinburgh. It describes the current homegrown system, issues with scalability and cost, and an evaluation of open source and commercial identity management solutions. A blended solution was chosen using the open source Grouper system for group management and reusing existing Oracle and OpenLDAP components. This provided functionality needed while avoiding high licensing costs of a commercial solution.
This document discusses creating a documentation portal. It begins by introducing the speaker and defining what a documentation portal is. The speaker then discusses why one would create a portal, noting that it requires an ongoing commitment. Various planning steps are outlined, including defining problems, requirements and prototypes. The remainder of the document provides a workshop example for creating a portal using an open source project on GitHub called Red Sofa. Steps are outlined for setting up accounts on Heroku and Cloudant, cloning the project, uploading content and reviewing the portal. Additional topics covered include simple configuration, updating content and metadata, customization, and usability testing.
Recap of TrailheaDX in CT. Slides from the group meeting held on 26 Jul.
Blog - http://www.jitendrazaa.com/blog
More information available at -
https://www.meetup.com/Connecticut-Salesforce-Developer-User-Group/events/241570452/?comment_table_id=482174126&comment_table_name=event_comment#
How did it go? The first large enterprise search project in Europe using Shar... (Petter Skodvin-Hvammen)
This document summarizes a presentation about implementing a large enterprise search project in Europe using SharePoint 2013. It describes the background of the global oil services company undertaking a knowledge initiative. It details the key pains they faced, content sources indexed, and search strategy. It outlines the infrastructure needs, customizations made, performance considerations, and efforts to improve relevancy. In conclusion, it provides the current status and outcomes of the project.
Streamline RJS Document Management with AutoMate (HelpSystems)
As an RJS customer, you’ve purchased our document management software to solve specific needs such as capturing and storing scanned paper documents, creating electronic forms, or capturing digital signatures. Now is your chance to further streamline your document management efforts by creating automated business processes with AutoMate.
This webinar explains how document automation can help you:
Capture and publish documents to SharePoint
Use OCR to extract information and route incoming documents
Capture email from any mail system and use content to launch business processes
Prepare business documents for check-in to WebDocs
Easily integrate line-of-business data into the document management capture process
Learn actionable ways to streamline and automate your document management process today.
Shaking hands with the developer: How IT Communications can help you build a ... (Sarah Khan)
McGill University has transitioned to using Drupal to power over 850 of its websites. This presentation discusses McGill's journey with Drupal over the past 20 years, including challenges faced and solutions implemented. It provides an overview of McGill's current technical Drupal architecture, support resources and training provided to help site managers effectively use and maintain the system. The core team of 13 staff work in an Agile workflow to continuously improve and expand the Drupal implementation across McGill's various departments and websites.
Leveraging the Chaos tool suite for module development (zroger)
CTools, aka the Chaos tool suite, is one of the most popular and arguably least understood modules in the contributions repository. While most users enable it only because of a dependency (e.g. Panels), there are some wonderful gems in this toolkit that simplify module development.
Top Benefits of Using Salesforce Healthcare CRM for Patient Management.pdf (VALiNTRY360)
Salesforce Healthcare CRM, implemented by VALiNTRY360, revolutionizes patient management by enhancing patient engagement, streamlining administrative processes, and improving care coordination. Its advanced analytics, robust security, and seamless integration with telehealth services ensure that healthcare providers can deliver personalized, efficient, and secure patient care. By automating routine tasks and providing actionable insights, Salesforce Healthcare CRM enables healthcare providers to focus on delivering high-quality care, leading to better patient outcomes and higher satisfaction. VALiNTRY360's expertise ensures a tailored solution that meets the unique needs of any healthcare practice, from small clinics to large hospital systems.
For more info, visit https://valintry360.com/solutions/health-life-sciences
UI5con 2024 - Bring Your Own Design System (Peter Muessig)
How do you combine the OpenUI5/SAPUI5 programming model with a design system that makes its controls available as Web Components? Since OpenUI5/SAPUI5 1.120, the framework supports the integration of any Web Components. This makes it possible, for example, to natively embed your design system's own Web Components, created with Stencil. The integration embeds Web Components in such a way that they can be used naturally in XMLViews, like standard UI5 controls, and can be bound with data binding. Learn how you can use the Web Components base class in OpenUI5/SAPUI5 to integrate your own Web Components, and get inspired by the solution for generating a custom UI5 library that provides control wrappers for the native Web Components.
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
WWDC 2024 Keynote Review: For CocoaCoders AustinPatrick Weigel
Overview of WWDC 2024 Keynote Address.
Covers: Apple Intelligence, iOS18, macOS Sequoia, iPadOS, watchOS, visionOS, and Apple TV+.
Understandable dialogue on Apple TV+
On-device app controlling AI.
Access to ChatGPT with a guest appearance by Chief Data Thief Sam Altman!
App Locking! iPhone Mirroring! And a Calculator!!
8 Best Automated Android App Testing Tool and Framework in 2024.pdfkalichargn70th171
Regarding mobile operating systems, two major players dominate our thoughts: Android and iPhone. With Android leading the market, software development companies are focused on delivering apps compatible with this OS. Ensuring an app's functionality across various Android devices, OS versions, and hardware specifications is critical, making Android app testing essential.
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and ready exercise to reinforce your muscle-memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
Malibou Pitch Deck For Its €3M Seed Roundsjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
Artificia Intellicence and XPath Extension FunctionsOctavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
Microservice Teams - How the cloud changes the way we workSven Peters
A lot of technical challenges and complexity come with building a cloud-native and distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot of us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
KuberTENes Birthday Bash Guadalajara - Introducción a Argo CD
Recycling and reusing
1. Recycling and reusing
It’s not just for environmentalists
Callie Coward, UNCG
http://sites.uci.edu/ics5enviroblog/archives/136
3. What programs will you need?
• OCLC Connexion Client/WorldShare Management Service
• MarcEdit
• Microsoft Excel
• Google Sheets
• CONTENTdm Project Client
8. Tips on working sheets
• Use Google Sheets
• Include articles when entering title information
• Include any information that is in the Local Holdings Record (LHR) that you will need
• Include any information that needs to be in the CONTENTdm record that isn’t in the cataloging record
13. • Metadata tab
• Select My Library Holdings
• Title (or OCLC number)
• Search for title (or OCLC number)
21. Version I’m using in case you don’t have MarcEdit yet.
http://marcedit.reeset.net/downloads
40. Things to consider when harvesting
• Add some kind of qualifier in the title
• Punctuation
• Who’s the publisher?
• UNCG since we digitized it and created the image? The original publisher?
• Who’s the creator?
• Binding designer? Original author?
Hi! I’m Callie Coward and today we are going to talk a little bit about recycling and reusing metadata from the library catalog in our CONTENTdm records. I’m a Cataloging and Digital Projects Library Technician at the University of North Carolina at Greensboro. My job lets me look at CONTENTdm records in a little bit different light than other people might.
Recycling and reusing metadata just makes sense. The cataloger has already done the hard work, so why redo what they have done? It’s a lot easier to extract metadata out of the system than to copy and paste one line at a time into the new CONTENTdm record or to start from scratch. Extracting and exporting also offers a quality control buffer. If we have everything described the same way and it’s been looked at many times, we hopefully have cleaner and more reliable data than we would otherwise have if we entered everything in from scratch.
Just so I know how in depth I need to go here, how many people use Worldshare Management Systems (or WMS) for their ILS? Anyone thinking about switching to WMS? How many people have OCLC Connexion Client?
Here’s a list of all the programs we use to get information out of the catalog and CONTENTdm. One thing I will note here is that you cannot use Connexion Browser for this because there is no place to save your files so you must use OCLC Connexion Client.
For the purpose of this presentation, we will take a look at UNCG’s American Publisher’s Trade Bindings collection and how books go through this process. The American Publisher’s Trade Bindings Collection is a group of binding images. We focus on the binding design and the artists behind these designs.
Bindings are described by the cataloging department and added to the catalog record (MARC record) so we can just pull those descriptions, the binding designer, printer, publisher, etc… from the catalog record itself.
The most important thing you have to do is establish a workflow. Do what feels right for your institution. Remember, there is no harm in trial and error.
Books are brought down from Special Collections. I sort them into what we are going to add to the online project and what we are not.
The ones we do add, I add them to a Google Sheets spreadsheet (which you will see in a few slides) and put them on a cart to take over to Digital Projects.
Scan the book and add pertinent information to the sheet (dimensions, scan date, corrective actions)
Since there is no metadata for these books yet, we save them under their titles and, once they are ready to upload, we rename them with their OCLC numbers.
Once the books have been through quality control, they make their way back to cataloging.
We catalog the books, adding relevant information to the spreadsheet, and save the file in a list and in our local file.
Export the list (or file)
Clean it up
Load it into CONTENTdm
Done!
So let’s look at this in a little more depth (and with more pictures)
Here is our working spreadsheet. We have a title column, what collection the book is a part of, the OCLC number, date it was cataloged, date it was taken to Digital Projects, Date it was scanned, the dimensions, Quality Control information, and the rest are just housekeeping columns.
Make sure to include anything that you need in the metadata record that isn’t exported by whatever program you use. When we were just exporting our files out of Client we had our call number right there and didn’t need the call number column in our spreadsheet. When we started exporting out of WMS we realized we needed that column because it doesn’t pull our local holdings information (call number, barcode, etc…).
We decided to go with Google Sheets because it is so much more convenient than Excel: more than one person can view a document at a time. There are people from three different departments working on this project (Special Collections, Digital Projects, and Cataloging). If you have multiple people working on a project, Google Sheets is the way to go. Plus it helps that we have Gmail at UNCG and have access to these Google products.
Be sure to include your articles even on the working sheet. When you export your metadata, it will have the articles in the title information, so you need them in this document so everything sorts correctly when you merge the two sheets together.
Like I said earlier, you have to include anything that you would need from the local holdings record because it doesn’t export, yet.
And include any information on this sheet that isn’t in the cataloging record, in this case it would be the corrective actions, who scanned it, when it was scanned, and the dimensions of the material.
Since most people probably have OCLC, let’s start with OCLC Client. Get into OCLC Connexion Client. As stated earlier, you will not be able to do this process in OCLC Browser because there is no way to save records into a file. What I normally do is just save the records into my local file as I work on them. We still order cards (which will be going away in September) so all these saved records have all the local call numbers and information I want for my records. When I’m ready to export a batch of records, I go into my save file, select all the records I want to put into CONTENTdm…
After you select all of your records, select the E with a green arrow from your toolbar (the export button), or go to the Action menu and select Export (or, in my case, just press F5). You’ll know they are ready to export when you get the R in the export column.
Go to Batch > Process Batch. Then this screen will pop up. Check the box next to the file you saved the records in, check Exports, and say OK. It’ll ask you where you want to save your files, and then you’ll get a batch export report that will let you know if it was successful.
STOP! Let’s switch gears a little bit. Since we are done with the Connexion Client part, let’s switch to how to get the information out of WMS since everything is the same after getting information out of the respective programs.
Log into WMS. Get into the Metadata tab. Make sure My Library Holdings is selected (if you already have holdings on the item; if not, you’ll have to expand this to All WorldCat). Select the facet you will be searching by (OCLC number, title, ISBN, etc…) and then search for that item.
Once you have found your item and have all your local information that you want in your local bibliographic data, click Record Actions, Add to Export List. We normally add the book to the list as we are cataloging it.
Select the list you would like to add your record to, say Add. A little green banner will appear at the head of your record saying it was added.
Once you have added all of the records you would like to your list, click on saved lists on the left hand side.
Head on over to your export tab and you’ll see all of your saved lists. One thing I will note is that, sadly, these aren’t communal. You can only see your own saved lists, so if someone else is working on the same project, they will have their own saved list and you will have yours. Please note the expiration date. These files only last for two weeks. They do extend the time as you add items to the collection, so it is two weeks from the last time you added an item to the collection. So export before you leave on your two week vacations!
What exactly are we exporting? We are exporting the master record that is found in OCLC AND any local notes/information that we have added in the local bibliographic data. As you can see from the list, it has LHRs (local holdings records, where the call numbers and barcodes are stored) on the list, but we CANNOT currently get this information out using a saved list. My guess is that we are going to be able to export this information in the future since they have a column for it, just not right now.
Select all, then Export. It’ll ask you where you want to save your file. I normally save my files with the date of the export and save them collectively in a folder I have marked for the American Trade Bindings Collection.
Now it’s time to get into MarcEdit. If you haven’t seen MarcEdit, it’s a wonderful tool for getting .dat, .mrc, and .mrk files into the format you need. Best of all, it’s free; you just download it off the internet! I’m running version 6.0.5…. on my computer. Here’s the web address to download it.
Go into Tools and select Batch Process Records.
Click “export tab delimited records.” Set the file paths and make sure you do tab delimiter. Make sure to select ALL FILES when looking for your .dat file; it defaults to only look for .mrc files, but you will be able to find it if you select ALL FILES.
Be sure to select normalize field data. This is one of the most important buttons to check. If you don’t, you will have a lot more clean up on your hands and it will take longer. You will need to know a little MARC here because you are pulling MARC fields. You can select the fields you would like to export by either typing them or selecting them from the drop-down menu. You can also specify a particular subfield if you would like. For example, you can export just the publisher or just the publication date if you didn’t need or want the whole field. This is also where you need to take your naming conventions into account. Our records are first saved under their titles since they aren’t cataloged yet, and then, after they are cataloged, we rename them with their OCLC numbers. We export the OCLC numbers here so we can construct our file names more easily.
One thing I will suggest is to export your metadata as it will be added in CONTENTdm. Try to match the MARC fields as closely as you can to the Dublin Core fields you will be using for CONTENTdm. This will make it so much easier when you are uploading into CONTENTdm.
After you have selected all the fields you would like to use in your CONTENTdm metadata, click export and this box will appear.
Next you will need Excel. Open a new workbook and find the document you just saved. It will be saved as a .txt file, so make sure you select to look at all documents when you are looking for your file. When you try to open your file, this box will appear. Make sure Delimited is selected, then hit Next.
Tab delimiters, next, general, finished
And this is what you will get. Now is the time to clean up the data as much as you can! Like this thing. This would be how a copyright symbol comes over. We want to get rid of that completely and just have one date appear in this column for our purposes.
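If you would rather script this cleanup than fix each cell by hand in Excel, here is a minimal Python sketch of the same idea. Everything here is an assumption for illustration: the "Date" column name, the layout, and the sample row are not our actual export.

```python
import csv
import io
import re

def clean_date(raw: str) -> str:
    """Keep only the first four-digit year, so "c1899." or a
    copyright-symbol date like "©1899" both come out as "1899"."""
    match = re.search(r"\d{4}", raw)
    return match.group(0) if match else raw

def clean_export(tab_delimited_text: str, date_col: str = "Date") -> str:
    """Rewrite one column of a tab-delimited export with clean_date()."""
    reader = csv.DictReader(io.StringIO(tab_delimited_text), delimiter="\t")
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames,
                            delimiter="\t", lineterminator="\n")
    writer.writeheader()
    for row in reader:
        row[date_col] = clean_date(row[date_col])
        writer.writerow(row)
    return out.getvalue()

# Hypothetical sample row standing in for one line of the export.
sample = "Title\tDate\nThe Gilded Book\tc1899.\n"
print(clean_export(sample))
```

Running this over the real export is just a matter of reading the .txt file into a string first; the point is that any repeatable fix, like stripping copyright symbols, can be scripted once instead of being re-done by hand each batch.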
This is also the point where you combine the information that you have on your Google sheet with your Excel file as well. Anything you want uploaded into CONTENTdm (beside blanket information that you have in your metadata template already), make sure it is on this sheet. Also remember that your filename will be in the last column of the sheet.
Once you have added everything you need to, copy and paste ONLY the metadata sections into another excel sheet and save it as a text file. This will eliminate any extra columns and rows that you don’t need.
Yes, you want to keep the format; re-save, then get out of Excel completely.
Just say no to extra white space. Hit backspace until your cursor is right against the last text in the document. Hit save. Close out the program.
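The copy-only-the-metadata-columns step, including the no-extra-white-space rule, can also be sketched in Python. The column names below are hypothetical placeholders, not our real headers:

```python
import csv
import io

def metadata_only(tab_delimited_text: str, keep: list[str]) -> str:
    """Copy only the named metadata columns into a fresh tab-delimited
    file, dropping housekeeping columns, stray spaces around values,
    and any trailing blank lines."""
    reader = csv.DictReader(io.StringIO(tab_delimited_text), delimiter="\t")
    rows = ["\t".join(keep)]  # new header row
    for row in reader:
        rows.append("\t".join(row[col].strip() for col in keep))
    # Joined without a trailing newline: no extra white space at the end.
    return "\n".join(rows)

# Hypothetical sheet with one housekeeping column ("Scan Date") to drop.
sample = "Title\tScan Date\tFilename\nThe Gilded Book\t2015-06-01\t12345.jpg\n"
print(metadata_only(sample, ["Title", "Filename"]))
```

Because the rows are joined without a trailing newline, the output ends flush against the last field, which is exactly the state you want before handing the file to Project Client.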
Now to get into Project Client. I want to open an existing project since I already have a project set up (Project > Open). If you need to do a new project (Project > New), enter your credentials and select your collection.
Say Open. If you haven’t set up your template, you do that under Other Tasks. Click Edit Metadata Template; I just do the general project template. Hit Edit and fill in all the fields that will have the same metadata. In this case it would be the type, collection home page, language, digital publisher, original format, contributing institution, and statement of rights.
Now you are ready to upload!
Go to the left hand side, say add multiple items. Select import using a tab-delimited text file. Find your .txt file. Hit next.
We are going to import from a directory. Find the folder where you saved all of your images. It’s really important to keep all your images in one folder because of this. Hit next
Yes, you want CONTENTdm to display images. Click on Image Options > Images and Thumbnails, make sure lossy compression is checked, OK, Next.
We check lossy compression because it converts the images into JPEGs and it “makes the file drastically smaller and faster-loading” (thanks to the head of Digital Projects, David, for providing this reasoning). =)
Now we match up our fields! Luckily we already did this step for the most part when we created our metadata sheet in Excel so we just have to make sure everything matches up. Hit next and then add items. Hopefully everything will be done and done and you won’t get any error messages
When you get this screen you have successfully started your upload! Normally, any errors will immediately pop up before you get this progress bar, so if you see this first thing, you should be good to go. You’ll get a summary report saying that everything has been added. Hit close and then the spreadsheet of all your metadata is in front of your eyes.
Here are some error messages you could encounter with your upload into Project Client.
Error in the application = text file you are trying to upload is open. Close out of the file and try to upload again.
Index was outside the bounds of the array = a space issue (remember, say no to white space); or the match field is blank (update it to match what you saved the file as); or it could be anything else.
Red x’s = file name doesn’t match what’s on the spreadsheet
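A quick way to catch the red-x case before you ever hit upload is to compare the spreadsheet's filename column against what is actually in the image folder. This is only a sketch, and the "Filename" column name is an assumption:

```python
import csv
import io
from pathlib import Path

def missing_images(tab_delimited_text: str, image_dir: str,
                   filename_col: str = "Filename") -> list[str]:
    """Return filenames listed on the sheet that are absent from the
    image folder -- the usual cause of red x's in Project Client."""
    on_disk = {p.name for p in Path(image_dir).iterdir()}
    reader = csv.DictReader(io.StringIO(tab_delimited_text), delimiter="\t")
    return [row[filename_col] for row in reader
            if row[filename_col] not in on_disk]

# Usage (paths hypothetical): read your .txt export into a string,
# then e.g. missing_images(sheet_text, "C:/bindings/images") lists
# every row whose image file is missing or misnamed.
```

An empty list means every row on the sheet has a matching file, so the filename side of the upload should go through cleanly.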
When you have given the metadata a look over to fix any characters that didn’t come across properly or anything else that looks bizarre, you are ready to upload the items into CONTENTdm. Select all, Upload for Approval. After they have been uploaded, head into CONTENTdm Administration.
After you approve and index everything, you are done and done, and your images are up for people to view!
Just to check to make sure everything is as it should be, you can move over to the index tab and make sure you have the green light of success after a few minutes.
We had some people thinking that the records we initially uploaded were eBooks so we decided to add [binding] in the title field to at least help people realize that it was an image and not an eBook
You are able to add in punctuation in the harvester so you don’t necessarily have to worry about MARC punctuation rules. So you can go ahead and leave that period off the end of the creator field
The next two questions (and I’m sure more will be coming) are problems that we are thinking about but haven’t answered yet. We aren’t harvesting this collection until we work out these issues.