Discovery Systems Used in Academic Libraries: Projects & Case Study – Hong (Jenny) Jing
This document discusses discovery systems used in academic libraries and presents projects and case studies built on different systems. It begins with an overview of what discovery systems are and of key products such as Primo, Summon, and EDS. It then describes Summon-based projects involving user experience studies and improvements. A case study on migrating to Summon 2 covers planning, analysis (including surveys), design (including prototypes), and implementation. Finally, it reviews implementing EDS and using its API, and compares the features of Primo, Summon, and EDS.
The workflows for the ingest of digital objects into a repository/digital l... – Hong (Jenny) Jing
The document discusses best practices and considerations for developing workflows for ingesting digital objects into repositories and digital libraries. It covers key aspects of ingest workflows including standards, quality assurance procedures, metadata, tools and software. Example ingest workflows are provided using systems like Archivematica, DSpace and DataVerse to illustrate the ingest process.
EZID makes it simple for researchers and others to obtain and manage long-term identifiers for their digital content. The service can create and resolve identifiers, and it also allows entry and maintenance of information about the identifier (metadata). This presentation was given as part of a webinar series.
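EZID's HTTP API exchanges identifier metadata as ANVL-formatted text: one "label: value" pair per line, with reserved characters percent-encoded. The sketch below builds such a request body in Python; the field names and values are illustrative, and the escaping rules shown should be checked against the EZID API documentation.

```python
def anvl_escape(value: str) -> str:
    """Percent-encode characters that are structural in ANVL text."""
    return (value.replace("%", "%25")
                 .replace("\r", "%0D")
                 .replace("\n", "%0A"))

def to_anvl(metadata: dict) -> str:
    """Serialize a metadata dict into an ANVL request body,
    one 'label: value' pair per line."""
    return "\n".join(
        f"{anvl_escape(label).replace(':', '%3A')}: {anvl_escape(value)}"
        for label, value in metadata.items())

# Hypothetical metadata for a dataset identifier:
body = to_anvl({
    "datacite.title": "Survey results, 50% subsample",
    "datacite.creator": "Doe, Jane",
})
```

A body like this would be sent as the payload of an HTTP request to the identifier service; the escaping keeps literal percent signs and line breaks in values from being misread as ANVL structure.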
To facilitate data sharing from within the University of California system and beyond, the University of California Curation Center (UC3) is developing a new ingest and discovery layer for our data curation service, Dash. Dash uses the Merritt repository for preservation and a self-service overlay layer for submission and discovery of research datasets. The new overlay, dubbed Stash (STore And SHare), will feature an enhanced user interface with a simple and intuitive deposit workflow, while still accommodating rich metadata. Stash will enable individual scholars to upload data through local file browse or drag-and-drop operations; describe data with scientifically meaningful metadata, including methods, references, and geospatial information; identify datasets for persistent citation and retrieval; preserve and share data in an appropriate repository; and discover, retrieve, and reuse data through faceted search and browse. Stash can be implemented in conjunction with any standards-compliant repository that supports the SWORD protocol for deposit and the OAI-PMH protocol for metadata harvesting. Stash will feature native support for the DataCite or Dublin Core metadata schemas, but is designed to accommodate other schemas to support discipline-specific applications. By alleviating many of the barriers that have historically precluded wider adoption of open data principles, Stash empowers individual scholars to assert active curation control over their research outputs; encourages more widespread data preservation, publication, sharing, and reuse; and promotes open scholarly inquiry and advancement.
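The OAI-PMH side of the interoperability described above works over plain HTTP: a harvester issues a request with a `verb` parameter and parses the XML response. A minimal sketch, in which the repository endpoint URL and identifiers are hypothetical:

```python
import urllib.parse
import xml.etree.ElementTree as ET

def list_identifiers_url(base_url: str, metadata_prefix: str = "oai_dc") -> str:
    """Build an OAI-PMH ListIdentifiers request URL."""
    query = urllib.parse.urlencode(
        {"verb": "ListIdentifiers", "metadataPrefix": metadata_prefix})
    return f"{base_url}?{query}"

def parse_identifiers(xml_text: str) -> list:
    """Pull record identifiers out of a ListIdentifiers response."""
    root = ET.fromstring(xml_text)
    tag = "{http://www.openarchives.org/OAI/2.0/}identifier"
    return [el.text for el in root.iter(tag)]

url = list_identifiers_url("https://repository.example.edu/oai")

# A trimmed-down sample response, far smaller than a real one:
sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListIdentifiers>
    <header><identifier>oai:example.edu:dataset/1</identifier></header>
    <header><identifier>oai:example.edu:dataset/2</identifier></header>
  </ListIdentifiers>
</OAI-PMH>"""
ids = parse_identifiers(sample)
```

A production harvester would also follow `resumptionToken` elements to page through large result sets, which this sketch omits.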
The University of Illinois uses a locally developed metasearch service, "Easy Search". We have recently added the ability to query the metasearch program as a RESTful web service, allowing library content to be promoted to external web pages such as departmental web presences or courseware.
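A client of such a metasearch web service typically just composes a query URL and reformats the response for embedding. The sketch below is generic: the endpoint, parameter names, and response shape are invented for illustration and are not Easy Search's actual API.

```python
import json
import urllib.parse

def build_search_url(base_url: str, query: str, max_results: int = 10) -> str:
    """Compose a query URL for a RESTful metasearch endpoint.
    Parameter names here are illustrative, not Easy Search's actual API."""
    params = urllib.parse.urlencode({"q": query, "max": max_results})
    return f"{base_url}?{params}"

def render_results(response_json: str) -> list:
    """Turn a JSON result list into one-line citations suitable for
    embedding in a departmental page or courseware."""
    return [f'{r["title"]} ({r["source"]})' for r in json.loads(response_json)]

url = build_search_url("https://search.example.edu/api", "open access")
sample = '[{"title": "Open Access Overview", "source": "Library Catalog"}]'
lines = render_results(sample)
```

Because the service is just a URL returning structured data, any external page can call it without needing library-side infrastructure.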
This document outlines a presentation about lessons learned from auditing EZproxy logs as an EZproxy administrator. EZproxy is a web proxy server used by libraries to provide remote access to restricted resources. The presentation covers what EZproxy is, reviewing EZproxy log files and security features, performing a security audit, post-review activities, advanced tools, and security lessons learned. Key points include treating geolocation data with skepticism, the value of failed login attempts, finding balance with usage limits, and automating auditing while still using human judgment.
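The "automate auditing, but keep human judgment" point can be sketched as a script that surfaces candidates rather than acting on them. The log lines below are invented, written in the spirit of EZproxy's audit events (such as Login.Failure); the real field layout depends on the server's logging configuration.

```python
from collections import Counter

# Hypothetical audit-style log lines; real EZproxy field layout depends
# on the server's logging configuration.
sample_log = """\
2014-10-21 09:12:03 198.51.100.7 Login.Failure alice
2014-10-21 09:12:09 198.51.100.7 Login.Failure alice
2014-10-21 09:13:44 203.0.113.5 Login.Success bob
2014-10-21 09:14:02 198.51.100.7 Login.Failure alice
"""

def failed_logins_by_ip(log_text: str, threshold: int = 3) -> dict:
    """Count Login.Failure events per source IP, keeping only IPs at or
    above the threshold as candidates for human review."""
    counts = Counter(
        line.split()[2]
        for line in log_text.splitlines()
        if "Login.Failure" in line)
    return {ip: n for ip, n in counts.items() if n >= threshold}

suspects = failed_logins_by_ip(sample_log)
```

The script only flags; whether a burst of failures is a compromised account, a typo-prone user, or a shared-password class exercise is exactly the judgment call the presentation says to leave to a human.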
Freedman Center for Digital Scholarship Colloquium - 14_1106 – jeffreylancaster
The Digital Centers at Columbia University were established to support collaborative work across disciplines through the creation of specialized facilities in the libraries. The centers provide experts, resources, technology services and space to support digital scholarship. They collaborate through a working group and advisory board to facilitate communication, address common needs, and strategize services holistically. This includes collaboration on software selection, workshops, projects and budgets to best support the diverse needs of students, faculty and researchers at Columbia University.
Web of Science Profiles aims to facilitate reporting on an institution's complete research outputs across all organizational levels. It does this by semi-automatically creating and maintaining profiles for researchers in a shared environment. Profiles are pre-populated with Web of Science data and matched to internal HR data to connect publications to authors and departments. As records are enriched and corrected, this feedback loops back to enrich Web of Science. Profiles provide customized reporting and metrics at the researcher, department, and institutional levels to comprehensively track research performance.
Sommer Browning, Assistant Professor; Head of Electronic Access & Discovery Services, Auraria Library, University of Colorado, Denver
NISO Two Day Virtual Conference: Using the Web as an E-Content Distribution Platform: Challenges and Opportunities, Oct 21-22, 2014
RESTful web services follow the architectural constraints of REST (Representational State Transfer). REST is defined as an architectural style for distributed hypermedia systems, consisting of client-server, stateless, cacheable communications and a uniform interface between components. Key aspects of REST include representing resources with unique identifiers, using well-defined operations like GET, POST, PUT and DELETE on these resources, and hypermedia as the engine of application state (HATEOAS) to drive interactions between resources.
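The uniform interface described above can be made concrete with a toy in-memory "server": every resource has a URI, and the same small set of operations applies to all of them. No real HTTP is involved; the class and URIs below are invented purely to illustrate the constraint.

```python
# A toy illustration of REST's uniform interface: every resource has a
# URI, and the same operations apply to all of them. The "server" is a
# dict; no real HTTP is involved.
class ResourceStore:
    def __init__(self):
        self._resources = {}
        self._next_id = 1

    def get(self, uri):
        """Retrieve a representation of the resource, or None."""
        return self._resources.get(uri)

    def put(self, uri, representation):
        """Create or replace the resource at a client-chosen URI."""
        self._resources[uri] = representation

    def post(self, collection_uri, representation):
        """Create a resource under a collection; server assigns the URI."""
        uri = f"{collection_uri}/{self._next_id}"
        self._next_id += 1
        self._resources[uri] = representation
        return uri

    def delete(self, uri):
        """Remove the resource if it exists."""
        self._resources.pop(uri, None)

store = ResourceStore()
new_uri = store.post("/books", {"title": "RESTful Web Services"})
store.put("/books/favorite", {"title": "A client-chosen URI"})
store.delete(new_uri)
```

Statelessness means each call carries everything the server needs (here, the full URI and representation); HATEOAS would additionally have each representation link to the next permissible operations, which this sketch omits.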
A Bibliographic Playlist: Online Reference, Recommender, & Collaborative Acad... – Lorena O'English
PowerPoint presentation I have given at Washington State University about Zotero and Connotea - alternatives to (and expanders beyond) bibliographic management tools such as EndNote. Some slides are hard to read.
Workset Creation for Scholarly Analysis Project presentation at CNI 2013 – Harriett Green
Here are a few ways the HTRC could help with this use case:
1. Enrich the metadata to consistently tag or link works that were originally serialized, and identify the relevant periodicals. This would allow searching and filtering by serialization.
2. Perform text mining on periodical full text to automatically identify works by author and group related snippets together. Machine learning could help with this task.
3. Allow searching bibliographic metadata by author name, then viewing a timeline or network graph of where their works appeared - both in volumes and periodicals. This could help scholars trace serializations.
4. Support analytical workflows that extract and group text snippets by serialized work, rather than just by volume. This would facilitate analysis at the level of the work rather than the physical volume.
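The work-level grouping in point 4 amounts to re-keying extracted snippets by (author, work) instead of by volume. A minimal sketch; the snippet records below are invented for illustration and are not HTRC's actual workset schema.

```python
from collections import defaultdict

# Hypothetical snippet records; HTRC's actual workset schema differs.
snippets = [
    {"volume": "periodical-1851-06", "author": "A. Author",
     "work": "Novel X", "text": "first installment"},
    {"volume": "periodical-1851-07", "author": "A. Author",
     "work": "Novel X", "text": "second installment"},
    {"volume": "periodical-1852-01", "author": "B. Writer",
     "work": "Story Y", "text": "complete tale"},
]

def group_by_work(records):
    """Regroup volume-level snippets into work-level sequences."""
    works = defaultdict(list)
    for record in records:
        works[(record["author"], record["work"])].append(record)
    return works

works = group_by_work(snippets)
```

Each value in `works` is the ordered run of installments for one serialized work, which is the unit a scholar tracing serialization actually wants to analyze.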
February 18 2015 NISO Virtual Conference
Scientific Data Management: Caring for Your Institution and its Intellectual Wealth
Improving Integrity, Transparency, and Reproducibility Through Connection of the Scholarly Workflow
Andrew Sallans, Partnerships, Collaborations, and Funding, Center for Open Science
NISO Two Day Virtual Conference: Using the Web as an E-Content Distribution Platform: Challenges and Opportunities, Oct 21-22, 2014
Beth R. Bernhardt, Assistant Dean for Collection Management and Scholarly Communications, University Libraries, University of North Carolina Greensboro
Anna Craft, Metadata Cataloger, University Libraries, University of North Carolina Greensboro
The document summarizes developments in Cambridge University Library's transition to more digital resources and services. It discusses how the library has shifted significant portions of its materials budget to online journals and databases. It also describes the library's implementation of a new "resource discovery" platform to help users more easily search and access the library's diverse digital collections, which had previously been scattered across different systems. Additionally, the document outlines the library's "COMET" project to publish a large portion of its metadata as open linked data on the semantic web.
Exploiting the value of Dublin Core through pragmatic development – Paul Walk
The document discusses applying agile software development practices to information modeling and application profile development. It focuses on three areas: 1) using application profiles to combine metadata elements for specific uses, 2) learning from iterative software practices like continuous testing and collaboration, and 3) the importance of working openly with users and communities. The talk promotes applying techniques like minimum viable products, iterative design, and open participation to help application profiles better meet user needs.
The document discusses the future of Cochrane Reviews and scientific articles moving away from static documents toward structured linked data and interfaces. It argues that the focus should shift from the documents themselves to the things they are about (e.g. populations, interventions, outcomes), which can be connected as a web of data. This would allow content to be more nimble, traveling freely across datasets while retaining context. Interfaces could provide better access than documents by enabling smart search and filtering of this linked data graph. In this future, content and its delivery matter more than the containers (documents and articles) themselves.
This document provides an overview of an introductory course on Web Science. It discusses key topics including:
1. What is Web Science and why it matters as an area of scientific study.
2. Key aspects of web architecture like URIs, URLs, HTTP, and file formats.
3. Methods of measuring the web through network analysis and studying structures like the blogosphere and social networks.
4. The Web Science Method which takes an iterative, mixed methods approach of engineering, measuring, and analyzing the web.
5. The social aspects of the web and challenges of incorporating human behavior.
6. Issues of web governance, security, and standards setting.
Your digital humanities are in my library! No, your library is in my digital ... – Rebekah Cummings
A presentation on the intersection of libraries and digital humanities presented at the Utah Digital Humanities Symposium at Utah Valley University on February 26, 2016.
Exploring a world of networked information built from free-text metadata – Shenghui Wang
This document summarizes a presentation about exploring topics through networked information extracted from free-text metadata. It describes challenges in exploring topics and related aspects. It then demonstrates an online interface called Ariadne that addresses these challenges by generating semantic representations of entities from a large dataset and identifying nearest neighbors and related entities through multidimensional scaling. Finally, it discusses potential applications of this approach and references related work.
NISO Two Day Virtual Conference: Using the Web as an E-Content Distribution Platform: Challenges and Opportunities, Oct 21-22, 2014
John Mark Ockerbloom, Digital Library Architect and Planner, University of Pennsylvania
This document summarizes a presentation on using linked data with the digital asset management system Islandora. It discusses how linked data can help with issues like web traffic, data reuse, authority control and faster record editing. Examples are given of using linked data in Islandora, including adding RDF to objects and querying the data. Case studies of specific implementations at institutions like Delft University of Technology are also covered. The presentation concludes by discussing potential next steps and how linked data relates to library services more broadly.
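The kind of querying linked data enables can be sketched as triple-pattern matching. The snippet below is a pure-Python stand-in for the SPARQL querying a real triplestore offers; the subject URIs and the VIAF identifier are hypothetical.

```python
# Minimal triple-pattern matching over in-memory RDF-style triples, a
# stand-in for SPARQL against a real Islandora triplestore. Subject
# URIs and the VIAF identifier below are hypothetical.
triples = [
    ("info:fedora/obj:1", "dc:title", "Aerial Photographs"),
    ("info:fedora/obj:1", "dc:creator", "http://viaf.org/viaf/0000"),
    ("info:fedora/obj:2", "dc:creator", "http://viaf.org/viaf/0000"),
]

def match(pattern, data):
    """Return triples matching an (s, p, o) pattern; None is a wildcard."""
    return [t for t in data
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Authority control in action: find every object whose creator is the
# same controlled URI, rather than string-matching name variants.
same_creator = match((None, "dc:creator", "http://viaf.org/viaf/0000"), triples)
```

Because the creator is a URI rather than a free-text name, the query groups records reliably, which is exactly the authority-control benefit the presentation describes.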
Searching of Web and Electronic Resources – Bramesha B
This document provides an overview of tools and techniques for searching the web and other online resources. It discusses the evolution of the internet from 1945 to present day. It then covers various types of search tools including search engines, meta search engines, directories, digital libraries, and scholarly communication directories. Finally, it outlines strategies for effective searching such as defining problems, selecting keywords, determining scope, and refining searches.
Rural Call: Shall we connect the DOTS with the Rural World?
Ramkrishna Sameriya presented Rural Call at the Indian Science Congress MP Chapter on 26 March 2014 at Vigyan Bhawan, Bhopal.
This document outlines tasks for student groups to complete an interdisciplinary project on the 2013 FIFA Confederations Cup. The groups are assigned different subjects including art, geography, food technology, English, music, history, and maths. For each subject task, the groups must complete an activity in Spanish and design a related starter activity to teach the topic to their peers in English with a focus on language skills. The overall goals are to reinforce cross-curricular learning and language acquisition while studying the Confederations Cup tournament. Students will create a PowerPoint presentation to showcase their work which will be published online. The top performing year group will visit primary schools as a reward.
The Old Testament Student – Elimringi Moshi
This document is the table of contents for Volume VII of The Old Testament Student, covering issues from September 1887 to June 1888. It lists over 40 articles on topics related to studying the Old Testament, including editorials, studies of biblical books, discussions of interpretation methods, and reports. The editor seeks to promote placing a rigorous, scholarly study of the Bible in college curriculums, as the opportunity has arrived for a movement to make it an elective course at all American colleges. Currently, most colleges offer little genuine biblical study and do not treat religious departments with the same dignity as other areas.
Inotec is a company represented by a young and enthusiastic team that set out in 2004. Since then, we have taken on the mission of creating innovative software solutions to streamline business processes within companies. What we pursue in our work is to bring added value to our clients and partners through the projects we develop together with them.
That being the goal, the rest is the pleasure of a job well done!
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
Municipal budgets for 2017 increased for most local authorities in the Viseu region. Priority investments include urban requalification, the construction of industrial parks and business zones, and environmental sustainability projects such as new wastewater treatment plants. Some municipalities attribute the budget growth to the capture of European Union funds for these projects.
The document is a user manual for the Eee PC that describes the various parts of the device like the keyboard, touchpad, ports and buttons. It provides instructions on powering on the device and configuring wireless networks. The manual also explains how to use included utilities and features like the touchpad, function keys, and system recovery options.
About the talk
Data handling and server architecture: how to make it possible to handle huge amounts of data quickly at relatively low overhead.
Speaker: Jaksa Zsombor
This document contains images from various sources including Bing and Flickr that were used under fair use for a non-commercial purpose. The images are credited to their original sources and are intended to help illustrate concepts or ideas being discussed.
Elimringi Moshi - Hell Testimonies / Shuhuda za Kuzimu – Elimringi Moshi
This document contains a collection of testimonies about experiences of hell. It begins with an introduction explaining that not all who claim to know Jesus will enter the kingdom of heaven, as many are deceived. It then provides a table of contents outlining 16 hell testimonies from various individuals around the world. The document aims to serve as a serious warning to believers about the reality of hell for those who live in sin and disobedience.
Revista Racco 07, July 2016. Orders: 44-9957-9694, email lupegorini@hotmail... – Lusani Dias
The technology company announced a new product, a smartphone with a high-resolution camera and a long-lasting battery. The device also features cloud storage and a fast processor. The launch is scheduled for next month with a starting price of US$499.
Moshi Monsters is a website where you can adopt virtual pets called monsters and care for them by decorating their homes, feeding them, and playing games to earn rocks to buy items. There are different types of pets you can collect like Poppets, Zommer, and Livli. You can also grow plantable pets called Moshlings. Membership allows you to have more pets, do missions, and access special areas. The document recommends being safe online by not sharing private information or saying unkind things.
ALA 2014 - Adding copyright/license information to different library systems – Hong (Jenny) Jing
Recent changes to Canada’s Copyright Act have propelled copyright and licensed use into the spotlight at colleges and universities in Canada. Ensuring that comprehensive information on licensing permissions is displayed to our users is an urgent task. This session will look at how Queen’s University carried out a project to add license information to different library systems.
The document summarizes key physical characteristics of the moon and discusses its creation according to the Genesis account. It notes that the moon revolves around Earth in a sidereal month of 27.3217 days and a synodic period from new moon to new moon of 29.53 days. The Genesis creation story is presented as making as much sense as any other theory for the moon's origin. Worship of the moon as a deity dates back to ancient Sumer and spread to influence many cultures. The Bible warns against worshipping the moon or denying God, which can lead to losing touch with reality.
LinkedIn: Where business happens - Fredrik Bernsel (LinkedIn EMEA) – Social.Lab
This document discusses LinkedIn and how it connects professionals globally. It notes that LinkedIn has over 105 million members worldwide and 225 million monthly unique visitors, far more traffic than other professional sites. The document then provides statistics on Belgium, including that it has over 1.6 million LinkedIn members. It outlines how Belgian professionals use LinkedIn to connect, communicate, research people and companies. The rest of the document discusses how LinkedIn allows members and brands to engage in professional networking, publish content, and build communities to be successful.
This presentation was provided by
Priscilla Caplan of The Florida Center for Library Automation and Jeremy York of The University of Michigan Library, during the NISO Webinar "What It Takes To Make It Last: E-Resources Preservation" held on February 10, 2011.
This document summarizes a workshop on open science and open data for librarians. The workshop covered introducing open science and open data, how data can inform the library profession and support research, tools and applications for working with data, and developing a data strategy for libraries. It discussed stakeholders in research data, why librarians are important data partners, the role of librarians in advocating for open data and managing repositories. The workshop also covered data skills needed by librarians and introducing trusted data repositories.
Reflections of a Digital Steward: Recommendations for Scholarship and Preserv...Millie Gonzalez
The librarian underwent a pivot in their role and responsibilities to managing digital services and an institutional repository. During a sabbatical, the librarian benchmarked Framingham State University's repository against other established repositories, identified best practices, and conducted research. The librarian recommends the library formalize the institutional repository team as a department to expand digital services like data management and digital preservation. The librarian proposes morphing the repository into a Digital Scholarship Center to offer additional expertise and services to students and faculty.
10-15-13 “Metadata and Repository Services for Research Data Curation” Presen...DuraSpace
“Hot Topics: The DuraSpace Community Webinar Series," Series Six: Research Data in Repositories” Curated by David Minor, Research Data Curation Program, UC San Diego Library. Webinar 2: “Metadata and Repository Services for Research Data Curation”
Presented by Declan Fleming, Chief Technology Strategist, Arwen Hutt, Metadata Librarian & Matt Critchlow, Manager of Development and Web ServicesUC, San Diego Library.
“Filling the digital preservation gap”an update from the Jisc Research Data ...Jenny Mitcham
This document summarizes the findings of the Jisc Research Data Spring project at the University of York and Hull which investigated how Archivematica could be used to provide digital preservation for research data. The project tested Archivematica, explored how it handles different file formats and research data, and identified ways to improve Archivematica and integrate it into research data management workflows. The next phases will develop Archivematica further and implement proof of concepts at York and Hull to preserve research data using Archivematica.
RDAP 16 Poster: Challenges and Opportunities in an Institutional Repository S...ASIS&T
Research Data Access and Preservation Summit, 2016
Atlanta, GA
May 4-7, 2016
Poster session (Wednesday, May 4)
Presenters:
Amy Koshoffer, University of Cincinnati
Eric J. Tepe, University of Cincinnati
Capture All the URLs: First Steps in Web ArchivingKristen Yarmey
The document summarizes a webinar on getting started with web archiving. It discusses making the case for a web archiving program, selecting content, crawling and scoping websites, providing access to archived content, and building a sustainable program through policies, metadata, quality control, and addressing challenges. The webinar covered lessons learned and next steps such as additional outreach and exploring new technologies and uses for archived web content.
RDAP 15: Research Data Integration in the Purdue LibrariesASIS&T
Research Data Access and Preservation Summit, 2015
Minneapolis, MN
April 22-23, 2015
Lisa Zilinski, Data Specialist, Carnegie Mellon University
Amy Barton, Metadata Specialist, Purdue
Tao Zhang, Digital User Experience Specialist, Purdue
Line Pouchard, Computational Science Information Specialist, Purdue
Pete E. Pascuzzi, Molecular Biosciences Information Specialist, Purdue
Duraspace Hot Topics Series 6: Metadata and Repository ServicesMatthew Critchlow
Presented by Declan Fleming, Arwen Hutt, and Matt Critchlow. The second in a three part Webinar series on Research Data Curation at UC San Diego, as part of the larger Research Cyberinfrastructure initiative.
February 18 2015 NISO Virtual Conference Scientific Data Management: Caring for Your Institution and its Intellectual Wealth
Learning to Curate Research Data
Jennifer Doty, Research Data Librarian, Emory Center for Digital Scholarship, Emory University, Robert W. Woodruff Library
Research Cyberinfrastructure at UCSD - David Minor - RDAP12ASIS&T
Research Cyberinfrastructure at UCSD
David Minor
UC San Diego Libraries San Diego Supercomputer Center
Presentation at Research Data Access & Preservation Summit
22 March 2012
Changing the Curation Equation: A Data Lifecycle Approach to Lowering Costs a...SEAD
This document discusses the Sustainable Environment Actionable Data (SEAD) project, which aims to lower the costs and increase the value of data curation through a data lifecycle approach. SEAD provides lightweight data services to support sustainability research, including secure project workspaces, active and social curation tools, and integrated lifecycle support for data from ingest to long-term preservation. By leveraging technologies like Web 2.0 and standards, SEAD simplifies and automates curation processes using metadata captured from data producers and users. This allows curation activities to begin earlier in the data lifecycle and be distributed across researchers and curators.
This document summarizes a seminar on data management for undergraduate researchers. It discusses what data is, why it needs to be managed, and key aspects of the data management process such as data organization, metadata, storage, and archiving. Topics covered include file naming best practices, version control, documentation, metadata standards, storage options, and long-term archiving. The goal is to help researchers organize and document their data so it can be understood, preserved, and reused.
User-centered research for developing programs & articulating value.Lynn Connaway
Connaway, L. S. (2019). User-centered research for developing programs & articulating value. Presented at the University of Adelaide, February 18, 2019, Adelaide, Australia.
2013 DataCite Summer Meeting - Purdue University Research Repository (PURR) (...datacite
Michael Witt presented on the Purdue University Research Repository (PURR) at the DataCite summer meeting. PURR is a collaborative effort between Purdue University Libraries, Office of the Vice President for Research, and Information Technology. It provides researchers a space to store, share, and publish research data, with librarian support for data management plans and curation. PURR aims to encourage citation of datasets by assigning identifiers, displaying licenses, providing citation examples, and exposing structured citations. It is built on open source HUBzero software and has over 1,000 registered researchers sharing data across 200 projects.
PIDs, Data and Software: How Libraries Can Support Researchers in an Evolving...Sarah Anna Stewart
Presentation given at the M25 Consortium of Academic Libraries, CPD25 Event on 'The Role of the Library in Supporting Research'. Provides an introduction to data, software and PIDs and a brief look at how libraries can enable researchers to gain impact and credit for their research data and software.
(Nov 2008) Preparing Future Digital CuratorsCarolyn Hank
Event: Practical Applications of Digital Curation Education panel at the Fall 2008 Meeting of the Mid-Atlantic Regional Archives Conference, Silver Spring, MD, November 7, 2008. With Helen R. Tibbo, Sayeed Choudhury, and Kenneth Thibodeau
Similar to The workflows for the ingest of digital objects into a repository/digital library (20)
The Selection Between An Open Source And Vended Software in Libraries:Oppor...Hong (Jenny) Jing
This document discusses the opportunities and risks of selecting open source software versus vended software for libraries. It covers topics like project management, functional requirements, and evaluating different options. Open source software provides benefits like customization and cost savings but also risks around support, documentation, and reliability. Vended products offer stability, support, and standardization at a higher cost with less flexibility. The document provides examples of open source and vended software options for different library systems and suggests steps for evaluating and selecting the ideal solution based on a library's unique needs and resources.
Institutional Repository (IR) and Open Access in Academic LibrariesHong (Jenny) Jing
This document discusses institutional repositories (IRs) and open access in academic libraries. It provides an overview of IR trends, including a move toward collaboration between libraries through consortia to share costs and expertise. The document also describes common IR systems and functions, such as collecting and curating digital scholarly output. Workflow processes for IRs are discussed, as well as metrics for evaluating an IR's success. Best practices from libraries like COPPUL that have developed shared IR tools are also acknowledged.
The Impact of Linked Data in Digital Curation and Application to the Catalogu...Hong (Jenny) Jing
(Full version of the presentation: https://www.youtube.com/watch?v=WS9Svbmp-YY)
Information organization and systems in libraries are in a state of significant flux. In systems there is a shift to XML and RDF-based schemas and ontologies while resource description content standards have changed from AACR2 to RDA. A move from MARC to BIBFRAME and other linked data applications is on the horizon. Linked data and the semantic web have become buzzwords, but what is linked data and why it is important for librarians? How can we use it in digital curation? What can libraries do now to “prepare” for this change in their current practice?
In light of these questions, the panel presentation will discuss two projects. First, there will be coverage of a sample project using the Fedora-based open source framework, Islandora to demonstrate the concepts of connecting related data across the Web with URIs, HTTP and RDF. The second half of the presentation will describe how a consortia has taken a holistic approach to writing an RDA workflow to help front-line cataloguers develop a wider perspective when it comes to resource description (creating more structured, future compatible metadata). Up for discussion: the current state and future possibilities of library metadata with a focus on the implications of linked data.
Strategic Developments in Digital Initiatives at Academic LibrariesHong (Jenny) Jing
The document discusses three critical strategic developments for academic libraries to focus on regarding digital initiatives:
1. Focus digital initiatives on collaborating with stakeholders to develop new user services.
2. Use multiple systems and adopt new technologies like linked data and Fedora 4 for digital assets and institutional repositories.
3. Work with partners through consortia to share costs, expertise, and enhance standards and cooperation. The document provides examples of current technologies, systems, and consortia collaborations to illustrate these strategic developments.
Digital asset management (dam) systems used in LibrariesHong (Jenny) Jing
This document summarizes different digital asset management systems used in academic libraries. It discusses integrated repository systems, digital asset management systems, archival description software, digital preservation systems, and exhibition software. It also compares features of digital asset management systems and institutional repositories. The document presents case studies on allocating resources to collections services and system maintenance. It recommends systems for Queen's University Library based on factors like being Fedora-based and integration with other frameworks.
Implementing an Open Source IT Ticketing System at Queen's University LibraryHong (Jenny) Jing
For many years, Queen’s University Library has used an internally designed ticketing system for handling all technical requests sent by library staff. In the summer of 2014, we started moving to a more formal system for tracking, delegating, and resolving reported issues. This presenation will walk through the group’s evaluation process, the lessons we learned, as well as customizations and modifications made to our open-source choice, which will serve as an IT ticketing system, an inventory list and an internal knowledge base.
Transparent Licenses: Making user rights clear (OLA Super Conference 2015)Hong (Jenny) Jing
Recent changes to Canada’s Copyright Act have propelled copyright and licensed use into the spotlight at colleges and universities in Canada. This session will look at Queen’s and University of Toronto libraries’ experience implementing a licensing permissions workflow using OCUL Usage Rights database (OUR). The systems will be covered are: 360 Link, Summon, Voyager OPAC, Endeca. We will explain how to implement the license links with and without using API.
Letter and Document Automation for Bonterra Impact Management (fka Social Sol...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
A Comprehensive Guide to DeFi Development Services in 2024Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
5. The Project WBS
1. Needs/requirements
   1. Review process
   2. Review collections
   3. Define use cases
   4. Technology: HW/SW
2. Use Standards
   1. Preservation
   2. Metadata
   3. Other
3. Staffing & Role
   1. Plan: staff
   2. Define: tasks
   3. Assign: role
   4. Training
4. Quality control
   1. Deliverables
   2. Standards
   3. Activities
   4. Tools/staff
6. 1. Needs & requirements – Ingest
• Review current procedures
• Review collections:
- Analog: type of carrier, condition of the media, etc.
- Born digital: file format, software
• Define use cases
• Analyze existing functionality, tools, and workflow
• Identify development needs and functional requirements
• Identify and recommend the tools/software to use
7. 2. Standards: Preservation & metadata
Trusted Digital Repositories (TDR): Attributes and Responsibilities
1. Administrative Responsibility
2. Organizational Viability
3. Financial Sustainability
4. Technological & Procedural Suitability
5. System Security
6. Procedural Accountability
Open Archival Information System (OAIS): Reference Model
Tools: Archivematica, Archivists' Toolkit, DAM/Islandora
9. Batch processing: things to consider
Analog files: standards, processes, equipment, and storage for media records
Images: file format, size, large datasets
Technology: software (BitCurator, Archivematica, ArchivesSpace), storage hardware
Staffing: who does the work
Metadata: PREMIS, METS (born-digital, copyrights)
• SIP creation and PREMIS rights
• Metadata is service driven
• Generated/extracted (automated) vs. human-created
• Where in the workflow is metadata created
• In what form(s)
13. Ingest Workflow: IR - DSpace @ Queen’s University
Rosarie Coughlan, Scholarly Publishing Librarian, Queen’s University Library
14. Ingest Workflow - Sample using Archivematica
http://www.slideshare.net/mikeum/shallcross-mmdp-a220150327final
15. Acknowledgements
• Rachel Wise, Archivist, Knowledge & Library Services, Harvard Business School
• Kelli Babcock, Digital Initiatives Librarian, Information Technology Services, University of Toronto Libraries
• Bronwen Sprout, Head, Digital Programs and Services, Digital Initiatives, Irving K. Barber Learning Centre, University of British Columbia
16. Resources/References
• ArchivesSpace-Archivematica-Dspace Workflow Integration
• http://www.slideshare.net/mikeum/shallcross-mmdp-a220150327final
• UBC Library's Digital Preservation Strategy
• http://www.slideshare.net/ubclibrary/ubc-librarys-digital
• Digital POWRR
• http://digitalpowrr.niu.edu/tool-grid/
• UBC's Implementation of IR/DAM
• http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/CI/CI/pdf/mow/VC_Sprout_Romkey_26_B_1540.pdf
• Review of Available Open Source DAM Software
• http://www.opensourcedigitalassetmanagement.org/reviews/available-open-source-dam/
• Curating the Analog, Curating the Digital - Archive Journal
Editor's Notes
Digital curation establishes, maintains and adds value to repositories of digital data for present and future use.
It enhances the long-term value of existing data by making it available for further research.
Born digital: resources are items created and managed in digital form.
Considerations take on added dimensions:
• Versions: should unintentionally retained drafts be kept?
• Privacy: should deleted files be recovered?
• Rights/Licensing: does the original license transfer when software is “inherited”?
• Access
Scope: to create a project workflow for the ingest of a combination of analog and born digital archival collection into a digital library
The ingest process for born-digital material is still new at many libraries; much depends on departmental priorities, and there is often a lack of sufficient storage for such materials.
Ingest functions:
The project workflow for the ingest of a combination of analog and born digital archival collection into a digital library
In our case:
Collections: analog and born digital archival collection
Use cases and development needs: batch processing of digital images and metadata; applying standards; identifying staffing; applying quality control.
Collect Requirements: Tools/Techniques: Interviews, focus groups, observations, etc.
Based on the functional requirements, review the available tools/software and identify the ones that meet our requirements.
Ingest: Archivematica
Archival Storage (short term, long term): DAM, OCUL Cloud (Dark Archive)
Data Management: Archivematica/DAM
Administration: oversee the systems
Access: AtoM
Preservation Planning: Set up policies for the above processes
Common Services: services required by the IT system, such as applying security patches.
Purpose of OAIS:
• raise understanding, awareness, consensus
• enable effective participation
• describe and compare
– architectures and operations
– preservation strategies and techniques
• understand digital information data models
• guide growth of OAIS-related standards
1. We receive the object.
2. Quality checking: is it good enough? Use a checklist (metadata, shape, etc.); you can sample if you are familiar with the content.
3. What information do we need to know about this object?
4. Hand it off to data management.
Batch processing of digital images and metadata
Collections:
Analog and born digital archival collection
When transferring analog sound file collections, a workflow should be established to outline the standards, processes, equipment, and storage for media records.
When selecting tools, we need to make sure the interface supports batch processing; for example, the PREMIS in METS Toolbox does not support it.
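As a concrete illustration of the batch-processing concerns above (file formats, sizes, checksums), a manifest of a transfer directory can be built with a short script. This is a minimal sketch: the manifest layout, field names, and `build_manifest` function are illustrative assumptions, not part of the presentation or any specific tool.

```python
import csv
import hashlib
import mimetypes
from pathlib import Path

def sha256_of(path):
    """Compute a SHA-256 checksum, streaming in chunks so large files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(source_dir, manifest_path):
    """Walk a transfer directory and record each file's relative path,
    guessed MIME type, size, and checksum -- raw material for quality
    control and for later fixity metadata."""
    manifest_path = Path(manifest_path)
    with open(manifest_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["relative_path", "mime_type", "bytes", "sha256"])
        for path in sorted(Path(source_dir).rglob("*")):
            # Skip directories and the manifest itself if it lives in source_dir.
            if path.is_file() and path.resolve() != manifest_path.resolve():
                mime, _ = mimetypes.guess_type(path.name)
                writer.writerow([
                    path.relative_to(source_dir),
                    mime or "unknown",
                    path.stat().st_size,
                    sha256_of(path),
                ])
```

In practice the manifest would be written outside the transfer directory so the package contents stay untouched.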
PREMIS:
• Common data model for organizing/thinking about preservation metadata
• Checklist for core metadata in a repository
• Guidance for local implementations
• Standard for exchanging information packages (IPs) between repositories
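The kind of preservation metadata PREMIS describes can be illustrated with a tiny event-record builder. The element names follow the PREMIS version 3 schema, but treat this as an unvalidated sketch for illustration, not repository code.

```python
import xml.etree.ElementTree as ET

# PREMIS version 3 XML namespace.
PREMIS_NS = "http://www.loc.gov/premis/v3"
ET.register_namespace("premis", PREMIS_NS)

def fixity_event(event_id, date_time, outcome):
    """Build a minimal PREMIS event element recording a fixity check."""
    def q(tag):
        # Qualify a tag name with the PREMIS namespace.
        return f"{{{PREMIS_NS}}}{tag}"

    event = ET.Element(q("event"))
    ident = ET.SubElement(event, q("eventIdentifier"))
    ET.SubElement(ident, q("eventIdentifierType")).text = "UUID"
    ET.SubElement(ident, q("eventIdentifierValue")).text = event_id
    ET.SubElement(event, q("eventType")).text = "fixity check"
    ET.SubElement(event, q("eventDateTime")).text = date_time
    info = ET.SubElement(event, q("eventOutcomeInformation"))
    ET.SubElement(info, q("eventOutcome")).text = outcome
    return ET.tostring(event, encoding="unicode")
```

Events like this are what make automated, machine-generated metadata practical: the repository can emit one each time a micro-service runs.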
This process helps us determine if deliverables are being produced to an acceptable quality level and if the project processes used to manage and create the deliverables are effective and properly applied.
For both analog & born digital: Preserving metadata—managerial, technical, descriptive, structural, and administrative information embedded in the file and/or in a database that corresponds to the digital file.
• Evaluate current quality of your metadata
• Weigh significance of metadata elements
• Evaluate metadata form needs
• Set priorities/steps for filling metadata gaps
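The gap-filling step above can be bootstrapped with a simple completeness report. The required-element profile here is a hypothetical local choice, not one prescribed by the presentation.

```python
# Hypothetical local profile of required descriptive elements.
REQUIRED_ELEMENTS = ["title", "creator", "date", "rights"]

def metadata_gaps(records):
    """Report which required elements are missing or empty in each
    record (a dict of record id -> element dict), as input for
    prioritizing metadata clean-up."""
    gaps = {}
    for record_id, record in records.items():
        missing = [e for e in REQUIRED_ELEMENTS if not record.get(e)]
        if missing:
            gaps[record_id] = missing
    return gaps
```

Running this across a collection gives a per-record list of gaps, which can then be weighed by element significance before scheduling remediation.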
Implement a program to run checksums and other authentication methods to check the integrity of the files over time (analog & born digital).
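Such an integrity check can be sketched as a verifier that recomputes checksums against a stored register. The JSON register format and function names are assumptions for illustration; production repositories typically rely on purpose-built fixity tooling.

```python
import hashlib
import json
from pathlib import Path

def checksum(path, algorithm="sha256"):
    """Stream the file so large objects need not fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_fixity(register_path):
    """Recompute checksums for every file in a JSON fixity register
    ({"relative/path": "hex digest", ...}, paths relative to the
    register's directory) and report files that are missing or have
    changed since the register was written."""
    register_path = Path(register_path)
    register = json.loads(register_path.read_text())
    failures = {}
    for rel_path, expected in register.items():
        target = register_path.parent / rel_path
        if not target.is_file():
            failures[rel_path] = "missing"
        elif checksum(target) != expected:
            failures[rel_path] = "checksum mismatch"
    return failures
```

Scheduled regularly (for example, at each storage migration), an empty result means fixity held; any entry in the result is a candidate for restoration from a preservation copy.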
Archivematica implements a micro-service approach to digital preservation and provides a list of micro-services.
During ingest, digital objects are packaged into SIPs and run through several micro-services, including normalization, packaging into an AIP, and generation of a DIP.
If you would like to skip some of the default decision points or make preconfigured choices for your desired workflow, you can configure Archivematica to do so.
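The micro-service idea can be pictured as a pipeline of small, independent steps run in sequence over a package. The sketch below mimics that structure conceptually; the step names echo Archivematica's stages, but the code is an assumption for illustration, not Archivematica's implementation.

```python
def normalize(package):
    """Stand-in for normalization to preservation formats."""
    package["events"].append("normalization")
    return package

def package_aip(package):
    """Stand-in for packaging objects and metadata into an AIP."""
    package["events"].append("AIP created")
    package["aip"] = True
    return package

def generate_dip(package):
    """Stand-in for generating an access copy (DIP)."""
    package["events"].append("DIP generated")
    package["dip"] = True
    return package

# An ordered list of small, independent steps; preconfiguring the
# workflow (as Archivematica allows) amounts to editing this list.
MICRO_SERVICES = [normalize, package_aip, generate_dip]

def ingest(sip_name):
    """Run a SIP through each micro-service in order."""
    package = {"name": sip_name, "events": [], "aip": False, "dip": False}
    for service in MICRO_SERVICES:
        package = service(package)
    return package
```

The appeal of this design is that each step can fail, be retried, or be skipped independently, and each one is a natural place to emit a PREMIS event.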