A presentation given at the International Image Interoperability Framework event held at the New York Academy of Medicine in New York City on May 11, 2016.
Randy Stern
Harvard University
IIIF Pre-conference - Usability testing conducted on the UV and Mirador (Julien A. Raemy)
Usability research on the UV and Mirador in the context of a bachelor's thesis at the University of Applied Sciences in Geneva.
This presentation was given in the Vatican City during the IIIF Pre-conference on Monday the 5th of June 2017.
The document summarizes a presentation about IIIF-To-Go, a proposed product that would make it easy for libraries and cultural heritage institutions to implement the International Image Interoperability Framework (IIIF). It notes that current IIIF adopters have IT staff but lack time and resources, while non-adopters have even fewer resources. IIIF-To-Go aims to provide all the necessary components and tools for research, teaching, preservation, and sharing of digital resources using IIIF in one package. The product would benefit different user types, including those consuming, contributing, or serving IIIF content. Concerns were raised about ongoing support, technical variability, and technical ability, but the overall goal is to extend the IIIF community by addressing these barriers to adoption.
New approaches for data acquisition at Europeana: IIIF, sitemaps and schema.org (Nuno Freire)
Presentation on experiments at Europeana regarding new methods of aggregating metadata.
Presented at the Seminar Linked Data in Research and Cultural Heritage, on 1st of May 2017.
This document summarizes a presentation about using linked data to improve library discovery. It discusses linking library data to non-library data sources to provide a richer context about materials. It introduces key concepts of linked data like identifying entities, using URIs, and standard vocabularies. The presentation also provides examples of how linked data is being applied in library catalogs by connecting catalog records to sources like VIAF, DBpedia, and Wikidata.
International Image Interoperability Framework (IIIF): Journal Club Presentation (Julien A. Raemy)
Journal Club presentation about an IIIF article, given on 13 December 2016 at the "Haute école de gestion de Genève" (School of Business Administration in Geneva) during a seminar class on Web and Information and Communications Technology (ICT). The presentation was in four parts:
1. IIIF as a community
2. Journal Club
3. Showcases
4. Conclusion
Reference: SNYDMAN, Stuart, SANDERSON, Robert and CRAMER, Tom, 2015. The International Image Interoperability Framework (IIIF): A community & technology approach for web-based images. In: Archiving Conference. May 2015. p. 16–21.
Wikidata is a free and open knowledge base that anyone can edit to store structured data. It currently holds over 33.5 million items and has received 1.9 billion edits across 287 languages. Wikidata provides structured, collaborative, free, open, multilingual, and referenced data through its API, and licenses its data under CC0 to allow easy access and reuse. It supports projects like Wikipedia by providing integrated access to its data, and helps smaller languages and communities through micro-contributions. In 2015, Google's Freebase project moved its data to Wikidata, increasing its scope and ecosystem.
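The API mentioned above is the standard MediaWiki Wikibase endpoint; its `wbgetentities` action returns an item's labels, descriptions, and claims as JSON. As a minimal sketch, the following Python builds such a request URL (the item ID `Q42` in the comment is only an example):

```python
from urllib.parse import urlencode

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def entity_url(qid, lang="en"):
    """Build a wbgetentities request URL for a single Wikidata item."""
    params = {
        "action": "wbgetentities",
        "ids": qid,
        "props": "labels|descriptions|claims",
        "languages": lang,
        "format": "json",
    }
    return WIKIDATA_API + "?" + urlencode(params)

# e.g. entity_url("Q42") points at one item; fetching it with
# urllib.request.urlopen returns CC0-licensed JSON for reuse.
```

Fetching the URL is left out so the sketch stays self-contained; any HTTP client can retrieve and parse the JSON response.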
This document provides an overview of an introductory course on Web Science. It discusses key topics including:
1. What is Web Science and why it matters as an area of scientific study.
2. Key aspects of web architecture like URIs, URLs, HTTP, and file formats.
3. Methods of measuring the web through network analysis and studying structures like the blogosphere and social networks.
4. The Web Science Method which takes an iterative, mixed methods approach of engineering, measuring, and analyzing the web.
5. The social aspects of the web and challenges of incorporating human behavior.
6. Issues of web governance, security, and standards setting.
The document provides guidelines for publishing data as Linked Data. It discusses identifying appropriate data sources, reusing existing vocabularies and non-ontological resources, generating RDF data from relational databases or geometrical data using tools like R2O, ODEMapster and geometry2rdf, and publishing the data on the web by resolving URIs. The Ontology Engineering Group at Universidad Politécnica de Madrid has published Spanish geospatial and statistical data as part of projects like GeoLinkedData following these guidelines.
This presentation was given by Tim Thompson of Princeton University during the NISO Virtual Conference, BIBFRAME & Real World Applications for Linked Bibliographic Data, held on June 15, 2016.
This document provides an agenda and information about a tutorial on topic exploration using the HathiTrust Research Center (HTRC) Data Capsule. The agenda includes an overview of HTRC, an introduction to the Data Capsule and topic modeling, and hands-on sessions. Information is also provided about HTRC, including its mission to enable non-consumptive research on HathiTrust's digital library, its organizational structure, goals for the future, and important URLs.
The HathiTrust Research Center: Enabling New Knowledge Through Shared Infrastructure (Robert H. McDonald)
The presentation provided an overview of the HathiTrust Research Center (HTRC) and its services. HTRC provides access to over 13 million digitized book volumes and facilitates text mining and analysis through its extracted features dataset, data capsule, and other tools. It discussed challenges of text mining copyrighted works and demonstrated use cases using distant reading techniques. HTRC also works on outreach, education, and developing new interfaces and tools to enable scholarly research using its collections and infrastructure.
What is #LODLAM?! Understanding linked open data in libraries, archives [and museums] (Alison Hitchens)
This document provides an overview of linked open data (LOD) and the Resource Description Framework (RDF) and their applications in libraries, archives, and museums (LODLAM). It begins by defining linked data and how it extends standard web technologies to share structured data between computers. The document then discusses using structured, machine-readable data to describe resources like people, and how to structure this data using RDF. It provides examples of libraries and archives sharing controlled vocabularies, unique resources and holdings data as linked open data. The document concludes by reviewing current LODLAM projects and the potential for libraries and archives to both contribute and consume linked open data.
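The RDF structuring described here reduces every statement to a (subject, predicate, object) triple, with URIs naming entities and predicates. A minimal sketch in plain Python, where the VIAF identifier and the work URI are invented for illustration and FOAF is the real "friend of a friend" vocabulary namespace:

```python
# RDF statements are (subject, predicate, object) triples: URIs identify
# entities and properties, literals carry plain values. The VIAF id 12345
# and the example.org work URI are made up for illustration.
FOAF = "http://xmlns.com/foaf/0.1/"      # FOAF vocabulary namespace
person = "http://viaf.org/viaf/12345"    # hypothetical authority URI

triples = [
    (person, FOAF + "name", "Jane Austen"),
    (person, FOAF + "made", "http://example.org/work/pride-and-prejudice"),
]

def objects_of(subject, predicate):
    """Return every object asserted for a subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]
```

Because subjects and objects are URIs, a triple in one institution's dataset can point directly at an entity described in another's, which is the linking that LODLAM exploits.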
This document discusses the development of linked data and the semantic web over the past 13 years. It outlines how initially the goal was to build the semantic web as a precursor to use, but that approach changed to focus on publishing data so that people could start building applications using that data incrementally. Two examples are given of published linked data sets from the British Museum and LinkedBrainz. The document argues that linked data is now about enabling systems integration across different applications and domains. It also addresses concerns about publishing linked data leading to untrue claims, and introduces ResearchSpace, a platform for researchers to make annotated claims and arguments about GLAM data using linked data techniques.
Brief overview of linked data and RDF followed by use in libraries and archives. Originally delivered at OLITA Digital Odyssey 2014. Revised for the OLA Superconference 2015
The document discusses linked open data and its possibilities for libraries. It provides an overview of linked data, explaining how it uses standard web technologies to share structured data between applications. Examples are given of library data like catalog records and authority files being exposed as linked data. Current projects involving libraries consuming and sharing linked data are also summarized, though it is noted the field is still developing.
Envisioning Social Applications of Library Linked Data (Uldis Bojars)
This talk discusses two streams of innovation on the Web--the Social Web and Linked Data--and explains how bringing them together can move library services to the 21st century.
The core of the presentation will look at a few of the envisioned social use cases for library linked data: Social Annotation, Peer-to-Peer Bookswapping and Social Recommendations.
The goal is to create interest in combining new technologies and to start a discussion about how to bring these and similar use cases to fruition.
Presented at the ELAG-2012 conference: http://www.elag2012.com/
Open Education Challenge 2014: exploiting Linked Data in Educational Applications (Stefan Dietze)
Presentation from mentoring event of Open Education Europa Challenge (http://www.openeducationchallenge.eu/) about using Linked Data in educational applications.
Exploring the Networks in Open Public Data (Uldis Bojars)
This document summarizes Uldis Bojārs' presentation on exploring networks in open public data. Bojārs analyzed voting patterns of Latvian politicians using open data from parliamentary voting records. He preprocessed the data to create nodes for politicians and edges for similar voting connections. Different connection criteria revealed patterns, but the right threshold had to be found where patterns emerged without connections being too broad. Visualizing the results for experts could improve understanding of the data and networks. More open data and collaboration could enhance such network analysis and data visualization.
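The thresholding step Bojārs describes can be sketched in a few lines of Python; the members and vote vectors below are invented toy data, not the Latvian parliamentary records from the talk:

```python
# Members become nodes; an edge is drawn between two members when their
# voting records agree at least `threshold` of the time. The names and
# vote vectors are illustrative only.
votes = {
    "A": [1, 1, 0, 1, 0],
    "B": [1, 1, 0, 0, 0],
    "C": [0, 0, 1, 0, 1],
}

def agreement(a, b):
    """Fraction of roll calls on which two members voted the same way."""
    same = sum(x == y for x, y in zip(votes[a], votes[b]))
    return same / len(votes[a])

def edges(threshold):
    """All member pairs whose agreement meets the threshold."""
    names = sorted(votes)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if agreement(a, b) >= threshold]
```

Sweeping the threshold reproduces the effect noted in the talk: too low and every node connects to every other, too high and the graph falls apart; the interesting structure appears in between.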
Connected heritage: How should Cultural Institutions Open and Connect Data? (Mia)
Keynote for the International Digital Culture Forum 2017, Taichung, Taiwan, August 2017
I approach the question by describing the mechanisms organisations have used to open and connect data, then I look at some of the positive outcomes that resulted from their actions. This is not a technical talk about different acronyms, it's about connecting people to our shared heritage.
GLAMorous LOD and ResearchSpace introduction (Barry Norton)
This document discusses the development of ResearchSpace (RS), a platform that allows researchers to make claims by adding to and modifying data from cultural heritage institutions in a way that preserves canonical data. RS components include search, data annotation, image annotation, a "data basket" for collecting items, a dashboard, and conjunctive search. It also discusses fundamental relationships that can be represented in linked cultural data.
Chaos&Order: Using visualization as a means to explore large heritage collections (TimelessFuture)
Note: download the original PowerPoint to view the animations. Presented at the 4th International Alexandria Workshop, Foundations for Temporal Retrieval, Exploration and Analytics in Web Archives (19–20 October 2017).
This document describes how to use the SELECT CASE statement in Gambas to build compound conditions. It explains that SELECT CASE lets you evaluate multiple conditions on a single variable more cleanly than nesting IF/ELSE. As an example, it shows how to use SELECT CASE to lock down a text box so that only numbers and a few special keys, such as Backspace and Tab, are accepted. The reader learns to apply this pattern in a calculator project to validate the user's numeric input.
This document presents the curriculum vitae of Michael David Navas Noriega. It includes personal details such as name, date and place of birth, identity card number, age, marital status, and nationality. It also details his academic background, including the primary and secondary schools he attended and the general-science baccalaureate degree he earned.
The document discusses several heroes including Tim Tebow, Jesus Christ, Mother Teresa, Danny Thomas, Dolly Parton, and Taylor Swift. It describes how each person exemplifies heroic qualities like sacrifice, humility, compassion, and providing hope.
This document deals with the reuse of treated wastewater in agriculture in Colombia and its importance for socioeconomic development. It analyses the advantages and challenges of reusing treated wastewater in agriculture as a strategy for better managing water resources in the face of water scarcity. It also reviews international and national experience with wastewater treatment and use, as well as the legal frameworks in Colombia for the agricultural reuse of treated wastewater.
This slide deck describes how to create a custom role for an Oracle Fusion consultant.
In these slides we use Oracle Fusion Applications, Oracle Identity Manager (OIM), and the Authorization Policy Manager (APM) console of Oracle Entitlement Server.
Feel free to contact me any time if you have any problems with the information in these slides.
OOW16 - Migrating and Managing Customizations for Oracle E-Business Suite 12.2 (vasuballa)
Have you created custom schemas? Have you personalized or extended your Oracle E-Business Suite environment? Attend this Oracle Development session to learn about selecting the best upgrade approach for existing customizations. This session helps you understand the new customization standards required by the Edition-Based Redefinition feature of Oracle Database to be compliant with the Online Patching feature of Oracle E-Business Suite. You will learn about customization use cases, tools, and technologies you can use to ensure that all your customizations are preserved during and after the upgrade. You will also hear about reports you can run before the upgrade to detect and fix your customizations to make them compliant with Oracle E-Business Suite 12.2.
Edward Perello, CBO of Desktop Genetics, joins us at the Science: Disrupt London Session on Disruptor Stories to talk about machine learning, CRISPR, pivoting, and his startup story.
This document defines control, audit, and information systems. It explains that control is a managerial function, and management is required by law to establish internal controls. An audit objectively examines financial statements to ensure they accurately represent transactions. Information system audits test IT infrastructure controls. The COBIT framework provides best practices for IT governance and management. It links control objectives and practices to business processes and objectives. COBIT 5 is the latest version, which builds on previous versions and other frameworks to provide more holistic enterprise guidance.
The document is a story explaining why people get butterflies in their stomach when feeling nervous. It tells of a young girl named Sophie who swallowed a butterfly that laid magical eggs in her stomach. When the eggs hatched into caterpillars, she would feel them flutter around whenever she felt nervous. As she grew older, she realized this was why she had the strange feeling in her stomach when anxious. She eventually learned from a wise man that the magical butterfly story was true and this explained the butterfly sensation.
The document discusses the Harvard University Library's efforts to provide library resources and data through application programming interfaces (APIs) to enable access, reuse and innovation. It outlines several APIs that provide access to metadata, digital content, holdings and availability data. The goals are to support openness, interoperability and reuse of library data through standards-based web services APIs. Examples of apps built using these APIs are provided.
This document summarizes Harvard's integration of IIIF and Mirador to provide access to digital collections across the university. It describes how Harvard became interested in these technologies to support teaching, research, and digital access. The summary chronicles Harvard's involvement with IIIF from 2010 to present day, including launches of digital collections and tools using these standards. It highlights cross-campus collaboration between groups like the libraries, art museums, and academic technology services.
This document provides an agenda for introducing the International Image Interoperability Framework (IIIF). It summarizes the Image API and Presentation API, provides examples of compatible software implementations, and looks ahead to further development of the framework. The goal of IIIF is to create a global standard for delivering and displaying images in a way that is interoperable across institutions through common APIs.
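The interoperability rests on predictable URI syntax: the IIIF Image API addresses every image request with five path components after the identifier. A minimal sketch can assemble such a URI (the base URL and identifier below are placeholders, not a real image service):

```python
# Assemble a IIIF Image API request URI from its path components:
#   {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
# The defaults request the full image, unrotated, as a default-quality JPEG.
def iiif_image_url(base, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# e.g. the left half of an image at 50% scale:
# iiif_image_url("https://example.org/iiif", "page1",
#                region="pct:0,0,50,100", size="pct:50")
```

Because any compliant server understands the same path grammar, a viewer like Mirador can request tiles and derivatives from repositories it has never seen before.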
Oculus: Using Open APIs to Share Harvard’s Digitized Books and Manuscripts (kevin_donovan)
1) The document discusses the development of Oculus, a new open source digital book viewer being created by Harvard to replace its older Page Delivery Service (PDS) viewer.
2) Oculus is being built according to open standards like the IIIF APIs to allow Harvard's digitized content to be shared more widely and reused by others. It will also allow Harvard researchers to access content from other institutions.
3) By developing Oculus as an open source project with input from various Harvard departments and other universities, it is hoped that the tool will have ongoing community support and be interoperable across multiple archives.
This document provides an overview of digital humanities (DH), including brief definitions and history, examples of DH projects and tools, and the role of libraries in supporting DH. Some key points include:
- DH uses computational methods to study the humanities and involves activities like digitization of collections, text analysis, and data visualization.
- It has roots in earlier humanities computing projects from the 1940s-1970s and grew with text encoding standards, digital libraries and DH centers in the 1990s-2000s.
- Example projects include Mapping the Republic of Letters, digital archives of WWI poetry, and datasets on the transatlantic slave trade.
- Libraries support DH through digitization, technical skills, and project support.
IIIF for CNI Spring 2014 Membership Meeting (Tom-Cramer)
An overview of the International Image Interoperability Framework (IIIF) at the Coalition for Networked Information (CNI) Spring 2014 Meeting in St. Louis, MO.
Presented at the 2013 Annual Conference of the Council of American Jewish Museums (http://www.cajm.net/annual-conference). Based on the research exhibition "Case Study No. 3 | Sound Objects," created at The Magnes Collection of Jewish Art and Life, University of California, Berkeley, in 2012-2013 (http://bit.ly/sound-objects).
Interoperability in practice: a cross-repository image viewer (Mirador) (Stuart Snydman)
This document introduces Mirador, an open-source, community-driven image viewer that allows users to view and compare images from multiple online repositories that support the International Image Interoperability Framework (IIIF). Mirador provides a multi-window workspace to display images from different sources side by side for comparison. It is extensible and works with any IIIF-compliant repository. The document demonstrates how Mirador can benefit humanities scholars in tasks like manuscript analysis, creating critical editions, and studying medieval literature and books of hours. Future enhancements may include transcription viewing, annotation capabilities, and support for additional media types.
Web-scale Discovery Services are becoming an integral part of libraries' information gathering arsenal. These services are able to use a single interface to seamlessly integrate results from a wide range of online sources, emulating the experience patrons have come to expect from Internet search engines. But despite their ability to streamline searching, discovery services provide a wide set of challenges for libraries who implement them. This virtual conference will touch on both the potential of discovery services as well as some of the issues involved.
ALIAOnline Practical Linked (Open) Data for Libraries, Archives & Museums (Jon Voss)
This document discusses practical applications of Linked Open Data (LOD) for libraries, archives, and museums. It describes how LOD allows these institutions to publish structured data on the web in ways that are interoperable and can be connected to other open datasets. Examples are given of how LOD is being used by various institutions to share metadata, images, and other cultural heritage assets on the web in open, machine-readable formats. The presenter argues that LOD represents a new paradigm that these cultural organizations should embrace to make their collections more accessible and useful on the web.
Rebecca Grant - DH research data: identification and challenges (DH2016) (dri_ireland)
Presentation made by Rebecca Grant as part of the panel session “Digital data sharing: the opportunities and challenges of opening research” at the Digital Humanities conference, Krakow, 15 July 2016. This paper “DH research data: identification and challenges” provided an introduction to concepts of research data in the digital humanities, including accepted definitions of what constitutes research data in a DH context.
This presentation was provided by Rachel Vacek of the University of Michigan during the NISO webinar, Library as Publisher, Part Two, held on March 14, 2018.
Challenges and Opportunities in Customizing Library Repository User InterfacesRachel Vacek
This presentation will dive into the ongoing challenges that academic libraries often face when improving the user experiences of out-of-the-box and open source repositories. Fueling the challenges are the ambiguity and fast-changing nature within the field of digital scholarship and the constant flux of technology platforms and tools. Fortunately, many libraries are paying more attention to users’ motivations and responding by designing user interfaces that support particular formats and contexts. We’ll explore emerging opportunities with repositories in looking at how far libraries should go in providing customizations to balance stakeholder and user needs, and how to plan for users’ ever-shifting expectations.
This presentation was part of a NISO and NASIG webinar, "Library As Publisher, Part Two: UX and UI for the Library's Digital Collections" and was presented on March 14, 2018.
Digital Libraries: the process, initiatives and developmental issues in India...Sudesh Sood
This document summarizes digital library initiatives and development issues in India. It discusses objectives of digital libraries like enhancing collections, using standards, and maximizing access. It outlines workflows for content selection, publishing, and delivery. Key metadata standards and protocols used in India include Dublin Core and OAI-PMH. Popular open source digital library software in India includes Greenstone, DSpace, and EPrints. The document then summarizes several digital repositories, open courseware initiatives, open access journals, and metadata harvesting services in India. It concludes by noting both accomplishments and ongoing issues around infrastructure, skills, and copyright that impact digital library development in India.
Ready to "level up" your digital humanities (DH) game? DH offers theological librarians new opportunities to collaborate with their communities. Drawing on our experience with a graduate seminar in DH at Vanderbilt Divinity School, we discuss how to equip librarians to foster digital scholarship in areas such as digital textual editions, geospatial apps, open access publishing, and network analyses. Discover how DH transforms faculty and librarian relations from a service model to a partnership model.
Digital collections and humanities researchHarriett Green
This document summarizes key findings from a study about what digital collections and features humanities scholars want. It found that scholars most commonly use texts, images, audio and video in their research. They want robust metadata and searchability. Features like annotation tools, ability to export materials, and access on multiple devices were also important. While many current collections are useful, scholars desire more access to obscure materials and seamless access across collections. Interoperability between collections and customized access pages could help meet future needs.
As part of the ALIA professional development series - "What's your job title mean?" - this presentation describes what's involved working with Informatics in Digital Humanities & Education at the University of Melbourne.
Modernisation of library resources emerging trends for engineering collegesH Anil Kumar
The document discusses trends in engineering college libraries and recommendations for an ideal library. It outlines emerging trends like the shift from ownership to access of resources, open systems, niche collections, and discovery services. Recommendations include a minimum space of 5,000 sq ft, subscriptions to journals and e-book databases, an institutional repository, 4 qualified library staff, and adoption of open-source software. The library should be available 24/7 and connect users to resources beyond the local collection.
IIIF as an Enabler to Interoperability within a Single Institution
1. IIIF as an Enabler to Interoperability within a Single Institution
Randy Stern
Harvard University IT - Library Technology Services
Jeff Emanuel, Jud Harward, Rashmi Singhal
Harvard University IT – Academic Technology Services
Jeff Steward
Harvard Art Museums
IIIF Conference – May 11, 2016
4. Use Cases
• The Library: An updated page viewer for the Digital Repository Service – with smooth zoom/pan, 2-page view, etc.
• HarvardX: Embedded, annotated display of Harvard Library images in HarvardX courses delivered on the edX platform
• Canvas course platform: Display and comparison of Library and faculty-uploaded images in on-campus course web sites
• Harvard Art Museums: Create online exhibits and digital tours with museum image content from the digital repository
5. IIIF – to the rescue
• A common API
• Opens Harvard library digital content for reuse over the Web
• Allows Harvard to reuse external content
• Breaks down silos within Harvard, and enables reuse of content
6. Mirador – a IIIF-enabled image viewer for the university
7. Open APIs – Harvard entities can reuse and embed each other's content
Content types and their APIs:
• Metadata (titles, authors, subjects, etc.) – IIIF Presentation API
• Digital images – IIIF Image API
• Related authorities (names, places) – Linked Open Data
• Annotations & transcriptions – Open Annotation
Data sources: Library, Canvas/edX courses, Museum TMS databases, course image sets, Linked Data for Libraries, CATCH annotation store
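These open APIs come together in a IIIF Presentation manifest, which ties descriptive metadata to canvases whose images are served through the Image API. A minimal sketch in Python follows; all identifiers and URLs in it are illustrative placeholders, not Harvard's actual endpoints:

```python
import json

# Minimal IIIF Presentation 2.x manifest sketch: one canvas whose image is
# served via the IIIF Image API. All URLs here are hypothetical examples.
manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "https://example.org/iiif/book1/manifest",
    "@type": "sc:Manifest",
    "label": "Example Book",  # descriptive metadata: title, authors, etc.
    "sequences": [{
        "@type": "sc:Sequence",
        "canvases": [{
            "@id": "https://example.org/iiif/book1/canvas/p1",
            "@type": "sc:Canvas",
            "label": "p. 1",
            "height": 2000,
            "width": 1500,
            "images": [{
                # images are attached to canvases via Open Annotation
                "@type": "oa:Annotation",
                "motivation": "sc:painting",
                "resource": {
                    "@id": "https://example.org/iiif/p1/full/full/0/default.jpg",
                    "@type": "dctypes:Image",
                    "service": {
                        "@context": "http://iiif.io/api/image/2/context.json",
                        "@id": "https://example.org/iiif/p1",  # Image API base URI
                        "profile": "http://iiif.io/api/image/2/level2.json",
                    },
                },
                "on": "https://example.org/iiif/book1/canvas/p1",
            }],
        }],
    }],
}

print(json.dumps(manifest, indent=2))
```

Any IIIF-compliant viewer such as Mirador can load a manifest shaped like this, which is what lets the same content appear in the library viewer, course platforms, and museum tools alike.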
8. Annotations – a further opportunity
The International Image Interoperability Framework
Now
• Enhance teaching through faculty commentary
• Enhance learning through student discussions
• Record private observations
Future
• Enhance research through collaborative annotation
• On manuscripts for textual criticism
• On images of art objects for conservation and publication
• On collections of visual materials to create a research corpus
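A faculty comment or student observation of the kind listed above can be sketched as an Open Annotation document targeting a region of a IIIF canvas. The URLs, the `xywh` region, and the comment text below are hypothetical:

```python
import json

# Sketch of a textual comment annotation on a region of a IIIF canvas,
# in the Open Annotation style used by IIIF Presentation 2.x.
annotation = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@type": "oa:Annotation",
    "motivation": "oa:commenting",
    "resource": {
        "@type": "cnt:ContentAsText",
        "format": "text/plain",
        "chars": "Note the rubrication in the initial.",
    },
    # The xywh fragment selects the annotated region of the canvas, in pixels.
    "on": "https://example.org/iiif/book1/canvas/p1#xywh=100,200,400,300",
}

print(json.dumps(annotation, indent=2))
```

A store such as the CATCH annotation store mentioned earlier would persist documents of roughly this shape and serve them back to viewers for display alongside the image.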
10. Image Media Management LTI application
Goals
• Filling a gap in Canvas
• Faculty-contributed content
• Shared through IIIF
Future
• Seamlessly import image media from any IIIF-compliant repository
• Further define, and expand access to, annotation capability
11. Image Media Management LTI application
Goals
• Filling a gap in Canvas
• Faculty-contributed content
• Shared through IIIF
12. Harvard Museums
Goals
– Enhance desire to view physical objects
– Expand options for comparative images in the digital tours platform
– Prove museum data is interoperable
Future
– 3D object viewing
– 3D object virtual reconstruction
– Viewing of complex living documents like curatorial object files and archives
14. So now…
• “The Book” is live on edX using images from the library digital repository
• The library IIIF service exposes millions of images
– http://ids.lib.harvard.edu/ids/iiif/5981214/0,0,1200,1200/full/full/native.jpg
• The library has a new book viewer for the digital repository
– http://iiif.lib.harvard.edu/manifests/view/drs:5981093$7b
• Faculty and teaching staff can upload, curate, share, and display IIIF-compliant images in their online and residential courses
The Book: Histories Across Time and Space – Drawing on the rich collections of Harvard’s libraries and museums, learners are invited to explore the book not simply as a container of content, but as a meaningful physical object that has shaped the way we understand the world around us.
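The image URL on the slide above is an instance of the IIIF Image API path pattern {identifier}/{region}/{size}/{rotation}/{quality}.{format}. A small helper sketch follows; the Harvard base URL is taken from the slide, while the function name and the example parameters are illustrative, using IIIF Image API 2.0 defaults:

```python
# Build a IIIF Image API request URL from its path parameters:
# {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
def iiif_image_url(base, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Harvard's image service base, as shown on the slide.
BASE = "http://ids.lib.harvard.edu/ids/iiif"

# Full image at full size.
full_image = iiif_image_url(BASE, "5981214")
# → http://ids.lib.harvard.edu/ids/iiif/5981214/full/full/0/default.jpg

# A 1200×1200-pixel crop from the top-left corner, scaled to 300px wide.
detail = iiif_image_url(BASE, "5981214", region="0,0,1200,1200", size="300,")
# → http://ids.lib.harvard.edu/ids/iiif/5981214/0,0,1200,1200/300,/0/default.jpg
```

Because any IIIF-compliant client can compute these URLs, viewers like Mirador can request exactly the crops and sizes they need without server-side coordination; this is the interoperability the common API provides.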
15. And it’s being more widely adopted
• HarvardX and Harvard’s Academic Technology Group are developing additional LTI image display and annotation tools for use in the Canvas and edX course platforms
• The Harvard Art Museums have deployed a beta of IIIF for image access and viewing and have embedded Mirador in a digital tour builder platform
16. IIIF Interoperability at Harvard
[Diagram: Library IT, Digital Humanities faculty, HarvardX, Academic Technology, Harvard Library, and the Harvard Art Museums, connected through IIIF (open access) and Mirador (open source). Outcomes shown: Mirador embedded in “The Book”, a page turner for the library Digital Repository, IIIF APIs exposing 100,000 book objects to the world, an image viewer for course web sites, and an image viewer for the Art Museums.]
17. Brief Chronology
• 2010 – Harvard library technologists tracking IIIF
• 2012 – Focus group of faculty, academic technology, library staff, and library technology assesses options for a new page turner
• 2012 – Harvard commits $40M and funds a IIIF developer
• 2013 – “The Book: Histories Across Time and Space” begins development
• 2014 – Harvard Library IIIF services for 120,000 books and manuscripts (5M page images) plus 10M still images
• 2015 – “The Book” launches; Art Museums IIIF manifest service for 250,000 art objects
• 2016 – 3 Mirador-based apps launch: Harvard Library Viewer, Image Media Management LTI-Canvas app, Art Museums digital tour builder
18. Harvard IIIF collaborators
Harvard Faculty
• Prof. Jeffrey Hamburger (working grp, The Book)
• Prof. Afsaneh Najmabadi (working grp)
• Prof. Peter Der Manuelian (working grp)
• Prof. Suzanne Blier (working grp)
• Prof. Dan Smail (The Book)
• Prof. Ann Blair (The Book)
• Prof. Leah Price (The Book)
• Prof. Thomas Kelly (The Book)
• Prof. Beverly Kienzle (The Book)
Harvard Academic Technology Services
• Jud Harward, Dir. of Research Computing in the Arts and Humanities
• Jeff Emanuel, Assoc. Dir. of Academic Technology
• Rashmi Singhal, Senior Software Engineer and co-lead developer of Mirador (with Drew Winget of Stanford)
• Arthur Barrett, Sr. Software Engineer
• Jazahn Clevenger, Instructional Software Developer
• Brandon Bentley, Sr. Instructional Technologist
• Alan Wolf, Managing Director
HarvardX
• Samantha Earp, Executive Director
• Robert Lue, Faculty Director
Harvard Library
• Franziska Frey, Associate Librarian for Preservation, Conservation and Digital Imaging
• William Stoneman, Curator of Early Books & Manuscripts
• Wendy Gogel, Manager of Digital Content and Projects
• Kate Bowers, Collections Services Archivist
• Barbara Meloni, Public Services Archivist
• Kerry Masteller, Reference and Digital Program Librarian
• Mary Clare Altenhofen, Librarian for the Fine Arts Library
Harvard Library IT
• Tracey Robinson, Managing Director
• Randy Stern, Dir. of Systems Development
• Chip Goines, Senior Developer, IIIF and Mirador
• Dave Mayo, Developer, Mirador
• Janet Taylor, Usability Librarian
• Julie Wetherill, Systems Librarian
19. Continued…
Harvard Art Museums
• Jeff Steward, Director of Digital Infrastructure and Emerging Technology
• Tom Lentz, Director Emeritus
Harvard Academic Technology Group
• Mike Hilborn, Assoc. Dir. of Academic Technology Development
• Annie Rota, Director of Academic Technology
Harvard has been able to leverage the promise of interoperable APIs by replicating the IIIF/Mirador design pattern across multiple functional areas that share core Image API and digital repository services. By sharing knowledge, expertise, digital content, and Mirador, multiple “heads” have sprouted: a viewer application for the HarvardX course “The Book”, a new Harvard Library Viewer, faculty image collections embedded in the Canvas course platform, and walls of images in the Harvard Art Museums. What did it take to enable this level of collaboration in a large, distributed organization?
Not just silos – worse! – some silos were broken: an ancient book viewer, a course platform migration that left tools behind…
Interoperability through common APIs
Mirador – a platform for bringing together multiple sources
The academic technology group (ATG) identified a need among faculty, teaching staff, and students for the ability to manage and access images and associated metadata for teaching and learning. Harvard’s previous learning management system, called “iSites,” had a feature called the “Slide Tool” that handled this requirement. Canvas, however, lacks such a feature. ATG took advantage of this gap in Canvas’s coverage to develop a IIIF-compliant, interoperable (LTI) application for the management and display of images.
This application supports IIIF for image and collection access, and seamlessly integrates with Mirador, IIIF-compliant digital collections, and LTI-compliant learning management systems, including both Canvas and edX, all of which are used for teaching and learning at Harvard.
This is a screenshot of a user-created digital tour showing comparative images in the Mirador viewer module of the Harvard Art Museums digital tour builder platform. The images come from two different repositories: the image on the left is from the Harvard Art Museums collections (http://www.harvardartmuseums.org/collections/object/156951), and the image on the right is from the Digital Commonwealth (https://www.digitalcommonwealth.org/search/commonwealth:bk128t37w).