This presentation discusses problems related to big data cleanup. It covers various approaches at the University of Auckland Libraries and Learning Services and gives two projects as examples.
Crediting informatics and data folks in life science teams (Carole Goble)
Science Europe LEGS Committee: Career Pathways in Multidisciplinary Research: How to Assess the Contributions of Single Authors in Large Teams, 1-2 Dec 2015, Brussels
The People Behind Research Software crediting from the informatics, technical point of view
Distributed Person Data
Violeta Ilik, Digital Innovations Librarian, Northwestern University Feinberg School of Medicine, Galter Health Sciences Library, Chicago
Starting from scratch – building the perfect digital repository (Violeta Ilik)
By establishing a digital repository on the Feinberg School of Medicine (FSM), Northwestern University, Chicago campus, we anticipate gaining the ability to create, share, and preserve attractive, functional, and citable digital collections and exhibits. Galter Health Sciences Library did not have a repository as of November 2014. In just a few months we formed a small team charged with selecting the most suitable open source platform for our digital repository software. We followed the National Library of Medicine master evaluation criteria by looking at various factors that included: functionality, scalability, extensibility, interoperability, ease of deployment, system security, physical environment, platform support, demonstrated successful deployments, system support, strength of development community, stability of development organization, and strength of technology roadmap for the future. These factors are important for our case considering the desire to connect the digital repository with another platform that was an essential piece in the big FSM picture – VIVO. VIVO is a linked data platform that serves as a researchers' hub and provides the names of researchers from academic institutions along with their research output, affiliation, research overview, service, background, researcher identities, teaching, and much more.
Integrating with others: Stable VIVO URIs for local authority records; linkin... (Violeta Ilik)
Integrating with others: Stable VIVO URIs for local authority records; linking to VIAF; ORCID organizational identifiers; W3C Dataset ontology work by Melissa Haendel & Violeta Ilik, VIVO Implementation Fest, Durham NC, March 20, 2014
What do MARC, RDF, and OWL have in common? (Violeta Ilik)
It is understood that in the current library ecosystem, catalogers must be willing to adapt to the new semantic web environment while keeping in mind the crucial library mission: providing efficient access to information. How can catalogers transform their jobs in order to enable library users to retrieve information more effectively in the age of the semantic web?
Researchers have argued that catalogers have the fundamental skills to successfully work with and repurpose the metadata originally created for use in traditional library systems by utilizing various programming languages. In the new environment their jobs will require new tools and new systems, but the basic skills of organizing information, knowledge of commonly used access points, and an ever-growing knowledge of information technology systems will remain the same. This presentation will stress the role of catalogers in bringing data silos down and in merging, augmenting, and creating interoperable data that can be used not just in library-specific systems but in various other systems. Catalogers' indispensable knowledge of controlled vocabularies, authority aggregators, metadata creation, metadata reuse, taxonomies, and data stores makes it all possible.
We will demonstrate how catalogers' knowledge can be leveraged to design an institutional repository and/or a researcher profiling system, create semantic-web-compliant data, create ontologies, utilize unique identifiers, and (re)use data from legacy systems.
NISO Webinar:
Experimenting with BIBFRAME: Reports from Early Adopters
About the Webinar
In May 2011, the Library of Congress officially launched a new modeling initiative, the Bibliographic Framework Initiative, as a linked data alternative to MARC. The Library then announced the proposed model, called BIBFRAME, in November 2012. Since then, the library world has been moving from mainly theorizing about the BIBFRAME model to practical experimentation and testing. This experimentation is iterative, and it continues to shape the model so that it is stable enough and broadly acceptable enough for adoption.
In this webinar, several institutions will share their progress in experimenting with BIBFRAME within their library system. They will discuss the existing, developing, and planned projects happening at their institutions. Challenges and opportunities in exploring and implementing BIBFRAME in their institutions will be discussed as well.
Agenda
Introduction
Todd Carpenter, Executive Director, NISO
Experimental Mode: The National Library of Medicine and experiences with BIBFRAME
Nancy Fallgren, Metadata Specialist Librarian, National Library of Medicine, National Institutes of Health, US Department of Health and Human Services (DHHS)
Exploring BIBFRAME at a Small Academic Library
Jeremy Nelson, Metadata and Systems Librarian, Colorado College
Working with BIBFRAME for discovery and production: Linked data for Libraries/Linked Data for Production
Nancy Lorimer, Head, Metadata Dept, Stanford University Libraries
It Takes a Village to Grow ORCIDs on Campus: Establishing and Integrating Uni... (Violeta Ilik)
This presentation describes the integration of ORCID identifiers into the open source Vireo electronic theses and dissertations (ETD) workflow, the university's digital repository, and the internally-used VIVO profile system.
Presented at Texas Conference on Digital Libraries (TCDL) 2014:
https://conferences.tdl.org/tcdl/index.php/TCDL/TCDL2014/schedConf/program
This presentation was provided by Jackie Shieh of The Smithsonian Libraries, during the NISO webinar "Implementing Linked Library Data," held on November 13, 2019.
This presentation was provided by Jean Godby of The OCLC Online Computer Library Center, during the NISO webinar "Implementing Linked Library Data," held on November 13, 2019.
This presentation was provided by Abigail Sparling and Adam Cohen of The University of Alberta Library, during the NISO webinar "Implementing Linked Library Data," held on November 13, 2019.
This presentation was given by Michael Lauruhn of Elsevier Labs during the NISO Virtual Conference, BIBFRAME & Real World Applications of Linked Bibliographic Data, held on June 15, 2016.
Cultural Heritage Institutions and Big Data Collections (lljohnston)
Data is not just generated by satellites, identified during experiments, or collected during surveys. Datasets are not just scientific and business tables and spreadsheets. We have Big Data in our Libraries, Archives and Museums, and we are managing and preserving those collections for research use. Presentation given at the 2013 Wolfram Data Summit.
This paper surveys the landscape of linked open data projects in cultural heritage, examining the work of groups from around the world. Traditionally, linked open data has been ranked using the five-star method proposed by Tim Berners-Lee. We found this ranking to be lacking when evaluating how cultural heritage groups not merely develop linked open datasets, but find ways to use linked data to augment user experience. Building on the five-star method, we developed a six-stage life cycle describing both dataset development and dataset usage. We use this framework to describe and evaluate fifteen linked open data projects in the realm of cultural heritage.
Who's the Author? Identifier soup - ORCID, ISNI, LC NACO and VIAF (Simeon Warner)
Identifiers, including ORCID, ISNI, LC NACO and VIAF, are playing an increasing role in library authority work. We'll describe changes to cataloging practices to leverage identifiers. We'll then tell a short story of the how and why of ORCID identifiers for researchers, and their relationships with other person identifiers. Finally, we'll discuss the use of identifiers as part of moves toward linked data cataloging being explored in Linked Data for Libraries work (in the LD4L Labs and LD4P projects).
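As a small illustration of how these person identifiers interrelate in practice, here is a minimal sketch that pulls the other identifiers asserted on a public ORCID record via ORCID's public API; the response structure shown is an assumption to verify against the current API documentation.

```python
# Sketch: fetch a researcher's public ORCID record and list the other person
# identifiers (e.g. ISNI, Scopus Author ID) asserted on it. The response
# structure is an assumption to verify against the ORCID public API docs.
import requests

orcid = "0000-0002-1825-0097"   # example iD used in ORCID's own documentation
resp = requests.get(f"https://pub.orcid.org/v3.0/{orcid}/person",
                    headers={"Accept": "application/json"}, timeout=30)
resp.raise_for_status()
person = resp.json()
ext_ids = (person.get("external-identifiers") or {}).get("external-identifier", [])
for ext in ext_ids:
    print(ext.get("external-id-type"), ext.get("external-id-value"))
```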
Engaging Information Professionals in the Process of Authoritative Interlinki... (Lucy McKenna)
Through the use of Linked Data (LD), Libraries, Archives and Museums (LAMs) have the potential to expose their collections to a larger audience and to allow for more efficient user searches. Despite this, relatively few LAMs have invested in LD projects and the majority of these display limited interlinking across datasets and institutions. A survey was conducted to understand Information Professionals' (IPs') position with regards to LD, with a particular focus on the interlinking problem. The survey was completed by 185 librarians, archivists, metadata cataloguers and researchers. Results indicated that, when interlinking, IPs find the process of ontology and property selection to be particularly challenging, and LD tooling to be technologically complex and unsuitable for their needs.
Our research is focused on developing an authoritative interlinking framework for LAMs with a view to increasing IP engagement in the linking process. Our framework will provide a set of standards to facilitate IPs in the selection of link types, specifically when linking local resources to authorities. The framework will include guidelines for authority, ontology, and property selection, and for adding provenance data. A user interface will be developed to direct IPs through the resource interlinking process as per our framework. Although there are existing tools in this domain, our framework differs in that it will be designed with the needs and expertise of IPs in mind. This will be achieved by involving IPs in the design and evaluation of the framework. A mock-up of the interface has already been tested and adjustments have been made based on the results. We are currently working on developing a minimum viable product to allow for further testing of the framework. We will present our updated framework, interface, and proposed interlinking solutions.
Creating Sustainable Communities in Open Data Resources: The eagle-i and VIVO... (Robert H. McDonald)
This is the slide deck for my ACRL 2015 TechConnect presentation with Nicole Vasilevsky (OHSU). For more on the program see http://bit.ly/1xcQbCr.
In 2012, the University of Idaho Library began implementing VIVO, an open-source Semantic Web application, both as a discovery layer for its fledgling institutional repository and as a database to describe, visualize, and report university research activity. The presenters will detail some of the challenges they encountered developing this resource, while discussing the tools and techniques they used for obtaining, editing, and uploading institutional data into the RDF-based VIVO system.
We describe current work in federating data from institutional research profiling systems – providing single-point access to substantial numbers of investigators through concept-driven search, visualization of the relationships among those investigators, and the ability to interlink systems into a single information ecosystem.
OCLC Research @ U of Calgary: New directions for metadata workflows across li... (OCLC Research)
Presentation used as scene-setting for two days' worth of discussion around library, archive & museum convergence, metadata workflows, and single search at the University of Calgary.
Reuse of Structured Data: Semantics, Linkage, and Realization (andrea huang)
In order to increase the reuse value of existing datasets, it is now becoming general practice to add semantic links among the records in a dataset, and to link these records to external resources. The enriched datasets are published on the web for both humans and machines to consume and re‐purpose.
In this paper, we make use of publicly available structured records from a digital archive catalogue, and we demonstrate a principled approach to converting the records into semantically rich and interlinked resources for all to reuse. While exploring the various issues involved in the process of reusing and re‐purposing existing datasets, we review the recent progress in the field of Linked Open Data (LOD), and examine twelve well‐known knowledge bases built with a Linked Data approach. We also discuss the general issues of data quality, metadata vocabularies, and data provenance. The concrete outcomes of this research work are the following:
(1) a website, data.odw.tw, that hosts more than 840,000 semantically enriched catalogue records across multiple subject areas;
(2) a lightweight ontology, voc4odw, for describing data reuse and provenance, among others; and
(3) a set of open source software tools available to all to perform the kind of data conversion and enrichment we did in this research. We have used and extended CKAN (the Comprehensive Knowledge Archive Network) as a platform to host and publish Linked Data. Our extensions to CKAN are open sourced as well.
As the records we drew from the original catalogue are released under Creative Commons licenses, the semantically enriched resources we now re‐publish on the Web are free for all to reuse as well.
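For a concrete feel of the enrichment step described, here is a minimal sketch in Python with rdflib, turning one flat catalogue record into RDF with a link to an external authority. The record fields, base URI, and vocabulary choices are illustrative only, not the paper's actual pipeline (which centers on CKAN and the voc4odw ontology).

```python
# Minimal sketch: turn one flat catalogue record into RDF with a link to an
# external authority. Fields, base URI, and vocabulary use are illustrative.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

ODW = Namespace("http://data.odw.tw/resource/")           # hypothetical base URI
record = {
    "id": "item-000123",                                   # made-up record
    "title": "Hand-tinted landscape photograph",
    "creator_uri": "http://viaf.org/viaf/12345",           # external authority link
}

g = Graph()
subject = ODW[record["id"]]
g.add((subject, RDF.type, DCTERMS.BibliographicResource))
g.add((subject, DCTERMS.title, Literal(record["title"])))
# The enrichment step: assert an external URI instead of a bare creator string.
g.add((subject, DCTERMS.creator, URIRef(record["creator_uri"])))

print(g.serialize(format="turtle"))
```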
NISO access related projects (presented at the Charleston conference 2016) (Christine Stohn)
Presentation by Pascal Calarco (University of Windsor), Christine Stohn (Ex Libris/ProQuest), John G. Dove (Paloma Associates), covering NISO D2D work, ResourceSync, KBART and KBART automation, ODI (Open Discovery Initiative), Link origin tracking, ALI (Access and License Indicators), and a discussion around improvements and challenges for open access discovery
Digital Library Infrastructure for a Million Books (Steve Toub)
Describes what library infrastructure is needed for digital humanities use of mass digitized collections. Given at the Million Books Workshop, May 2007.
Towards an Open Research Knowledge Graph (Sören Auer)
The document-oriented workflows in science have reached (or already exceeded) the limits of adequacy as highlighted for example by recent discussions on the increasing proliferation of scientific literature and the reproducibility crisis. Now it is possible to rethink this dominant paradigm of document-centered knowledge exchange and transform it into knowledge-based information flows by representing and expressing knowledge through semantically rich, interlinked knowledge graphs. The core of the establishment of knowledge-based information flows is the creation and evolution of information models for the establishment of a common understanding of data and information between the various stakeholders as well as the integration of these technologies into the infrastructure and processes of search and knowledge exchange in the research library of the future. By integrating these information models into existing and new research infrastructure services, the information structures that are currently still implicit and deeply hidden in documents can be made explicit and directly usable. This has the potential to revolutionize scientific work because information and research results can be seamlessly interlinked with each other and better mapped to complex information needs. Also research results become directly comparable and easier to reuse.
Questioning Authority Lookup Service: Linking the Data (Simeon Warner)
One segment of a presentation "From idea to implementation: BIBFRAME becomes reality", Charleston, 2022
The implementation of BIBFRAME in active cataloguing workflows and linked data exchange environments is live and it’s evolving across several paths that are often intertwined. This complex bibliographic ecosystem consists of many experiences that the speakers will present highlighting their value both as autonomous endeavours, as well as from the perspective of interaction and options for mutual integration.
The Library of Congress, with the BIBFRAME original cataloguing editor, Marva, will report about developments and achievements for bringing BIBFRAME into practice in a very large library environment with many cataloguing workflows for diverse types of resources, encompassing the use of and adjustments to the BIBFRAME ontology and its modelling.
On the topic of original and copy cataloguing in linked data, Stanford and Cornell Universities are working to achieve a dynamic form of cataloguing through the implementation of Sinopia linked data editor and enrichment tools such as the Questioning Authority that queries authoritative sources to support linked data authorities.
Regarding the impact of linked data processes on the user experience, the University of Pennsylvania has contributed a study describing the functionalities and scenarios which the Share-VDE 2.0 entity discovery system https://www.svde.org/ addresses, and the ways in which user feedback is supporting the evolution of linked data discovery.
Share-VDE (SVDE) is an international library-driven initiative which brings together the bibliographic catalogues and authority files of a community of libraries in an innovative entity discovery environment based on linked data. A path towards the integration of SVDE with the local library services at the University of Pennsylvania and with the Sinopia environment is ongoing. Being a linked open data node, SVDE supports various levels of interoperability and also provides additional tools like the J.Cricket entity editor based on BIBFRAME that opens up new forms of cooperation among libraries to manage and maintain linked data entities.
OCFL: A Shared Approach to Preservation Persistence (Simeon Warner)
A lightning talk at the CNI Fall Forum 2022: The Oxford Common File Layout (OCFL) is an application-independent method for storing and versioning content for digital preservation. Version 1.1 was released in October 2022, including backwards compatible corrections and clarifications based on implementation experience and community feedback. The session will recap goals, summarize changes in v1.1, and survey current implementations.
The Oxford Common File Layout: A common approach to digital preservation (Simeon Warner)
The Oxford Common File Layout (OCFL) specification began as a discussion at a Fedora/Samvera Camp held at Oxford University in September of 2017. Since then, it has grown into a focused community effort to define an open and application-independent approach to the long-term preservation of digital objects. Developed for structured, transparent, and predictable storage, it is designed to promote sustainable long-term access and management of content within digital repositories. This presentation will focus on the motivations and vision for the OCFL, explain key choices for the specification, and describe the status of implementation efforts.
Introduction to the International Image Interoperability Framework (IIIF) (Simeon Warner)
Introduction to the International Image Interoperability Framework (IIIF), Tutorial at Library Network Days, National Library of Finland, Helsinki, 2017-10-26
Mind the gap! Reflections on the state of repository data harvesting (Simeon Warner)
A 24x7 presentation at Open Repositories 2017 in Brisbane, Australia.
I start with an opinionated history of the evolution of repository data harvesting from the late 1990s to the present. A conclusion is that we are currently in danger of creating a repository environment with fewer cross-repository services than before, with the potential to reinforce the silos we hope to open. I suggest that the community needs to agree upon a new solution, and further suggest that the solution should be ResourceSync.
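To ground the suggestion, here is a minimal sketch of the client side of ResourceSync-based harvesting: fetching a resource list (a sitemap with ResourceSync extensions) and enumerating what a harvester would synchronize. The endpoint is hypothetical and the handling is deliberately simplified.

```python
# Sketch: client side of ResourceSync harvesting. Fetch a resource list
# (a sitemap with rs: extensions) and enumerate the resources a harvester
# would synchronize. The endpoint is hypothetical; handling is simplified.
import requests
import xml.etree.ElementTree as ET

NS = {
    "sm": "http://www.sitemaps.org/schemas/sitemap/0.9",
    "rs": "http://www.openarchives.org/rs/terms/",
}

resp = requests.get("https://repo.example.org/resourcelist.xml", timeout=30)
resp.raise_for_status()
root = ET.fromstring(resp.content)

md = root.find("rs:md", NS)
print("capability:", md.get("capability") if md is not None else "unknown")
for url in root.findall("sm:url", NS):
    loc = url.findtext("sm:loc", namespaces=NS)
    lastmod = url.findtext("sm:lastmod", namespaces=NS)
    print(loc, lastmod)   # a real client compares lastmod against local state
```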
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the 30 May 2024 DASA Connect conference. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants tried to find different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I wondered, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
LKG Editor Dev
1. Library Knowledge Graph Editor Development
Simeon Warner (Cornell)
https://orcid.org/0000-0002-7970-7855
Reporting work from the LD4P2 project including contributions from: Steven Folsom, Huda Khan, Lynette Rayle, Jason Kovari, Tim Worrall (Cornell), Astrid Usong (Stanford), David Eichmann (Iowa), and others…
US2TS 2019, March 11-13, Duke University, Durham, NC
3. Library Cataloging Background
Many practices developed in the era of card catalogs; the MARC format was developed in the 1960s.
Long history of linking entities, albeit with authorized names rather than identifiers, used for limited forms of semantic browse.
LD4 work and the broader community are moving from MARC→RDF, from authorized names to URIs, and toward better linking with the web.
[Photo] Henriette Avram, 1919–2006, American computer programmer and systems analyst who developed MARC. https://en.wikipedia.org/wiki/Henriette_Avram
4. Production Scale
Cornell catalog has ~9M records (~8M physical, ~1M electronic).
Cataloging staff must keep up with new acquisitions; RSI is a real issue.
Rarely start from scratch: base on vendor-supplied or community records, or a record for a similar resource.
Specialists covering many languages.
[Photo] Library Technical Services space in Olin Library, Cornell University.
5. MARC → RDF
Past work on ontology development, but current focus around the BIBFRAME model from the Library of Congress (LC), still evolving.
Conversions yield ~100 triples from each MARC record (sketch below).
Cornell: 9M records → ~1 billion triples (cf. WorldCat scale: 440M bib records, 2.7G holdings).
Community will still rely on centralized services, but this opens the possibility for other models too, and ad-hoc links.
[Figure] Key entity types in BIBFRAME
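To make the scale arithmetic concrete: 9M records × ~100 triples/record ≈ 900M, i.e. on the order of a billion triples. Here is a minimal sketch of the kind of mapping step involved, not LC's actual converter; it assumes the pymarc and rdflib libraries, a hypothetical input file records.mrc, placeholder work URIs, and a deliberately simplified title mapping (BIBFRAME proper models titles as bf:Title resources rather than literals).

```python
# Sketch only: map MARC 245 $a titles into BIBFRAME-style triples. Real
# converters (cf. LC's marc2bibframe2) emit on the order of 100 triples per
# record; BIBFRAME proper models titles as bf:Title resources, not literals.
from pymarc import MARCReader
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

g = Graph()
with open("records.mrc", "rb") as fh:                  # hypothetical input file
    for i, record in enumerate(MARCReader(fh)):
        work = URIRef(f"http://example.org/work/{i}")  # placeholder URI minting
        g.add((work, RDF.type, BF.Work))
        for field in record.get_fields("245"):
            for title in field.get_subfields("a"):
                g.add((work, BF.title, Literal(title)))

print(f"{len(g)} triples converted")
```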
6. Shapes
cf. Khan, Folsom, et al., poster at US2TS 2018.
Want re-use and hence interested in shared shapes. Mechanics may be a mix of SHACL, ShEx, schema (a SHACL sketch follows).
Currently no decoupling of validation from forms, a controlled environment.
https://drive.google.com/file/d/1M_xhnG8qYL7M9akvIRSETfOgeSEfS9oh/view
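The slide leaves the validation mechanics open (SHACL, ShEx, schema). As one possibility, here is a minimal sketch of validation decoupled from any editing form, using the pyshacl library; the shape and data are made-up examples, not LD4P or Sinopia artifacts.

```python
# Sketch: SHACL validation decoupled from any editing form, using pyshacl.
# The shape and data below are made-up examples, not LD4P/Sinopia artifacts.
from pyshacl import validate

shapes_ttl = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix bf: <http://id.loc.gov/ontologies/bibframe/> .
@prefix ex: <http://example.org/shapes/> .

ex:WorkShape a sh:NodeShape ;
    sh:targetClass bf:Work ;
    sh:property [ sh:path bf:title ; sh:minCount 1 ] .
"""

data_ttl = """
@prefix bf: <http://id.loc.gov/ontologies/bibframe/> .
@prefix ex: <http://example.org/> .

ex:work1 a bf:Work .
"""

# ex:work1 has no bf:title, so conforms should be False.
conforms, _, report = validate(data_ttl,
                               shacl_graph=shapes_ttl,
                               data_graph_format="turtle",
                               shacl_graph_format="turtle")
print(conforms)
print(report)
```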
7. Linking Our Data - Focus on Lookups
Build UI and infrastructure around discovery of related entities (a minimal lookup sketch follows this list). We know:
➔ Evolving community norms: appetite for a variety of linked datasets and associated lookup services; how to link each well and efficiently; sensitivity to inclusive descriptions
➔ Complexity in how to search (recall/precision -- relevancy tests)
➔ Need context -- labels and types are nowhere near sufficient; what else to display to enable human verification/selection?
➔ Multiple sources for the same entity type (e.g. person in LC NAF, ISNI, ORCID)
➔ If available, hubs likely most efficient
➔ Largely untackled: maintenance and updates (traditional authorities have strong policies and practices which have benefit but can be stifling)
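For a flavor of what such a lookup call looks like, here is a minimal sketch against the id.loc.gov suggest service for LC name authorities; the endpoint and the OpenSearch-Suggestions response shape are assumptions to verify, and as the slide notes, the bare labels returned are nowhere near sufficient context on their own.

```python
# Sketch: candidate-entity lookup against the LC Name Authority File via the
# id.loc.gov "suggest" service. Endpoint and response shape (OpenSearch
# Suggestions JSON: [query, [labels], [counts], [uris]]) are assumptions.
import requests

def lc_naf_suggest(query: str) -> list[tuple[str, str]]:
    resp = requests.get("https://id.loc.gov/authorities/names/suggest/",
                        params={"q": query}, timeout=10)
    resp.raise_for_status()
    _, labels, _, uris = resp.json()
    return list(zip(labels, uris))

# Bare labels are not enough for human verification; a real UI would fetch
# more context (types, related works) for each candidate URI.
for label, uri in lc_naf_suggest("Taylor, Sam"):
    print(label, uri)
```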
8. Lookup Usability Experiments
● Building on VitroLib designs and results
○ Context generally useful and navigation to authoritative sources important
● Current LD4P2 usability work around Sinopia editor development
○ 6 participants across different institutions
○ Prototype based on LC BIBFRAME Editor (BFE)
○ Contextual information for persons and genre forms
○ Links to Wikipedia, ISNI, VIAF where available
○ Additional mockups
Slides from SWIB18 presentation; Folsom, Khan, et al.
9. A cataloger has a copy of a film, "Nowhere Boy" by "Sam Taylor", a British director.
10.–11. [image-only slides]
12. A cataloger is trying to add genre to a record: is "humorous fiction" the right term?
13. Lookup Usability: Preliminary Results
● Contextual information useful
○ Should also include related works, more identifying info
○ Identify source of information
● External sources such as university profiles, genre- or type-specific sites (e.g. Discogs)
● Vocabularies such as MeSH, AAT, Getty (depending on content)
● Links to Wikidata, ISNI, VIAF are useful to include
● Need consistent interface experience, use clearer icons
● Improve hierarchical navigation for subject areas/genre forms
14. Work Cycle I Data Flow Diagrams and Prototypes October 2018
Thanks to Astrid Usong, Stanford
15. Discogs -- External Source Data as Lookup
Recall: rarely start from scratch.
Cataloging old 45s at Cornell.
Exploring use of Discogs to generate a base record, directly integrated with the catalog editor tool.
17. Community Scale Experiments & Challenges
➔ 15 organizations in LD4P2 cohort + project partners
➔ Test editor and lookup infrastructure in a number of cataloging projects
Caching is needed because (most) authority sources don't provide sufficient and stable infrastructure for lookups (also associated validation, cleaning, and transformation for non-LD sources).
Static vs dynamic:
➔ caching works for static data, but a live query is needed if one expects catalogers to create new entities in "real time" and then be able to see them (see the sketch below)
➔ e.g. Wikidata - try against SPARQL API
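A minimal sketch of that static/dynamic split, assuming Wikidata's public SPARQL endpoint: repeat lookups come from a local cache, while misses go to the live service so recently created entities are visible. The query shape, caching policy, and quoting are illustrative only.

```python
# Sketch of the static/dynamic split: serve repeat lookups from a local cache,
# but hit Wikidata's live SPARQL endpoint so newly created entities appear.
# Query shape, caching policy, and quoting are illustrative only.
import requests

WDQS = "https://query.wikidata.org/sparql"
_cache: dict[str, list[tuple[str, str]]] = {}   # naive in-process cache

def wikidata_person_lookup(name: str) -> list[tuple[str, str]]:
    if name in _cache:                          # "static" path
        return _cache[name]
    query = f'''
        SELECT ?item ?itemLabel WHERE {{
          ?item rdfs:label "{name}"@en ;       # naive quoting: escape in real code
                wdt:P31 wd:Q5 .                # instance of: human
          SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en" . }}
        }} LIMIT 10'''
    resp = requests.get(WDQS, params={"query": query, "format": "json"},
                        headers={"User-Agent": "lookup-sketch/0.1"}, timeout=30)
    resp.raise_for_status()
    hits = [(b["item"]["value"], b["itemLabel"]["value"])
            for b in resp.json()["results"]["bindings"]]
    _cache[name] = hits                         # live result becomes cached
    return hits
```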
18. Discovery Experiments
Primary purpose of a library knowledge graph is to enable discovery of library resources -- the benefits of linked data are so far unproven.
➔ Parallels with ideas for lookups and linking
➔ Indexing -- already do some light inferencing from MARC into Solr (e.g. broader terms, alternates). What other data inclusion or inference is useful? (a sketch follows this list)
➔ Individual libraries are too small to develop search systems alone. Considerable effort around a Solr/Ruby system called Blacklight, where UI interactions are studied/improved together. What is broadly reusable?
➔ Most linked data UIs are awful! What good examples might we learn from?
LD4 Discovery Affinity Group holding open biweekly calls.
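As a rough illustration of the "light inferencing" point above, here is a sketch that expands a record's subject headings with broader terms at index time before posting to Solr. The broader-term map, field names, and Solr core URL are hypothetical placeholders, not Cornell's actual indexing pipeline.

```python
# Sketch: expand subject headings with broader terms at index time, then post
# the document to Solr. The broader-term map, field names, and core URL are
# hypothetical placeholders.
import requests

BROADER = {                                    # stand-in for a thesaurus lookup
    "Beatles (Musical group)": ["Rock groups", "Musicians"],
}

def index_record(solr_url: str, doc_id: str, title: str, subjects: list[str]):
    expanded = set(subjects)
    for s in subjects:
        expanded.update(BROADER.get(s, []))    # the "light inference" step
    doc = {"id": doc_id, "title": title, "subject_facet": sorted(expanded)}
    resp = requests.post(f"{solr_url}/update?commit=true", json=[doc], timeout=30)
    resp.raise_for_status()

index_record("http://localhost:8983/solr/catalog",   # hypothetical core
             "b123", "Nowhere Boy", ["Beatles (Musical group)"])
```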