This document discusses coordinating bibliographic references across organizations. It covers the type of literature and citation content shared, sources and formats of content, methods of gathering and delivering information, identifiers used, and interoperability with other platforms. The document provides an overview of the Biodiversity Heritage Library (BHL), including its book viewer, sharing of data through APIs and other methods, and open data downloads. It discusses BHL services like its names service and OpenURL, as well as assigning DOIs. The document outlines requirements for a citation repository, lessons learned from previous efforts, and plans to integrate BHL services with other databases and improve citation reconciliation.
BHL Technical Projects Update presented during the BHL Staff and Technical Meeting on September 26-27, 2012 at the Harvard Museum of Comparative Zoology
Cambridge, Massachusetts
The Biodiversity Heritage Library and bibliographic citations: towards new u...
Trish Rose-Sandler
The data model and user interface for the Biodiversity Heritage Library (BHL) portal at http://www.biodiversitylibrary.org/ were originally designed to accommodate the books and journals found in botanical garden and natural history museum libraries. As the size and reputation of the BHL grew, many publishers and individuals wanted to contribute content of more granular publication types, such as articles, book chapters, and dissertations. To ingest and serve these materials, BHL launched a separate portal, Citebank, hosted at citebank.org, in early 2011. Currently, Citebank contains over 180,000 citations linked to content files hosted either at citebank.org or externally. While feedback on Citebank has been positive, users indicated a desire to combine the services of the BHL and Citebank portals into a single interface that enables a unified search across all biodiversity literature. In response, BHL has begun expanding the data model of the BHL portal to accommodate articles, book chapters, treatments, and other segment-like material so that they can be searched alongside its traditional book and journal content. In parallel, the NSF-funded Global Names Architecture (GNA) project has enlisted Citebank to fulfill the role of a global biodiversity repository for bibliographic citations. In support of this, Citebank will provide a key functional component of the GNA: reconciliation services for citations. Once reconciled, citations can be linked either to scanned page images in the BHL or to PDFs uploaded by users; if neither exists, citations can point to other digital representations online. Experience with Citebank has yielded many lessons about working with diverse publication types, data formats, and contributors with varying levels of technical competency.
Those lessons were incorporated into a functional requirements document that is being used to inform development of the BHL data model. This talk will outline the functional requirements needed for a global citation repository for biodiversity and how those requirements will better serve the needs of the biodiversity community.
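The reconciliation step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not BHL's actual implementation: the normalization rules, field names, and function names are invented, and a real service would use fuzzy matching across many metadata fields.

```python
import re

def normalize(citation):
    """Build a crude reconciliation key from citation fields.

    Lower-cases, strips punctuation, and keeps only the title, year,
    and first-author surname. Illustration only.
    """
    key = " ".join([
        citation.get("title", ""),
        str(citation.get("year", "")),
        citation.get("first_author", ""),
    ])
    return re.sub(r"[^a-z0-9 ]", "", key.lower()).strip()

def reconcile(incoming, repository):
    """Match an incoming citation against a list of known citations."""
    index = {normalize(c): c for c in repository}
    return index.get(normalize(incoming))

repo = [{"title": "On the Origin of Species", "year": 1859,
         "first_author": "Darwin"}]
match = reconcile({"title": "On the origin of species!", "year": 1859,
                   "first_author": "Darwin"}, repo)
```

Once a match is found, the repository record (rather than the incoming duplicate) would be the one linked to page images or PDFs.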
Presented by Christa Burns
At NEBASE Annual Meeting - East (August 9, 2007, Lincoln, NE) and as a NEBASE Hour (September 5, 2007, online)
OCLC is piloting its new WorldCat Local service that will allow your library to customize WorldCat.org as a solution for local discovery and delivery services. WorldCat Local interoperates with locally maintained services like circulation, resource sharing and resolution to full text to present a locally branded interface to your patrons. Attend this session to learn how this new service works and to see the beta being run at the University of Washington Libraries.
Professional catalogers in an academic library have professional responsibilities in librarianship, scholarship, and service to the library, institution, and professional organizations. However, whatever catalogers do has to align with the strategic directions of the academic library and contribute to its institutional effectiveness. This presentation uses several projects from the Georgia Tech Library as examples to illuminate the role of the cataloger in the 21st-century academic library. It first discusses the cataloger's role in moving print collections out of the GT Library building and creating a seamless collection with Emory resources in the EmTech Library Services Center (LSC). It then discusses the cataloger's role in cataloging and metadata management from the perspectives of resource discovery, data curation, repository services, eResearch archives, and digitization.
Gary Price, MIT Program on Information Science
Micah Altman
Gary Price, who is chief editor of InfoDocket, contributing editor of Search Engine Land, co-founder of Full Text Reports and who has worked with internet search firms and library systems developers alike, gave this talk on Issues in Curating the Open Web at Scale as part of the Program on Information Science Brown Bag Series.
Presented at the International Internet Preservation Consortium (IIPC) Web Archiving Week, University of London, 16 June 2017.
Web archiving has become imperative to ensure that our digital heritage does not disappear forever, yet many institutions have not begun this work. In addition, archived websites are not easily discoverable, which severely limits their use. To address this challenge, OCLC Research has established the OCLC Research Library Partnership Web Archiving Metadata Working Group to develop a data dictionary that will be compatible with library and archives standards. Three reports on this project are available in July 2017, focused on metadata best practices guidelines, user needs and behaviors, and evaluation of web archiving tools.
More information: oc.lc/wam
Contact: Jackie Dooley, dooleyj@oclc.org
Open Metrics for Open Repositories at OR2012
Nick Sheppard
Slides for a paper on "Open Metrics for Open Repositories" based on the paper available from http://opus.bath.ac.uk/30226/ and presented by Nick Sheppard at the Seventh International Conference on Open Repositories (OR2012) held in Edinburgh from 9-13th July 2012.
Barbara Albee & Oliver Chen, Indiana University, School of Library and Information Science, Indianapolis
The purpose of this project is to examine the implementation of an open source library automation system, Evergreen, in Indiana public libraries and its impact on library users. Nine public libraries from the Evergreen Indiana consortium were invited to participate in the project. The research team recruited library users at the nine libraries with assistance from local librarians. The users are classified into three age groups: 18-24, 25-59, and 60 or above. The research team has collected data from the first two quarters of 2010 and reports preliminary results from those quarters. Preliminary correlation-coefficient analyses indicate that participants' use of the library system's functions is related to age, frequency of visits to the local library in the last 12 months, and level of use of the previous library system and the Evergreen system. Participants' use of the Evergreen system has the potential to change the way they use library services and collections.
Building the new open linked library: Theory and Practice
Trish Rose-Sandler
What tools and services are necessary to build an open linked library and how can we move existing digital library content into an open linked data model and use those tools to repurpose our own content?
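One way to move existing digital library content into a linked data model is to map flat catalog fields onto RDF-style (subject, predicate, object) triples. The sketch below is a toy illustration only: the record and the subject URI are invented examples, while the predicates are real Dublin Core Terms.

```python
# A flat catalog record (invented example data).
RECORD = {
    "title": "Flora of North America",
    "creator": "Anonymous",
}

def to_triples(subject_uri, record):
    """Map flat record fields onto Dublin Core-style predicates."""
    predicates = {
        "title": "http://purl.org/dc/terms/title",
        "creator": "http://purl.org/dc/terms/creator",
    }
    return [(subject_uri, predicates[k], v) for k, v in record.items()]

triples = to_triples("http://example.org/item/1", RECORD)
```

In practice a library like rdflib would serialize such triples to Turtle or JSON-LD for publication, but the core repurposing step is exactly this field-to-predicate mapping.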
Open source software for implementation of union catalogue
Beatrice Amollo
Adapting open source software for a union catalogue in Kenya is feasible, as demonstrated by the several successful union catalogues already operating around the world. Of particular importance is the agreement between the participating libraries; this is the hurdle that must be overcome before any progress can be made in this direction.
There are libraries in Kenya that have run open source ILSs long enough to provide the expertise and input needed for the actual implementation. Koha, as observed earlier, has gained considerable mileage in Kenya, and the different libraries' experiences with it will come in handy when deciding which software to adopt for the union catalogue.
Ikke om 10 minutter eller lige om lidt...
Peter Vittrup
Talk at the DKG Sommerhøjskole 2013 examining whether there is a contradiction between our digital reality and what is truly important. Read more at http://dkg.dk
Our digital lives. Participation. Friends. NOW!
Peter Vittrup
Presentation was part of Nordic Performing Arts Days 2014 (CPH STAGE) at a session named "The performing arts facing globalization, digitalization and co-creation".
A Linked Data Approach to Interoperability between Biomedical Resource Invento...
Trish Whetzel
Overview of Resource Representation Coordination efforts to coordinate the representation of resources from Biositemaps, eagle-i, and the Neuroscience Information Framework.
Services that recommend books (BibTip, LibraryThing, the University of Huddersfield's borrowing recommendations) and articles (bX from Ex Libris, PubMed, Synthese from CISTI) now exist in the academic context. JISC in the UK is sponsoring a major project, MOSAIC: "Making Our Shared Activity Information Count." This session will provide an overview of these recommendation systems, describe their different approaches to data mining, and discuss their role in improving information retrieval and the user experience in a now nearly fully online scholarly information world.
Keynote presentation delivered at ELAG 2013 in Gent, Belgium, on May 29, 2013. Discusses Research Objects and their relationship to work my team has been involved in during the past couple of years: OAI-ORE, Open Annotation, and Memento.
Global Library of Life: The Biodiversity Heritage Library
Martin Kalfatovic
Global Library of Life: The Biodiversity Heritage Library. Martin R. Kalfatovic. Boston Library Consortium Meeting. Boston Public Library. 18 March 2008. Boston, MA.
NISO access related projects (presented at the Charleston conference 2016)
Christine Stohn
Presentation by Pascal Calarco (University of Windsor), Christine Stohn (Ex Libris/ProQuest), John G. Dove (Paloma Associates), covering NISO D2D work, ResourceSync, KBART and KBART automation, ODI (Open Discovery Initiative), Link origin tracking, ALI (Access and License Indicators), and a discussion around improvements and challenges for open access discovery
Implementing web scale discovery services: special reference to Indian Librar...
Nikesh Narayanan
Web scale discovery services are becoming the widely adopted information retrieval solution in libraries across the world, connecting patrons with the relevant information they seek. In line with this global trend, discovery solution implementation is also gathering momentum in Indian libraries.
Considering the Indian library scenario, this paper attempts to provide an overview of library web scale discovery solutions, their need in Indian libraries, important parameters to consider when evaluating discovery services, essential factors to consider prior to implementation, the stages of implementation, and finally some thoughts on post-implementation analysis for measuring success.
Digital Library Infrastructure for a Million Books
Steve Toub
Describes what library infrastructure is needed for digital humanities use of mass digitized collections. Given at the Million Books Workshop, May 2007.
Revolutionary and Evolutionary Innovation - Marshall Breeding, CONUL Conference
Presented at the CONUL Conference, July 2015, Athlone, Ireland by Marshall Breeding.
Biography
Marshall Breeding is an independent consultant, speaker, and author. He is the creator and editor of Library Technology Guides and the libraries.org online directory of libraries on the Web. His monthly column Systems Librarian appears in Computers in Libraries; he is the Editor for Smart Libraries Newsletter published by the American Library Association, and has authored the annual Library Systems Report published by Library Journal from 2002-2013 and by American Libraries since 2014. He has authored nine issues of ALA's Library Technology Reports, and has written many other articles and book chapters. Marshall has edited or authored seven books, including Cloud Computing for Libraries, published in 2012 by Neal-Schuman, now part of ALA TechSource. He regularly teaches workshops and gives presentations at library conferences on a wide range of topics.
He has been an invited speaker for many library conferences and workshops throughout the United States and internationally. He has spoken throughout the United States and in Korea, Taiwan, Thailand, China, Singapore, India, Japan, Australia, New Zealand, Iceland, the Czech Republic, Slovenia, Israel, Austria, Germany, The Netherlands, Norway, Denmark, Sweden, Spain, Ireland, the United Kingdom, Lebanon, Jordan, Colombia, Chile, Mexico, and Argentina.
Marshall Breeding held a variety of positions at the Vanderbilt University Libraries in Nashville, TN, from 1985 through May 2012, including Director for Innovative Technologies and Research and Executive Director of the Vanderbilt Television News Archive.
Breeding was the 2010 recipient of the LITA/Library Hi Tech Award for Outstanding Communication for Continuing Education in Library and Information Science.
Read his Guideposts blog on Library Technology Guides at:
www.librarytechnology.org
Finding the annotation needs of the botanical community in a digital library
William Ulate
The Center for Biodiversity Informatics at the Missouri Botanical Garden and Saint Louis University are analyzing the web annotation needs of the botanical community to develop a prototype of how those needs may be met within a digital library platform. We want to assess the practicality of existing tools to satisfy the technical, economic, and operational needs of botanical users who annotate. This will inform requirements, best practices, and further development for a research project to integrate an annotation tool within a virtual library. We surveyed 14 members of 10 different institutions in the botanical and scientific communities. We included both those who currently annotate online and those who have only annotated offline (e.g., in print or analog form), in order to better understand the functionality needed to encourage and support online annotation. The survey answers were analyzed in the context of an annotation tool in a digital library, and a prioritized list of annotation needs for users of a botanical virtual library was produced, taking into account the minimal and recommended functionality required to meet users' requirements. Preliminary results from the in-depth user assessments of annotation needs in the specific domain of botany are shared with attendees, and advances in the definition of a prototype are also shown.
Botanists and annotations printer friendly
William Ulate
Findings from I Annotate 2016 concluded that the uptake of web annotation could be sufficiently moved forward by tackling three key issues: 1) interoperability, 2) domain use cases, and 3) user centered design. The Center for Biodiversity Informatics at the Missouri Botanical Garden has identified valuable use cases for developing in-depth user assessments of annotation needs in the specific domain of botanists. This presentation will share those use cases and talk about next steps in serving the annotation needs of botanists and their relevance for the larger scientific domain.
Expanding Access to Biodiversity Literature. Mining Biodiversity.
William Ulate
Mining Biodiversity project introduction and advance report at the CBHL Annual Meeting, in the Cleveland Botanical Garden on May 26, 2016. Also feedback request for Semantic Search User Interface that employs Query Expansion using Term Inventory.
Mining Biodiversity Project presentation for Digging Round Three Conference, January 27-28 2016. http://diggingintodata.org/awards/2016/news/digging-round-three-conference
Unlocking knowledge in biodiversity legacy literature through automatic seman...
William Ulate
BHL is home to most of the world's biodiversity legacy literature. To allow its users to find information in a more focused and efficient manner, efforts towards the development of a semantically enabled search engine are currently underway. To this end, semantic metadata in the form of concept annotations has been automatically extracted over the BHL collection using text mining (TM) techniques. This was carried out in a series of stages: (1) producing a moderately sized BHL corpus in which concepts have been manually marked up and assigned semantic labels, e.g., taxon, location, anatomical entity, habitat; (2) training machine learning-based concept recognition models on that corpus; (3) applying the trained models to BHL documents in order to automatically recognize and assign semantic labels to concepts; and (4) automatically linking together semantically related concepts using distributional similarity methods. BHL documents were then indexed according to the semantic annotations automatically generated by this TM methodology, which enables the incorporation of the following features into BHL's search engine: (1) query expansion, which helps users widen their searches through automatic suggestion of synonyms; and (2) semantic facets, which users can specify to narrow down search results and filter out documents pertaining to unwanted word senses.
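Query expansion, feature (1) above, can be illustrated with a toy sketch. The synonym inventory below is invented for the example and is not BHL's actual term inventory, which is built from the mined concept annotations.

```python
# Hypothetical synonym inventory; a real one would come from the
# distributional-similarity linking described above.
SYNONYMS = {
    "cougar": ["puma", "mountain lion", "Puma concolor"],
    "habitat": ["environment", "biotope"],
}

def expand_query(query):
    """Widen a search query with synonyms of each term."""
    terms = []
    for word in query.lower().split():
        terms.append(word)
        terms.extend(SYNONYMS.get(word, []))
    return terms

expanded = expand_query("cougar habitat")
```

The expanded term list would then be OR-ed together when querying the index, so documents using any synonym are retrieved.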
Engaging the Citizen Scientist in Content Enhancement for BHL
William Ulate
This presentation will discuss two current crowdsourcing activities initiated with BHL content: Science Gossip (implemented by ConSciCom on top of the Zooniverse platform) and two online games (Beanstalk and Smorball, developed by Tiltfactor @ Dartmouth).
"What should a flora/fauna/mycota of the future be able to do for me?" My flash talk presentation during pro-iBiosphere Workshops in Berlin, 21 May 2013.
Presentation given by William Ulate, BHL Technical Director and Global BHL Coordinator, on Wednesday, February 13, 2013 during the pro-iBiosphere meeting in Leiden
The Biodiversity Heritage Library: an Open Global Resource of Literature for ...
William Ulate
As part of the scientific method and the peer review practiced by scientists, and particularly by taxonomists, it is essential to be able to access the specimens and original publications used to describe a new species, published in books and journals over more than three centuries.
The Global BHL (Biodiversity Heritage Library) is a cooperative network of autonomous organizations and institutions that operate programs and projects to support the goal of making biodiversity literature available to all through open access. Currently, the European Commission, the Chinese Academy of Sciences, the Museum Victoria as part of the Atlas of Living Australia, SciELO Brazil, and the Bibliotheca Alexandrina in Egypt have all created regional BHL nodes. These projects are working together to share content, protocols, services, and digital preservation practices to support research, policy and conservation through appropriate repatriation of scientific information.
In recent years, several biodiversity informatics initiatives have been promoted in Africa by different donors. One of them, the JRS Foundation, supported a gathering in November 2011 at which ten African librarians, biologists, computer scientists, publishers, and students were brought together in Chicago, USA, during the Life and Literature Conference to decide on African needs and objectives related to biodiversity literature digitization.
A follow-up organizational meeting will take place in June 2012 to collaborate on the development of a BHL node for Africa: an open global resource of literature for African biodiversity scientists. Topics to be covered include sharing previous experiences of organizing a BHL node, following the successful models developed in Australia and Brazil; the appropriate metadata delivery infrastructure; and how to coordinate scanning and synchronize the repositories of titles that are important for biodiversity scientists in Africa, including grey literature and publications produced within the continent.
Elevating Tactical DDD Patterns Through Object Calisthenics
Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Connector Corner: Automate dynamic content and events by pushing a button
DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
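The approve/reject branching described above can be sketched abstractly as follows. This is a hypothetical illustration of the human-in-the-loop pattern only: the function names are stand-ins, not the real Integration Service connector API.

```python
# Route a human reviewer's button click to the right downstream action:
# "Approve" creates a design-team ticket, anything else alerts via Slack.
def handle_review_response(button, create_ticket, notify_slack):
    """Dispatch an approval decision to the injected connector actions."""
    if button == "Approve":
        return create_ticket("Marketing design: build approved campaign")
    return notify_slack("Campaign was rejected; please revise")

# Stand-in connector actions for the example.
result = handle_review_response(
    "Approve",
    create_ticket=lambda text: ("ticket", text),
    notify_slack=lambda text: ("slack", text),
)
```

Passing the connector actions in as parameters keeps the branching logic testable independently of any real Jira, Zendesk, or Slack integration.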
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Neuro-symbolic is not enough, we need neuro-*semantic*
Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
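As a toy illustration of link prediction over a knowledge graph, a TransE-style model scores a candidate triple (head, relation, tail) by how closely head + relation lands on tail. The two-dimensional embeddings below are invented for the example; real models learn hundreds of dimensions from data.

```python
import math

# Invented 2-D embeddings for entities and one relation.
EMB = {
    "paris":      [0.9, 0.1],
    "france":     [1.0, 0.5],
    "germany":    [0.3, 0.5],
    "capital_of": [0.1, 0.4],
}

def score(head, relation, tail):
    """TransE score: -||h + r - t||; higher means more plausible."""
    h, r, t = EMB[head], EMB[relation], EMB[tail]
    return -math.sqrt(sum((hi + ri - ti) ** 2
                          for hi, ri, ti in zip(h, r, t)))

plausible = score("paris", "capital_of", "france")
implausible = score("paris", "capital_of", "germany")
```

The point of the talk's argument is that such scores only constitute "predictable inference", i.e. a real semantics, when the symbolic structure being embedded carries meaning the model can be held to.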
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. Fostering a culture of innovation, however, takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Generating a custom Ruby SDK for your web service or Rails API using Smithy
Bibliographic References in BHL
1. Bibliographic references in BHL
Coordination and routes for
cooperation across organizations,
projects and e-infrastructures
23rd of May 2013
William Ulate R., Missouri Botanical Garden
2. Questions to Answer
1. Type of content we discuss (e.g., occurrences, genes, behaviour, morphology, etc.)
2. Sources of content (from where)
3. Formats of content (formats, standards)
4. Methods of gathering information (e.g., harvesting, FTP uploads, protocols)
5. Methods of delivery of information (e.g., free searches, API, web services, automated exports, linking mechanisms, etc.; provide links to API and web services documentation)
6. Identifiers used (type, persistence, dereferencing, resolvability)
7. Present or forthcoming interoperability features with other platforms
8. Constraints, needs and expectations to:
a) Suppliers of content, and
b) Users of content
9. What is needed for Bibliographic References?
7. Open Data
• Downloads
– Simple tab-delimited exports of core data
– http://www.biodiversitylibrary.org/data/BHLExportSchema.pdf
• Data model
– DB schema as ERD
– http://bhl-bits.googlecode.com/files/20090930_BHLDataModel.pdf
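The tab-delimited exports can be consumed with any standard CSV tooling. Below is a minimal Python sketch; the column names in the sample are hypothetical stand-ins, and the authoritative field list is in the BHLExportSchema.pdf linked above.

```python
import csv
import io

# Hypothetical sample mimicking a tab-delimited BHL export;
# the real column names are defined in BHLExportSchema.pdf.
sample = (
    "TitleID\tFullTitle\tPublisherName\n"
    "1234\tSpecies Plantarum\tImpensis Laurentii Salvii\n"
)

# DictReader with a tab delimiter turns each export row into a
# dict keyed by the header line.
rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
for row in rows:
    print(row["TitleID"], row["FullTitle"])
```

The same two lines of code work against the full export files once downloaded, by replacing the in-memory sample with an open file handle.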
8. Services
• Names Service
– Return all occurrences of a name throughout BHL digitized corpus
• Documentation: http://bit.ly/2e6sg9
– Access to 100+ million name strings using TaxonFinder & NetiNeti
• 1.5 million unique names
– Algorithm to detect nomenclatural & taxonomic acts
• OpenURL
– Facilitate links to citations: protologues, articles, references
• Documentation: http://www.biodiversitylibrary.org/openurlhelp.aspx
– Useful to Nomenclators, Reference Systems
• IPNI
• Tropicos
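An OpenURL link to a citation is just a query string built from "article coordinates" (journal title / volume / start page / year). The sketch below assembles such a request; the parameter set shown follows common OpenURL practice, and the `format=json` response option is an assumption — check the BHL OpenURL documentation linked above for the exact fields supported.

```python
from urllib.parse import urlencode

# Base resolver URL from the BHL OpenURL help page; verify the
# supported parameter names against that documentation.
BHL_OPENURL = "https://www.biodiversitylibrary.org/openurl"

def bhl_openurl(title, volume, spage, date):
    """Build an OpenURL query from article coordinates:
    journal title / volume / start page / year."""
    params = {
        "genre": "journal",
        "title": title,
        "volume": volume,
        "spage": spage,
        "date": date,
        "format": "json",  # assumption: machine-readable response
    }
    return BHL_OPENURL + "?" + urlencode(params)

url = bhl_openurl("Species Plantarum", "2", "971", "1753")
```

This is the mechanism that lets nomenclators such as IPNI and Tropicos link directly from a name record to the protologue page.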
11. DOIs for Legacy Literature
• BHL member of CrossRef through Smithsonian
• Started assigning DOIs to BHL monographs
– Low hanging fruit: Easy, non-controversial
– 54,856 DOIs Approved to date
• Next, other publication types / articles?
– Automatically assigning CrossRef DOIs to articles has a higher potential for collisions.
12. Article-level metadata
• Disambiguating and locating structural components in the corpus
• Done by automated and crowdsourced means
– Thanks Rod Page! Welcome others!
• Greatly increases semantic value of the dataset
• Makes data addressable and thus linkable
Chapter-level metadata / Treatment-level metadata / Part-level metadata
13. Genesis: “BHL Article Repository”
• Idea first introduced at TDWG 2008, Fremantle (by BHL; many have discussed it for years)
• A "YouTube" for biodiversity articles
• Needed (and still need) a way to access articles in BHL
– "BHL has no articles."
– BHL has hundreds of thousands of articles, but you can't search for them via author or article-title search
– They can be found via "article coordinates" using BHL's UI & OpenURL resolver: Journal / Volume / Start Page / Year
14. CiteBank
• Objectives
– Create a repository for community-vetted taxonomic bibliographies.
– Ability to ingest, display, download, and index articles so that the BHL can operate as an article repository.
– Provide links to content published online through other repositories.
• Launched on December 6, 2010
• 185,609 bibliographic records to date
18. Lessons Learned
• The Biblio/Drupal data model was insufficient for the mass of data envisioned for all biodiversity: too flat, and difficult to expand in collaboration with the Biblio development community
• Data providers want their content findable and managed in the Biodiversity Heritage Library, not in a system alongside BHL
• Maintaining two platforms for biodiversity literature threatens the sustainability of the literature resources over the longer term
20. What have we done?
• Articles
– Extended BHL data model to store article metadata
– Built process to harvest data from BioStor
• Created user interfaces for adding article metadata and associated files
– Defined functional requirements as improvements to Drupal-based Citebank
– Defined process flow for adding article metadata and associated files
– Implemented UI changes
• Changed BHL UI to accommodate article search
• Changed BHL UI to accommodate article display (TOC)
25. Requirements for a citation repository?
Admin. Interface
– IMPORT AND MAPPING TOOL
• Preview/Accept/Reject/Undo/Report on Import
• No standard schema, MODS or BibTeX
• Drag & drop GUI or mapped source and target field config.
– USER MANAGEMENT
• Self-Registration
• Admin. Approval & Deletion
• User Roles Assignment
– GLOBAL UPDATES
26. Requirements for a citation repository?
General User Interface
– IMPORT
• Upload/Preview/Accept/Reject/Undo/Report on Import
– CREATE CITATION
• By filling in a form, or via BibTeX
– BROWSE
• Faceted: title, author, subject, year, contributor, my citations
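The "create citation via BibTeX" requirement could be prototyped with a few regular expressions. The sketch below is a deliberately minimal parser (it ignores nested braces, quoted values, and @string macros), not a full BibTeX implementation, and the entry shown is a hypothetical example.

```python
import re

# A hypothetical BibTeX entry for the Species Plantarum example.
entry = """@book{linnaeus1753,
  author = {Linnaeus, Carl},
  title  = {Species Plantarum},
  year   = {1753},
  volume = {2}
}"""

def parse_bibtex(text):
    """Extract entry type, cite key, and flat {braced} fields."""
    kind, key = re.match(r"@(\w+)\s*\{\s*([^,\s]+)\s*,", text).groups()
    fields = dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", text))
    fields["entry_type"] = kind
    fields["citekey"] = key
    return fields

citation = parse_bibtex(entry)
```

A production importer would instead hand the text to a real BibTeX library and map the result onto the repository's citation types.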
27. Requirements for a citation repository?
• CITATION TYPES
– Journal Article, Book Chapter, Conference Proceedings,
Conference Paper, Thesis, Government Report, Note, etc.
• OAI HARVESTING
– Harvest and serve data through OAI-PMH
• SPECIFICATIONS FOR DATA PROVIDERS PAGE
• CONTRIBUTORS PAGE
– Recognize ALL contributions
• REPORTING
– Statistics Page by Citation and Publication type
– Recent/Latest Uploads
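OAI-PMH requests are plain HTTP GETs with a `verb` parameter, so the harvesting side of the requirement is easy to sketch. The endpoint below is a placeholder; the repository's real advertised OAI-PMH base URL would be substituted.

```python
from urllib.parse import urlencode

# Placeholder endpoint; a real harvester points at the repository's
# advertised OAI-PMH base URL.
BASE_URL = "https://example.org/oai"

def list_records(base_url, metadata_prefix="oai_dc",
                 set_spec=None, resumption_token=None):
    """Build an OAI-PMH ListRecords request URL.

    Per the OAI-PMH spec, resumptionToken is an exclusive argument:
    when present it replaces the other selective-harvesting arguments.
    """
    if resumption_token:
        params = {"verb": "ListRecords",
                  "resumptionToken": resumption_token}
    else:
        params = {"verb": "ListRecords",
                  "metadataPrefix": metadata_prefix}
        if set_spec:
            params["set"] = set_spec
    return base_url + "?" + urlencode(params)

url = list_records(BASE_URL, set_spec="taxonomy")
```

Serving data is symmetric: the repository answers the same six verbs (Identify, ListRecords, GetRecord, etc.) with XML responses, `oai_dc` being the mandatory minimum metadata format.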
28. What are we doing?
• Integrate BHL’s Services with ZooBank, IPNI & IF
• Authoritative list of titles in common use for
nomenclatural acts (“TL3”)
• Harvest relevant content from Mendeley
• Integrate services and interfaces with the GNUB data model
• Interoperate with citation parsing tools & services
29. Support citation reconciliation
– L. Sp. Pl. 2: 971. 1753
– Linneaus, C. Species Plantarum, vol. 2 p. 971. 1753
– Linné, Carl von. Sp. Pl. Vol. 2 Page 971. 1753
– Caroli Linnaei, Species Plantarum exhibentes plantas rite cognitas, ad genera relatas, cum Differentis Specificis, Nominibus Trivialibus, Synonymis Selectis, Locis Natalibus, secundum SYSTEMA SEXUALE digestas.. 2:971. 1753
Zea mays
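One simple way to reconcile such variants is to reduce each string to a coarse matching key: the volume/page/year "microcitation coordinates" survive most abbreviation styles, even when author and title forms differ. The sketch below is an illustrative heuristic, not BHL's actual reconciliation service.

```python
import re

def citation_key(citation):
    """Reduce a citation string to (volume, page, year) coordinates.

    Handles common forms like '2: 971. 1753', 'vol. 2 p. 971. 1753'
    and 'Vol. 2 Page 971. 1753'. Purely illustrative.
    """
    coords = re.search(
        r"(?:vol(?:ume)?\.?\s*)?(\d+)\s*(?::|,?\s*p(?:age)?\.?)\s*(\d+)",
        citation, re.IGNORECASE)
    years = re.findall(r"\b1[5-9]\d{2}\b|\b20\d{2}\b", citation)
    if not (coords and years):
        return None
    return coords.group(1), coords.group(2), years[-1]

# The variant strings from the slide above, including the misspelled
# 'Linneaus', all collapse to a single key.
variants = [
    "L. Sp. Pl. 2: 971. 1753",
    "Linneaus, C. Species Plantarum, vol. 2 p. 971. 1753",
    "Linné, Carl von. Sp. Pl. Vol. 2 Page 971. 1753",
]
keys = {citation_key(v) for v in variants}
```

Real reconciliation would layer fuzzy title/author matching on top of such keys, but coordinates alone already cluster most protologue citations.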
30. Questions to Answer
1. Type of content - Literature, Images, OCR Text and Bibliographic Citations
2. Sources of content - BHL, CiteBank & other repositories
3. Formats of content - BibTeX, MODS, DC
4. Methods of gathering info - Harvesting, FTP uploads
5. Methods of delivery of info - Free searches, API, web services, exports, linking mechanisms
6. Identifiers used - CrossRef DOIs for monographs
7. Interoperability with other platforms - ZooBank, IPNI, IF
8. Constraints, needs and expectations to suppliers of content and users of content
31. Thank you
pro-iBiosphere Meeting 3
Coordination and routes for cooperation across organizations, projects and e-infrastructures
Berlin, Germany
May 23rd, 2013
William.Ulate@mobot.org
Global BHL Project Manager
BHL Technical Director
Senior Project Manager
Missouri Botanical Garden
Editor's Notes
Guidelines for speakers giving presentations:
Presentations are limited to 15 minutes for each speaker plus 5 minutes for discussion. Presentations should clearly answer the following questions (7-8 slides), definitely focusing on the interoperability problem:
1. Type of content we discuss (e.g., occurrences, genes, behaviour, morphology, etc.)
2. Sources of content (from where)
3. Formats of content (formats, standards)
4. Methods of gathering information (e.g., harvesting, ftp uploads, protocols)
5. Methods of delivery of information (e.g., free searches, API, web services, automated exports, linking mechanisms, etc.; provide links to API and web services documentation)
6. Identifiers used (type, persistence, dereferencing, resolvability)
7. Present or forthcoming interoperability features with other platforms
8. Constraints, needs and expectations to: a) Suppliers of content, and b) Users of content
Plus an overall picture of what is needed within a certain domain (e.g., for names, references, genes, images, etc.) (2-3 slides).
The final outputs of presentations and discussions should be two-fold:
- A summary table encompassing the answers to the above questions, which will be a basis for the whitepaper and future work
- A discussed MoU draft, proposing an Advisory Board of key stakeholders that will form the ground for a consortium to develop and launch the future BKMS
Tasks involved:
Task 2.1. Coordination and routes for cooperation across organizations, projects and e-infrastructures (lead: Plazi).
Encompassing the information gathered at Workshop 1 (Leiden, February 2013) and through the online questionnaire.
Task 4.1. Improve technical cooperation and interoperability at the e-infrastructure level (lead: FUB-BGBM).
Task 4.2. Promote and monitor the development and adoption of common mark-up standards and interoperability between schemas by identifying technical and societal constraints and needs to increase collaboration and interoperability between e-platforms and projects, and by envisioning practical solutions towards the Biodiversity Knowledge Management System (lead: Plazi).
Concrete examples of ideas for potential points in a draft MoU:
A primary purpose of the "Routes towards cooperation" meeting is to increase our reciprocal understanding and progress towards a multi-institutional Memorandum of Understanding (MoU). The following are potential points in a draft MoU. You are welcome to comment on them here on the wiki before the meeting takes place, or to add further points. The results would then have to be further discussed at the appropriate levels.
- Establishment of a multi-institutional focus group to coordinate software development to improve the efficiency of resource use by means of common Open Source based development projects using Open Source methodology.
- Agreements on specialization, e.g., one institution specializes in geographical analysis and visualization, providing services to other institutions or projects.
- Agreement on long-term management procedures to provide stable identifiers. This agreement may be technology neutral (except that some way to use the identifiers in the human-readable as well as the semantic web should be specified). Both stable http-URIs (preferred in the semantic web) and DOI technology (publishing industry) are possible implementations.
- Agreement on following the Linked Open Data example. (Note: Edinburgh may be a best practices example?)
- Agreement to communicate the data policies according to the Linked Open Data five-star scoring.
- Policy agreements on Open Access.
- Agreement to register all services that are provided to other biodiversity institutions in the Biodiversity Catalogue (Univ. Manchester, myExperiment).
- Agreement to communicate the expected and planned stability of services by means of a standard vocabulary (e.g.: undecided, experimental, long-term service without fixed API, long-term service with stable and versioned API).
- Agreement to collaborate on the development of shared term definitions (glossary-style), with the understanding that new terms can be freely added but an effort will be made to re-use or improve existing term definitions.
- Agreement on crowdsourcing activities to clean up data (e.g. bibliographic references) or mark up content in legacy literature (e.g. scientific names, treatments, material citations).
- Paul Kirk: Centrally 'cached' data should have a clear mechanism for providing usage statistics back to sources.
1. Type of content we discuss (e.g., occurrences, genes, behaviour, morphology, etc.)
2. Sources of content (from where)
3. Formats of content (formats, standards)
4. Methods of gathering information (e.g., harvesting, ftp uploads, protocols)
5. Methods of delivery of information (e.g., free searches, API, web services, automated exports, linking mechanisms, etc.; provide links to API and web services documentation)
6. Identifiers used (type, persistence, dereferencing, resolvability)
7. Present or forthcoming interoperability features with other platforms
8. Constraints, needs and expectations to: a) Suppliers of content, and b) Users of content
[PortalUser Interface]
[Book Viewer Interface]
We ask the user to provide metadata if they’re generating a chapter or book title
On legacy literature: what are your plans with BHL, and especially your move into content? Growth; More Global Content; Taxon Names; Article Metadata; Microcitations and COinS; API; ZooBank; OCR improvements through Gaming; Crowdsource Markup; WFO?
[Citebank homepage]
[Citebank homepage]
[Citebank stats]
[World in which CiteBank lives]
[Citations in BHL and Sustainability Considerations]
[Citebank homepage]
[GNA Diagram]
[Define functional requirements]
[Where are we going?]
[Diagram of citations reconciliation]