Linked data has been hailed as a disruptive innovation that will change the way we organize and discover information, but what does it really mean for catalogers and metadata creators?
Strange new world: linked data for catalogers and metadata librarians
1. Strange New World: Linked Data for Catalogers and Metadata Librarians
Online Northwest Conference 2014
2. Linked Data Basics
• Records → Statements
• Documents → Data
• Resource Description Framework (RDF)
• Relationships defined in “triples”
  – Thing A → Has relationship to → Thing B
• Record becomes a graph formed by triples
• Foundation = URIs
• Not limited to libraries
• Database (closed) → Web (open)
• SQL → SPARQL
10. Edward Abbey / is author of / Desert Solitaire
• Edward Abbey [http://lccn.loc.gov/n78093802]
• is author of [http://purl.org/dc/elements/1.1/creator]
• Desert Solitaire [http://www.worldcat.org/oclc/17353644]
77. Linked Data Resources
• Linked Open Data: The Essentials by Florian Bauer and Martin Kaltenbock (http://www.semantic-web.at/LOD-TheEssentials.pdf)
• Guides and tutorials on Linked Data Cloud (http://linkeddata.org/guides-and-tutorials)
• Stanford Linked Data Workshop (2011) (http://lib.stanford.edu/files/Stanford_Linked_Data_Workshop_Report_FINAL.pdf)
• Library Technology Reports:
  – By Karen Coyle:
    • v. 46, no. 1, 2010. “Understanding the Semantic Web: Bibliographic Data and Metadata”
    • v. 48, no. 4, 2012. “Linked Data Tools: Connecting on the Web”
  – By Erik T. Mitchell:
    • v. 49, no. 5, 2013. “Library Linked Data: Research and Adoption”
Linked data is all about breaking documents and records into individual statements represented by RDF triples. Each element of the triple has a value, which can be a URI or a literal. Much as we use SQL to search our existing databases, SPARQL is the new query language that will enable searching a linked data environment.
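The note above can be sketched in a few lines of plain Python: triples stored as tuples (reusing the URIs from the Edward Abbey slide), with a small pattern-matcher standing in for a SPARQL basic graph pattern. This is an illustration only; the `title` and `name` predicates are shorthand I made up, not real vocabulary terms.

```python
# A record broken into triples, then queried SPARQL-style by pattern.
triples = [
    ("http://lccn.loc.gov/n78093802", "http://purl.org/dc/elements/1.1/creator",
     "http://www.worldcat.org/oclc/17353644"),
    ("http://www.worldcat.org/oclc/17353644", "title", "Desert Solitaire"),
    ("http://lccn.loc.gov/n78093802", "name", "Edward Abbey"),
]

def match(pattern, store):
    """Return triples matching a (s, p, o) pattern; None acts as a wildcard,
    much like a ?variable in a SPARQL basic graph pattern."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What is Edward Abbey's URI the creator of?" -- analogous to
# SELECT ?o WHERE { <...n78093802> dc:creator ?o }
hits = match(("http://lccn.loc.gov/n78093802",
              "http://purl.org/dc/elements/1.1/creator", None), triples)
print(hits[0][2])  # the object URI (the WorldCat record for Desert Solitaire)
```

The point of the sketch: once data is statements rather than records, any slot of any statement is queryable.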
Need to get our data where the users are—that’s usually not the library’s page. (This graph is taken from the University of Northern Colorado’s 2013 LibQual+® Survey.)
Authority control based on text matching is easily broken.
We are accustomed to manipulating and acting on bits of data using some of our existing software (MS Excel here). Imagine the power of an open web of data.
http://www.gapminder.org/ Here's one example of a website using various data repositories to create powerful visualizations.
A tiny sampling of the data utilized by Gapminder.
http://well-formed.eigenfactor.org/ Another data visualization.
To understand the underlying concepts of linked data, we need to start at the basics. We can think of linked data as a series of interlinking RDF triples. All the linkages can be displayed as a graph.
Here’s the simple RDF triple. Subject, predicate, object. Here you see that you can use an URI to represent each piece of this data.
When describing a subject, many statements are aggregated.
Of course, as data sets are published on the web, the same concept may be described in many places. The Web Ontology Language (OWL) provides the "sameAs" property to cross-reference such data.
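A minimal sketch of what resolving owl:sameAs links involves: collect the sameAs pairs, then group the identifiers into equivalence classes (here with a tiny union-find). The VIAF and id.loc.gov identifiers below are hypothetical placeholders for illustration, not verified authority numbers.

```python
# sameAs pairs linking three hypothetical identifiers for one person.
same_as = [
    ("http://dbpedia.org/resource/F._Scott_Fitzgerald",
     "http://viaf.org/viaf/00000000"),
    ("http://viaf.org/viaf/00000000",
     "http://id.loc.gov/authorities/names/n00000000"),
]

parent = {}

def find(x):
    """Find the representative identifier for x (with path compression)."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    """Declare a and b to be the same entity."""
    parent[find(a)] = find(b)

for a, b in same_as:
    union(a, b)

# All three URIs now resolve to a single representative entity.
uris = {u for pair in same_as for u in pair}
print(len({find(u) for u in uris}))  # 1
```

The same grouping is what lets a page like the DBpedia entry list every equivalent identifier for F. Scott Fitzgerald.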
http://sameas.org There are even resources to help.
Here you see the owl:sameAs references for the dbpedia page for F. Scott Fitzgerald.
Now we're going to look at some of the pioneers in linked data. One project, Schema.org, is a collaboration between Google, Bing, Yahoo, and Yandex.
Facebook's knowledge graph is an example of linked data in use. Think about it: in Facebook, every time you "like" something or "friend" someone, you've just established an RDF triple. You, like, thing. Jane, is friend of, Tom. This is the data feeding the knowledge graph.
For instance, here are musicians my friends have "liked".
You can even dig further.
In order for us to make use of the data for our own purposes, the data must be open. Social media is notoriously in silos. Many library vendors working in linked data are also doing so in a silo. You can only use that data if you pay for it. The goal of the linked data movement is open data that’s available to all to use as needed.
Google knowledge graph. Already seeing some changes to this. So far, the book links go to the Google Books project.
Those of us in library cataloging will recognize what Google is doing here as defining specific “works”. Is Google FRBR-izing before libraries?
Wikipedia has a tremendous amount of data behind it.
http://dbpedia.org/page/F._Scott_Fitzgerald You can see this data when you view it in DBpedia.
For instance, if you would like to see other people associated with the Lost Generation movement, DBpedia can do that for you.
As I mentioned earlier, the Google knowledge graph does not lead easily back to libraries. For something as popular and widespread as the “Hero with a Thousand Faces” you do not see a link to library resources anywhere on the first page of hits.
OCLC has done quite a bit of work with linked data. You can see it if you search for something more obscure. This is a title in my library’s holdings. With something this obscure, WorldCat is the first hit.
This exposure is partially achieved by WorldCat’s incorporation of schema.org. You can see this by expanding the “Linked Data” section at the bottom of the WorldCat results.
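For flavor, here is roughly the kind of schema.org description a page can embed as JSON-LD, produced with nothing but Python's json module. The field values are illustrative only; they are not copied from an actual WorldCat record.

```python
import json

# A hand-made schema.org "Book" description, serialized as JSON-LD.
book = {
    "@context": "http://schema.org",
    "@type": "Book",
    "name": "Desert Solitaire",
    "author": {"@type": "Person", "name": "Edward Abbey"},
    # Example only: a pointer to one edition of the work.
    "workExample": {"@type": "Book", "bookFormat": "http://schema.org/Paperback"},
}

jsonld = json.dumps(book, indent=2)
print(jsonld)
```

Markup like this, embedded in the page, is what lets search engines read catalog records as data rather than as text.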
So, how do libraries make use of linked data and catch up to the power players on the Web?
A lot of library data has already been published as linked data.
The Library of Congress is taking linked data and the open web environment seriously. Let’s look at a few of the projects LOC is doing.
Viewshare is a LOC tool that offers libraries a way to publish some of their collections online, augment the data with linked data sets, and create various views and visualizations of that data. You may request a free account at viewshare.org.
You can upload your data in various forms.
You can add fields that are augmented with open data sets. Dates/times and geographic locations are good examples of fields that can be augmented.
With the geographic information, you can then create map views. Dates can be used to create timeline views.
With image collections, you can generate galleries. You can also insert simple widgets like the search box and list on the left. Tag clouds are another option. Viewshare can be used by libraries with limited resources to publish their unique collections online.
Most catalogers will be familiar with LC’s Bibframe initiative. This is the project working on a replacement for MARC. The goal is a tool that will represent bibliographic data in a way that opens it up to the web and utilizes linked data to better expose library resources.
bibframe.org The site has lots of good information. A good place to start learning about Bibframe is the model primer document, available in the getting started section. We're going to look at the information and resources this site makes available.
Here are the four main classes of the Bibframe model.
The Bibframe vocabulary is a work in progress. It is published on this site, and the goal is to soon have a stable vocabulary.
You can also find working documents that discuss the development of the Bibframe model.
We’re going to focus on the resources available in the “tools” tab.
I believe these tools are invaluable for catalogers. They offer a way for us to begin thinking of our records in terms of Bibframe. First, we'll look at the comparison service. To use this, all you need is the 001 field from an LC bibliographic record.
I have located an 001 from the Library of Congress catalog.
You just paste that 001 number into the comparison tool and search.
Here’s the same record in MARC/XML. We can click at the top on “Bibframe RDF/XML” to view it in RDF.
Here we have the catalog record of the future. But before we look too closely at this, let’s refresh our memories about the Bibframe structure.
A Bibframe work is basically equivalent to what we've come to think of in FRBR terms as both works and expressions. A Bibframe instance roughly equates to both manifestations and items.
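One way to internalize that split is to model it as a toy data structure: one Work, many Instances. The class and field names below are my own shorthand for illustration, not the official BIBFRAME vocabulary, and the identifiers are placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    """Roughly the manifestation/item level: one published form of a work."""
    format: str       # e.g. "print", "ebook", "audiobook"
    identifier: str   # placeholder identifier, e.g. an ISBN or OCLC number

@dataclass
class Work:
    """Roughly FRBR work + expression: the creative content itself."""
    title: str
    creator: str
    instances: list = field(default_factory=list)

# One work, several formats held by a library.
doomsday = Work("Doomsday Book", "Connie Willis")
doomsday.instances.append(Instance("print", "isbn:placeholder"))
doomsday.instances.append(Instance("audiobook", "oclc:placeholder"))

print(len(doomsday.instances))  # 2
```

The payoff of the split: edition-level facts hang off the instance, while the title and creator are stated once on the work and shared by every format.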
We can now tease out the parts of the Bibframe model in our comparison example. Keep in mind that the goal is to think of this less as a record and instead as an aggregate of statements.
Now, I’m going to show you how to take your own MARC records and convert them into Bibframe using the transformation service.
I started by creating a list in my local ILS. We have a local collection of materials by the author Connie Willis. I created a list of various formats of her novel “Doomsday Book.”
I then converted that file into MARC/XML using the freely available MarcEdit software.
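For readers who want to poke at MARC/XML output like MarcEdit's programmatically, here is a short Python sketch using the standard library's ElementTree. The embedded record is a hand-made fragment for illustration, not an actual export.

```python
import xml.etree.ElementTree as ET

# A tiny hand-made MARC/XML fragment (author in 100, title in 245).
MARCXML = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="100" ind1="1" ind2=" ">
    <subfield code="a">Willis, Connie</subfield>
  </datafield>
  <datafield tag="245" ind1="1" ind2="0">
    <subfield code="a">Doomsday book /</subfield>
    <subfield code="c">Connie Willis.</subfield>
  </datafield>
</record>"""

NS = {"m": "http://www.loc.gov/MARC21/slim"}
root = ET.fromstring(MARCXML)

def subfield(tag, code):
    """Return the first matching subfield value, or None if absent."""
    el = root.find(f"m:datafield[@tag='{tag}']/m:subfield[@code='{code}']", NS)
    return el.text if el is not None else None

print(subfield("245", "a"))  # Doomsday book /
print(subfield("100", "a"))  # Willis, Connie
```

Pulling fields out this way is the first step of any home-grown MARC-to-linked-data transformation.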
Here’s what you get in MarcEdit.
Click on the option here to “Paste MARC/XML”
Copy your data from MarcEdit.
Paste it into the transformation tool window.
The tool will generate a URL that you will be able to use for at least “a few days.”
Here you see the first bit of the generated view. We have a collection title that Bibframe is pulling out first. Remember, it is a work in progress.
Bibframe has generated several “work” records, here it looks like they’re coming from the series statements. You can click on any one of these to see the RDF behind that piece of information.
Here you see an instance associated with a Bibframe work.
You may have noticed the stars beside some of the data lines. This is where we already begin to see some linked data capability. If you click on the star, you will see this bit of data in LC’s linked data service, id.loc.gov.
id.loc.gov Notice our URI for Connie Willis.
If you click on the information itself, you will see the RDF for that piece of data.
You see what we’ve been led to expect in a linked data world. Notice the URI from id.loc.gov.
If you'd like to try the transformation service and don't have a list of your own, there is a sample set of records provided on the "contribute" tab. The link to join the Bibframe listserv is also here, as well as some tools for the coders among us.
To wrap up, I'd just like to showcase some library-related projects that demonstrate how awesome open data can be. These visualizations are fantastic examples of linked data in action. The first is a project from Stanford. The map and timeline are fed by data from the Library of Congress's Chronicling America collection.
This data visualization is from the Kansas City Public Library. It displays data from their Civil War on the Western Border collection.
I don't believe we can assume that this is something we won't have to "worry about" for a long time to come. The growth of the Web and the rate of development of Web 2.0 is exponential. If we don't move, and move rapidly, libraries are going to be left behind.
We’ve all seen these linked data diagrams. This is March 2009.
This is two years later in 2011. What does it look like now?