The presentation discusses the FLUOR project, which integrates the Sakai learning management system with the Fedora digital repository so that researchers can share and access research data. It gives an overview of the project's goals and history, the architecture built around Fedora, Solr, and the FLUOR tool, and the features for searching, browsing, and uploading data. It closes with the project's status, noting bugs being addressed in the user interface, in search, and in the integration with Solr.
The document discusses the ten key principles of Cataloging Cultural Objects (CCO) for cataloging objects in museum collections. It focuses on Principle 1, which is to establish logical relationships between work, collection, and image records. An example is provided of a stereoscope card depicting the Taj Mahal that would have collection, work, and image records linked together. Relationships can be indicated through record links or controlled vocabularies. Principle 2 is also covered, which is to include all required CCO elements in records, such as title, creator, and date. CCO elements are aligned with standards like VRA Core.
The Library of Congress engaged in linked data efforts starting in 2009 and created its Linked Data Service. It contracted with Zepheira to develop the initial BIBFRAME model and vocabulary 1.0 with input from early experimenters. The Library of Congress conducted a pilot of BIBFRAME from October 2015 to March 2016 with 40 staff cataloging in both MARC and BIBFRAME. The pilot helped develop BIBFRAME and identified areas for improvement. The Library of Congress will continue to refine BIBFRAME 2.0 and conduct additional testing.
This document summarizes Corey Harper's presentation on Linked Open Data at the Penn Humanities Forum in 2014. The presentation introduced key concepts of the semantic web such as using URIs to identify resources and linking data through relationships. It provided examples of large linked open data projects including DBpedia and the Google Knowledge Graph. The presentation also discussed using linked data to provide additional context and narratives about cultural heritage collections through users' stories and scholars' interactions with archival materials. Harper envisioned linked open data interfaces that aggregate data from multiple sources to provide richer discovery experiences for users.
NISO Webinar:
Experimenting with BIBFRAME: Reports from Early Adopters
About the Webinar
In May 2011, the Library of Congress officially launched a new modeling initiative, the Bibliographic Framework Initiative, as a linked data alternative to MARC. The Library then announced the proposed model, called BIBFRAME, in November 2012. Since then, the library world has been moving from mainly theorizing about the BIBFRAME model to practical experimentation and testing. This experimentation is iterative and continues to shape the model so that it becomes stable and broadly acceptable enough for adoption.
In this webinar, several institutions will share their progress in experimenting with BIBFRAME within their library system. They will discuss the existing, developing, and planned projects happening at their institutions. Challenges and opportunities in exploring and implementing BIBFRAME in their institutions will be discussed as well.
Agenda
Introduction
Todd Carpenter, Executive Director, NISO
Experimental Mode: The National Library of Medicine and experiences with BIBFRAME
Nancy Fallgren, Metadata Specialist Librarian, National Library of Medicine, National Institutes of Health, US Department of Health and Human Services (DHHS)
Exploring BIBFRAME at a Small Academic Library
Jeremy Nelson, Metadata and Systems Librarian, Colorado College
Working with BIBFRAME for discovery and production: Linked data for Libraries/Linked Data for Production
Nancy Lorimer, Head, Metadata Dept, Stanford University Libraries
The document discusses key principles for cataloging cultural objects using Cataloging Cultural Objects (CCO). It summarizes Principle 3 to follow CCO rules and make additional local rules to effectively retrieve, repurpose, and exchange data. It then discusses using controlled vocabularies, creating local authorities, using metadata standards, understanding different cataloging functions, and consistently establishing relationships between works. The document stresses planning for growth when adding new objects and relationships over time.
This document provides an overview of archives, archival description standards, and finding aids. It defines what archives are, distinguishing them from libraries. It describes the archival mission to identify, preserve, and provide access to materials of enduring value. Key aspects covered include the Descriptive Archival Content Standard (DACS), the Encoded Archival Description (EAD) standard for encoding archival finding aids, and how EAD maps to MARC21 fields. The document compares the differences between libraries and archives and outlines the core elements in DACS for archival description.
This presentation was given by Ted Lawless of Thomson Reuters during the NISO Virtual Conference, BIBFRAME & Real World Applications of Linked Bibliographic Data, held on June 15, 2016.
Marist College initiated a project to integrate the XWiki wiki platform with their Sakai LMS. They evaluated over a dozen wiki products before selecting XWiki due to its features and extensibility. The integration uses XWiki's REST API and rendering engine in an iframe within Sakai. It allows Sakai users and roles to manage wiki access and permissions while taking advantage of XWiki's functionality. Future work includes additional testing and integrating other Sakai tools.
This document describes LODeX, a tool for exploring and querying Linked Open Data (LOD) sources. LODeX aims to make LOD discovery and consumption easier for both skilled and unskilled users. It has two main modules: an extraction and summarization module that analyzes LOD datasets and generates schema summaries; and a visualization and querying module that allows users to browse schema summaries and build visual queries without SPARQL knowledge. The visual queries are compiled into SPARQL and executed over LOD endpoints. LODeX indexes information from the LOD cloud and aims to provide a standardized way to understand LOD dataset structures and query LOD sources.
Slides prepared for a guest appearance at Jane Greenberg's metadata class at the University of North Carolina, Chapel Hill. Delivered Monday, Dec. 6, 2010.
IFLA LIDASIG Open Session 2017: Introduction to Linked Data – Lars G. Svensson
At the IFLA Linked Data Special Interest Group open session in Wroclaw we briefly introduced the mission of the SIG and then went on to a brief introduction to what linked data is and why that topic is important to libraries.
The presentation was held jointly by Astrid Verheusen (general introduction to the SIG) and Lars G. Svensson (introduction to Linked Data)
Linked data presentation for libraries (COMO) – robin fay
The document provides an overview of linked data and libraries. It discusses basic principles of linked data such as reusing and linking data to make it reusable, easy to correct, and potentially useful to others. The document also discusses how linked data fits into the semantic web vision by allowing machines to better understand and utilize data. Finally, it discusses getting started with linked data through terminology, advantages, and modeling library data in linked data formats like RDF.
Stefano Cossu, The Art Institute of Chicago - Open Repositories 2014 presenta... – Stefano Cossu
Stefano Cossu is a data and application architect for the Art Institute of Chicago's Collections.
In this presentation, screened at the Open Repositories 2014 conference, he explains the Museum's long-range plan for Digital Asset Management leveraging the new features offered by the Fedora 4 open source repository system (https://wiki.duraspace.org/display/FF).
Presentation can be viewed on vimeo: https://vimeo.com/98736678
UCSF Profiles provides a searchable database of over 2,600 UCSF faculty profiles populated with publicly available data like publications from PubMed. It is part of a national effort to enable research networking across institutions. UCSF is leading a project for national research networking involving 15-20 other institutions using Profiles and other tools. A pilot launch is anticipated in January 2011 to showcase an aggregated federated search across participating institutions.
Charleston 2012 - The Future of Serials in a Linked Data World – ProQuest
The educational objective of this session is to review today’s MARC-based environment in which the serial record predominates, and compare that with what might be possible in a future world of linked data. The session will inspire conversation and reflection on a number of questions. What will a world of statement-based rather than record-based metadata look like? What will a new environment mean for library systems, workflows, and information dissemination?
The document discusses several projects related to open metadata and linked data including:
1. The AIM25 project which aggregates archive descriptions from 123 partners and aims to test the value of linked data.
2. The COMET project which is releasing a large subset of bibliographic records under an open license and working to convert them to linked open data.
3. The Jerome project which harvests and unifies data from several library systems, supplements it with open data, and provides fast search APIs.
The Semantic Web and Libraries in the United States: Experimentation and Achi... – New York University
This presentation reflects the paper titled "The Semantic Web and Libraries in the United States: Experimentation and Achievements," published in the proceedings of 75th IFLA General Conference and Assembly, Satellite Meeting: Emerging Trends in Technology: Libraries between Web 2.0, Semantic Web and Search Technology 8/19-20/2009, in Florence, Italy, presented by Sharon Yang, Rider University, Yanyi Lee, Wagner College, and Amanda Xu, St. John's University. Here is the URL to the full paper: http://www.ifla2009satelliteflorence.it/meeting3/program/assets/SharonYang.pdf
This document summarizes the development of a self-organizing repository for fusion science data at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory. The repository was designed to manage the large and diverse data generated by NIF experiments in a way that supports data discovery, analysis, and collaboration over a 30-year retention period. Key features of the repository include a taxonomic data model, database storage with analysis linkages, a viewer interface, data suitcases for offline analysis, and integration with a wiki for discussions.
Do the LOCAH-Motion: How to Make Bibliographic and Archival Linked Data – Adrian Stevenson
Presentation given at the Dev8d Developer Days event at the University of London Students Union, London, UK on 15th February 2011.
The talk was primarily aimed at developers with the assumption that they knew a bit about RDF and Linked Data, so it doesn’t discuss these except in passing. I was mainly trying to give some specifics on the technicalities involved, and what platforms and tools we’re using, so people can follow the same path if they wanted.
More info at http://blogs.ukoln.ac.uk/locah/2011/02/14/locah-lightening-at-dev8d/ and http://wiki.2011.dev8d.org/w/Session-L18
agINFRA work on germplasm and soil Linked Data by Luca Matteus, Giovanni L’Ab... – CIARD Movement
Presentation delivered at the Agricultural Data Interoperability Interest Group -- Research Data Alliance (RDA) 4th Plenary Meeting -- Amsterdam, September 2014
“Publishing and Consuming Linked Data. (Lessons learnt when using LOD in an a... – Marta Villegas
This document discusses lessons learned from using linked open data in applications. It describes converting metadata from a language resource catalogue into RDF triples, including resolving complex instances like people and organizations. Issues addressed include data enrichment, linking to external datasets, and making implicit relations explicit. The goals of displaying comprehensive data to users and aggregating external data sensitively are discussed.
Linked Data in a University Context: Publication, Applications and Beyond
The Open University (OU) is exposing its data as linked open data to make it more transparent, reusable and discoverable both internally and externally. This includes data about courses, research outputs, library resources and more. By linking its data to other university and external datasets, the OU aims to create new applications and make existing processes more efficient. Other universities in the UK and worldwide are now following the OU's example in publishing institutional data as linked open data.
Presentation at ELAG 2011, European Library Automation Group Conference, Prague, Czech Republic. 25th May 2011
http://elag2011.techlib.cz/en/815-lifting-the-lid-on-linked-data/
What do MARC, RDF, and OWL have in common? – Violeta Ilik
It is understood that in the current library ecosystem, catalogers must be willing to adapt to the new semantic web environment while keeping in mind the crucial library mission of providing efficient access to information. How can catalogers transform their jobs to enable library users to retrieve information more effectively in the age of the semantic web?
Researchers have argued that catalogers have the fundamental skills to successfully work with and repurpose the metadata originally created for use in traditional library systems by utilizing various programming languages. In the new environment their jobs will require new tools and new systems, but the basic skills of organizing information, knowledge of commonly used access points, and an ever-growing knowledge of information technology systems will remain the same. This presentation will stress the role of catalogers in bringing data silos down and in merging, augmenting, and creating interoperable data that can be used not just in library-specific systems but in various other systems. Catalogers' indispensable knowledge of controlled vocabularies, authority aggregators, metadata creation, metadata reuse, taxonomies, and data stores makes it all possible.
We will demonstrate how catalogers’ knowledge can be leveraged to design an institutional repository and/or a researchers profiling system, create semantic web compliant data, create ontologies, utilize unique identifiers, and (re)use data from legacy systems.
2. Overview
- Project goals & drivers
- History of the project
- Short walkthrough
- Overall architecture
- The features of the FLUOR tool
- Setting up a collection
- Project status
12th Sakai Conference – Los Angeles, California – June 14-16
3. Project Goals
- Make researchers share research data in a controlled community
- Integrate Sakai with the Fedora content repository
- Support searching and browsing
- Support different access models: open vs. closed
4. Project Drivers
- UvA Library: make researchers aware of the importance of sharing data; making research data publicly available is increasingly a requirement rather than a wish
- Publishing: support publications by disseminating the underlying research data
- Teaching: have students work with actual research data
5. A bit of history
And now...
6. Project history: testweeklab
- 'Testweeklab' project (2008): work with 40 years of privacy-sensitive research data
- Strong security requirements: only the metadata is (publicly) accessible; complicated procedure for accessing the actual data
- Very specific metadata schema and search and browse requirements, with very specific fields (year, N, type of test, scale)
- Built as a Sakai tool for connecting to a Fedora repository
7. Project history: next steps
Findings from testweeklab:
- User interaction improvements
- Configurability
- Support more types of usage: make the tool flexible enough to support different collections and types of use, make the access model flexible, and support personalization
8. A short functional walkthrough
And now…
15. Overall architecture: components
- Fedora: acts as the content repository
- Generic search: handles the updates and transformations to Solr
- Solr: indexing; provides the search and browse interface
- Sakai FLUOR tool: the UI for researchers to work with
16. Fedora
- Content repository: content managed as data objects
- Unique identifier: PID
- Metadata and datastreams
- Relations between objects
- Virtual datastreams
- Versioning, logging
- Multiple collections
- Objects handled as XML (FOXML)
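To make the data-object model concrete, here is a small sketch of how a client might address Fedora 3.x objects and datastreams over its REST API. The base URL, PID, and datastream ID below are illustrative, not taken from the presentation.

```python
from urllib.parse import quote

# Hypothetical base URL; a real deployment would differ.
FEDORA_BASE = "http://localhost:8080/fedora"

def object_url(pid):
    """URL of an object's full FOXML export (Fedora 3.x REST API)."""
    return f"{FEDORA_BASE}/objects/{quote(pid, safe='')}/objectXML"

def datastream_url(pid, dsid):
    """URL of one datastream's content (a metadata record, a data file, ...)."""
    return f"{FEDORA_BASE}/objects/{quote(pid, safe='')}/datastreams/{dsid}/content"

# A PID such as "fluor:42" uniquely identifies the object; its datastreams
# hang off it under their own IDs, e.g. "DC" for the Dublin Core record.
print(object_url("fluor:42"))
print(datastream_url("fluor:42", "DC"))
```

Everything in the repository, including the metadata, is reachable this way, which is what lets a separate indexer like generic search fetch objects on its own.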
17. Fedora generic search
- Enables browsing and search with Lucene, Solr, and Zebra
- Gets notifications about updates from Fedora and fetches the objects
- XSLT transforms FOXML into documents for the search engine
- REST and SOAP interfaces; search and browse based on SRW/SRU
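The XSLT step can be approximated in a few lines of Python: take a FOXML-like export, pull out the PID and the Dublin Core fields, and emit a Solr add document. The snippet below is a simplified stand-in for a real Fedora export, shown only to illustrate the transformation.

```python
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

# Minimal, made-up FOXML-like export with an inline Dublin Core record.
foxml = """<digitalObject PID="fluor:42"
  xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Test week data 1968-2008</dc:title>
  <dc:creator>UvA Library</dc:creator>
</digitalObject>"""

def to_solr_doc(foxml_text):
    """Build a Solr <add><doc> document from the PID and the DC fields,
    mirroring what the XSLT stylesheet in generic search does."""
    obj = ET.fromstring(foxml_text)
    add = ET.Element("add")
    doc = ET.SubElement(add, "doc")
    ET.SubElement(doc, "field", name="PID").text = obj.get("PID")
    for el in obj.iter():
        if el.tag.startswith(DC):
            name = el.tag[len(DC):]  # e.g. "title", "creator"
            ET.SubElement(doc, "field", name=name).text = el.text
    return ET.tostring(add, encoding="unicode")

print(to_solr_doc(foxml))
```

The resulting document can be POSTed to Solr's update handler; the real stylesheet also maps the collection-specific fields (year, N, type of test, scale).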
18. Solr
- Search engine built on top of Lucene
- Easy to deploy and configure
- Advanced full-text searching and indexing
- Open interfaces: REST, JSON, XML
- Admin interfaces
- Plugin architecture
19. The FLUOR tool
And now…
20. FLUOR tool features
- Access research data: search and browse; access items in the repository
- Create favorites
- Upload new items: added directly, depending on the security model
21. FLUOR tool features: data access security model
- Metadata is always accessible
- Access to datastreams is limited:
  - Open: no restrictions
  - Request-based: the user creates a request, and an admin reviews it
22. FLUOR tool features
23. FLUOR tool features
- The access model is configurable: open or request-based, configurable per object
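The open vs. request-based access model described on these slides can be sketched as a single decision function. The data shapes here (a dict per object, a set of approved (user, PID) pairs) are assumptions for illustration; the actual FLUOR implementation may differ.

```python
# Sketch of the access decision, under assumed data shapes:
# each object carries a per-object "open" or "request" setting,
# and admins maintain a set of approved (user, PID) requests.
def can_read(user, obj, part, approved_requests):
    """Metadata is always readable; datastream access depends on the
    object's access model ("open" or "request")."""
    if part == "metadata":
        return True
    if obj["access_model"] == "open":
        return True
    # Request-based: an admin must have approved this user for this object.
    return (user, obj["pid"]) in approved_requests

obj = {"pid": "fluor:42", "access_model": "request"}
approved = {("alice", "fluor:42")}

assert can_read("bob", obj, "metadata", approved)        # metadata: always
assert not can_read("bob", obj, "datastream", approved)  # no approved request
assert can_read("alice", obj, "datastream", approved)    # request approved
```

Because the check is per object and per datastream, open and request-based material can live side by side in the same collection.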
24. FLUOR tool features
- Support for versioning: enables the download of previous versions; configurable
25. FLUOR tool features
- Data encryption: the ability to encrypt datastreams, so backups and the like pose no threat to privacy; configurable per datastream
26. Setting up a repository
27. Setting up a collection
- Describe the collection: what datastreams are there? what metadata is there?
- Set up the Fedora repository: configure the Fedora data model
- Set up generic search and Solr
- Configure the FLUOR tool
28. Setting up a collection
29. The status of the project
And now…
30. Project status
- The project is currently being tested by targeted end users
- Common UI bugs and issues
- Search and indexing have problems
31. Solr vs. generic search
- Generic search 2.2: browse functionality is broken with Solr; it does not use facet browsing but instead accesses the Lucene index on the file system
- Browsing is not limited to the collection, so results are polluted
- Solution: access Solr directly instead of going through generic search
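The proposed fix, querying Solr directly with results filtered to one collection and facets enabled, might look like the sketch below. The core URL and the field names (collection, year, test_type) are hypothetical, chosen to match the testweeklab fields mentioned earlier.

```python
from urllib.parse import urlencode

# Sketch of the direct-Solr approach: a search limited to one collection
# with facet browsing, instead of going through generic search.
def solr_select(base_url, collection, q="*:*", facet_fields=()):
    params = [
        ("q", q),
        ("fq", f"collection:{collection}"),  # keep results inside one collection
        ("wt", "json"),
    ]
    if facet_fields:
        params.append(("facet", "true"))
        params += [("facet.field", f) for f in facet_fields]
    return f"{base_url}/select?{urlencode(params)}"

url = solr_select("http://localhost:8983/solr/fluor", "testweeklab",
                  q="year:1968", facet_fields=("year", "test_type"))
print(url)
```

The fq filter query is what stops results from being "polluted" by other collections, and facet.field gives the browse counts that the file-system Lucene approach lacked.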
32. Any questions?
And finally…
33. Thank you!
Editor's Notes
Make researchers aware of the importance of sharing data. Sharing data is becoming a requirement rather than a wish. Support publications.
The role of the library is to support researchers in managing and storing data.