Trusted Datagrids: Library of Congress Projects with UCSD
  • This presentation discusses a project between the Library of Congress and SDSC. The focus of the project has been on “trust building” between the organizations. ***[THEME OF THE PROJECT: Exploring large-scale data transfer and storage, and developing trust in the ability of an outside party to reliably store content.] One of the collections we’ve been working with from the LC is the Prokudin-Gorskii photograph collection from Prints and Photographs. I’m going to be using some of the characters from this collection to help me tell my story.
  • As they moved into the realm of large-scale data, the LC found themselves wandering into unmarked territory. It can be a real wilderness out there! [Careful not to portray us as lost and clueless. You don’t want to be accidentally insulting.] [Again, the theme is: Exploring large-scale data transfer and storage, and developing trust in the ability of an outside party to reliably store content.]
  • Of course, this isn’t just a problem for the LC. There are a number of organizations facing the same problems – and wandering in the same wilderness. How can they decide what direction to take without an accurate map?
  • This is Pinchus the boatman, and he’s one of the most famous people from the P-G collection. And of course, there are plenty of people like Pinchus – people who claim they can help. They even claim to have the tools needed to move forward. But the question remains: how can you trust someone you can’t control? The problem is that these promises are often just like old photographs – they look clear and well-defined at a distance…
  • But, as you get closer and closer, the picture gets fuzzier and fuzzier.
  • Self-explanatory.
  • At SDSC we’re focused on bringing forward the right combinations of tools and experts – cyberinfrastructure. [Points to make: “We began our project together with a set of scenarios created by the LC, describing, as best as LC could imagine, tests and tasks for SDSC to prove that LC’s content (hosted at SDSC) was safe and secure.” “The first step for SDSC was to help LC create a solid cyberinfrastructure.”]
  • For example.
  • [Elaborate just a little more on what it took to set this up: your remote access to LC, Andy’s role in tweaking the internal network, the network monitoring, authentication at your end.] Another famous face from the P-G collection is the Emir of Bukhara. Today the Emir is going to stand in for the LC’s data. One of the services we provided was high-speed replication across the country. We did this by checksumming the data first, shipping it across the country over Internet2, then checking it upon arrival. We then did the opposite and shipped the data back, again checking it upon arrival. All in all we achieved good transfer speeds, and we hope to improve this service as we continue to work with the LC.
  • At the same time we created a safe haven for the data, both within SDSC and across the country.
  • And of course we provided all of the enhanced services you would expect to keep your data safe. [You’ll elaborate, right? People will want to know specific details, probably culminating in the server report emails.]
  • We also created a demonstration for a new tool that the LC could use to access and manipulate their data. […if they chose to. “Ultimately LC decided they did not want to manipulate their data, but only to have SDSC store it safely.”]
  • [Will you talk about the SAM-QFS outage? It proved SDSC’s reliability and highlighted “accessibility” as a separate issue.] [“Duplication of structure is complicated.” Are you going to talk about the Prokudin-Gorskii symbolic links issue here?]
  • Another LC collection we worked with was a 6 TB web archive collection. We were tasked with working with the collection in new and innovative ways. We modified the original software to create a FAST version that still looked like the default to users. (In fact, the Internet Archive liked our changes so much that they rolled some of them back into the application.)
  • This was not acceptable, especially since we were working under a deadline to complete the work.
  • Here is where I need to talk about how we did this. Can I get some more information for this slide? How many physical machines was this on? Was anything special done on the software side to get this to happen? Was there any custom configuration that needed to happen?
  • Finally, we’re nearing the end of the project and are working with the LC to come up with documentation based on our work that could be used by anyone in NDIIPP. This should provide a starting place for people looking to do the same kinds of tasks.
  • Overview: Providers – NCAR, UMd, SDSC, UCSD Libraries. Partners (in this case, clients) – ICPSR, CDL.
  • [Are you going to be more specific? You address specifics of our projects (transfer of data over Internet2, description of storage and system-monitoring environment). In what specific way does Chronopolis compare?]
  • Self-explanatory. [And please summarize exactly how the LC validated trust in SDSC]
  • Self-explanatory. [In what way? In a review sense?]
  • Self-explanatory. [I missed the point. Could you summarize how Cyberinfrastructure applies specifically to the elements of our project? This could be as simple as reviewing the Internet2 network setup, the SDSC storage system, system reporting, and…you never mentioned geographic data replication. If you are going to end on “trust” on the next slide, make sure you make a relevant and specific summary statement here about how SDSC demonstrated trustability.]
  • Self-explanatory.

Presentation Transcript

  • Trusted Datagrids: Library of Congress Projects with UCSD – Ardys Kozbial, UCSD Libraries; David Minor, SDSC
  • Building Trust in a 3rd Party Repository: A Pilot Project – David Minor, San Diego Supercomputer Center
  • How can the LC trust someone they can’t control?
  • Moving forward in the right direction requires more than fuzzy promises
  • … it takes a combination of experts and tools: Cyberinfrastructure
  • Cyberinfrastructure is the collection of ... Resources + Glue. Resources: computers, data storage, networks, scientific instruments, experts, etc. Glue: integrating software, systems, and organizations.
  • “Effective cyberinfrastructure for the humanities and social sciences will allow scholars to focus their intellectual and scholarly energies on the issues that engage them, and to be effective users of new media and new technologies, rather than having to invent them.” - ACLS Commission on Cyberinfrastructure for the Humanities & Social Sciences
    • “The mission of the San Diego Supercomputer Center (SDSC) is to empower communities in data-oriented research, education, and practice through the innovation and provision of Cyberinfrastructure”
    • SDSC ...
    • Is one of the original NSF supercomputer centers
    • Supports high performance computing systems
    • Supports data applications for science, engineering, social sciences, cultural heritage institutions
    • Has LARGE data capabilities
        • 3+ PB Disk Storage
        • 25+ PB Tape Storage
  • UCSD Libraries
      • 3.5+ million volumes
      • Digital Access Management System (in development)
        • 250,000+ objects
        • 15+ TB
      • Shared collections with UC
        • California Digital Library
          • Digital Preservation Repository
          • eScholarship repository
  • Partnerships and Collaborations
    • LC Pilot Project – Building Trust in a 3rd Party Repository
      • Using test image collections/web crawls, ingest content into the SDSC repository
      • Allow access for content audit
      • Track usage of content over time
      • Deliver content back to LC at end of project
    • Library of Congress NDIIPP Chronopolis Program
      • Build Production Capable Chronopolis Grid (50 TB x 3)
      • Further define transmission packaging for archival communities
      • Investigate best network transfer models for I2 and TeraGrid networks
    • California Digital Library (CDL) Mass Transit Program
      • Enable UC System Libraries to transfer high-speed mass digitization collections across CENIC/I2
      • Develop transmission packaging for CDL content
    • UCSD Libraries’ Digital Asset Management System
      • RDF System with data managed in SRB at SDSC
  • SDSC DPI Group
    • Digital Preservation Initiatives Group
      • Charged with developing and supporting digital preservation services within the Production Systems Division of SDSC.
      • http://dpi.sdsc.edu
      • Cross-Organizational Group
        • SDSC Personnel/UCSD Libraries Personnel
          • Libraries
          • Archives
          • Technology
          • Information Science
  • Cyberinfrastructure Trust
  • For Example:
  • We worked together to set up high-speed data replication services: checksums computed at each end, transfer over highly reliable Internet2, achieving 200 Mb/s (roughly 2 TB/day). A minimal sketch of the pattern follows.
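A minimal sketch of that checksum-then-transfer-then-verify pattern, in Python. The staging and archive paths are hypothetical, and the transfer step itself (e.g. GridFTP over Internet2) is elided; the point is the verification discipline at both ends.

```python
# A minimal sketch of checksum-then-transfer-then-verify,
# assuming hypothetical staging/archive paths.
import hashlib
from pathlib import Path

def sha256sum(path: Path) -> str:
    """Digest a file in 1 MB chunks so large images don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Checksum every file under root before shipping it."""
    return {str(p.relative_to(root)): sha256sum(p)
            for p in root.rglob("*") if p.is_file()}

def verify(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return relative paths whose post-transfer checksums don't match."""
    return [rel for rel, expected in manifest.items()
            if sha256sum(root / rel) != expected]

manifest = build_manifest(Path("/staging/prokudin-gorskii"))  # before sending
# ... ship the data across the country (transfer tool elided) ...
mismatches = verify(Path("/archive/prokudin-gorskii"), manifest)  # on arrival
if mismatches:
    raise RuntimeError(f"checksum mismatch after transfer: {mismatches}")
```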
  • Network setup involved …
    • LC and SDSC staff working together
    • Configurations on networks and computers
    • Resolving different security environments
    • Network monitoring
  • Lessons Learned: Networking is hard! You can’t forget it once it’s set up. It’s not magic – there’s always a reason. It highlights the collaborative nature of the work.
  • Trust Elements: Has a long-term solution been found? Have multi-institutional issues been solved? Does new infrastructure improve the process? Is the solution useful for other organizations?
  • SDSC created a robust storage environment for this data: multiple replications, both at SDSC and at geographically diverse locations.
  • (a process with several characteristics)
    • Needed to replicate structure exactly
    • This had to be done for 5+ replications
    • Complex environment had to be transparent
    • Data had to be available for manipulation
    • The Storage Resource Broker provided replication services ...
  • ... and extensive monitoring, logging and reporting functions (which led to many conversations)
  • Logging and monitoring procedures
    • Scripts compared the files within the system against a master list, checking for changes on either side … fairly straightforward (a minimal version is sketched after this slide).
    • But …
    What is the master list and who maintains it? Who decides what is a legitimate change? Do you want a dark archive or an active remote data center?
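As a concrete illustration, here is a minimal sketch of such an audit script. It reports files missing from disk, files present but unlisted, and files whose content has changed; the manifest format (checksum, tab, relative path) and the paths are illustrative assumptions, not the project's actual tooling.

```python
# A minimal sketch of the master-list audit described above.
import hashlib
from pathlib import Path

def load_master_list(manifest: Path) -> dict[str, str]:
    """Parse 'checksum<TAB>relative/path' lines into {path: checksum}."""
    entries = {}
    for line in manifest.read_text().splitlines():
        checksum, rel = line.split("\t", 1)
        entries[rel] = checksum
    return entries

def audit(root: Path, master: dict[str, str]) -> None:
    on_disk = {str(p.relative_to(root)) for p in root.rglob("*") if p.is_file()}
    listed = set(master)
    for rel in sorted(listed - on_disk):
        print(f"MISSING    {rel}")   # in the master list, gone from disk
    for rel in sorted(on_disk - listed):
        print(f"UNEXPECTED {rel}")   # on disk, never recorded in the list
    for rel in sorted(listed & on_disk):
        if hashlib.sha256((root / rel).read_bytes()).hexdigest() != master[rel]:
            print(f"CHANGED    {rel}")  # content differs from the manifest

audit(Path("/archive/lc-collection"),
      load_master_list(Path("/archive/lc-collection.manifest")))
```

The open questions on the slide are exactly what such a script cannot answer by itself: who maintains the master list, and who decides which changes are legitimate.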
  • We tested a new Front-End
  • … and explored an important issue: “Reliability” versus “Accessibility”
  • Lessons Learned: Always keep expectations aligned. Don’t confuse accessibility and reliability. Duplication of structure is complicated. Communication highlights communication.
  • Trust Elements: Can remote data be accessed? Can remote data be retrieved and re-used? Can remote data be verified? Can ownership be clearly defined?
  • SDSC and LC explored a new approach to working with web archives: 50,000 ARC files, 6 terabytes of data, a short processing time, and a parallel indexing and display system that looked “default” to the user.
  • Using default tools, our initial indexing rate was 1,000 files per day: more than 6 weeks of constant computing to index the entire collection. This was over our time budget.
  • We modified the Wayback source code to create a new access infrastructure and ran 18 parallel indexing instances, reducing processing to a week (the sharding pattern is sketched below).
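A sketch of the sharding strategy: fan the ARC files out across 18 worker processes, each invoking one indexing instance. The `index-arc` command and the archive path are hypothetical stand-ins for the modified Wayback indexing step.

```python
# Shard ARC files across parallel indexing workers.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

N_WORKERS = 18  # the number of parallel instances used in the project

def index_arc(arc_path: Path) -> str:
    """Index a single ARC file via a per-file indexing command."""
    subprocess.run(["index-arc", str(arc_path)], check=True)  # hypothetical CLI
    return arc_path.name

if __name__ == "__main__":
    arcs = sorted(Path("/webarchive/arcs").glob("*.arc.gz"))
    with ProcessPoolExecutor(max_workers=N_WORKERS) as pool:
        for done in pool.map(index_arc, arcs):
            print(f"indexed {done}")
```

At 1,000 files per day per instance, 18 instances bring 50,000 files from roughly 50 days down to about 3 days of indexing, comfortably within the week cited above.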
  • Lessons Learned: Sometimes you need to start over. Default setup isn’t always easiest. Time is a wonderful motivator. Experts are often interested in your work.
  • Trust Elements: Can a new organization bring new expertise? Are the final results the same? Can the results be reached in a better way? Can a new organization work with your partners?
  • Next steps …. Chronopolis!
  • Chronopolis: A Partnership
    • Chronopolis is being developed by a national consortium led by SDSC and the UCSD Libraries.
    • Initial Chronopolis provider sites include:
      • SDSC and UCSD Libraries at UC San Diego
      • University of Maryland
      • National Center for Atmospheric Research (NCAR) in Boulder, CO
  • Institutions and Roles - UCSD
    • SDSC
      • Storage and networking services
      • SRB support
      • Transmission Packaging Modules
    • UCSD Libraries
      • Metadata services (PREMIS)
      • DIPs (Dissemination Information Packages)
      • Other advanced data services as needed
  • Institutions and Roles - NCAR
    • National Center for Atmospheric Research
      • Archives: Complete copy of all data
      • Storage and network support
      • Network testing
  • Institutions and Roles - UMIACS
    • University of Maryland – Institute for Advanced Computer Studies
      • Archives: Complete copy of all data
      • Advanced data services
        • PAWN: Producer-Archive Workflow Network in Support of Digital Preservation
        • ACE: Auditing Control Environment to Ensure the Long-Term Integrity of Digital Archives
      • Other advanced data services as needed
  • SDSC Chronopolis Program
  • Chronopolis Vocabulary
    • Partners – UCSD Libraries, National Center for Atmospheric Research, and the University of Maryland Institute for Advanced Computer Studies – all provide grid-enabled storage nodes for Chronopolis services.
    • Clients – ICPSR, CDL – contribute content to the Chronopolis preservation network.
    • SRB – Storage Resource Broker – datagrid software.
    • iRODS – integrated Rule Oriented Data System – datagrid software.
    • ACE – Audit Control Environment – part of the ADAPT project at UMD.
    • PAWN – Producer Archive Workflow Network – part of the ADAPT project at UMD.
    • INCA – grid middleware that executes periodic, automated, user-level testing of grid software and services.
    • BagIt – transfer specification developed by CDL and the Library of Congress (a minimal bag layout is sketched after this list).
    • GridFTP – parallel transfer technology that moves large collections within a grid wide-area network.
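To make the BagIt entry concrete, here is a minimal sketch of creating a bag: a bagit.txt declaration, the payload under data/, and an MD5 manifest, following the BagIt specification. The source and destination paths are hypothetical.

```python
# A minimal sketch of assembling a BagIt bag.
import hashlib
import shutil
from pathlib import Path

def make_bag(src: Path, bag_dir: Path) -> None:
    data = bag_dir / "data"
    shutil.copytree(src, data)  # payload files live under data/
    (bag_dir / "bagit.txt").write_text(
        "BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n")
    lines = []
    for p in sorted(data.rglob("*")):
        if p.is_file():
            digest = hashlib.md5(p.read_bytes()).hexdigest()
            lines.append(f"{digest}  {p.relative_to(bag_dir).as_posix()}\n")
    (bag_dir / "manifest-md5.txt").write_text("".join(lines))

make_bag(Path("/staging/web-at-risk"), Path("/outgoing/web-at-risk-bag"))
```

The manifest lets the receiving side re-verify every payload file after transfer, which is what makes the format useful for moving collections between institutions.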
  • Chronopolis: Inside
    • Sites are linked by a main staging grid where data is verified for integrity and quarantined for security purposes.
    • Collections are independently pulled into each system.
    • Manifest layer provides added security for database management and data integrity validation.
    • Benefits
      • 3 independently managed copies of the collection
      • High availability
      • High reliability
    [Diagram: Chron clients (CDL, ICPSR) push to or are pulled into the SDSC staging grid; copies 1-3 are distributed to the SDSC core center archive (grid brick disks, HPSS tape), NCAR, and UMD; MCAT databases and a manifest-management layer support multiple hash verifications.] A cross-copy verification sketch follows.
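A minimal sketch of that cross-copy verification: digest every file at each of the three sites and flag any file whose copies disagree or are missing. The mount points are illustrative; in Chronopolis itself, SRB and the MCAT catalogs mediate access rather than local paths.

```python
# Flag files whose three replicas disagree or are missing.
import hashlib
from pathlib import Path

REPLICAS = {  # hypothetical mount points for the three copies
    "SDSC": Path("/replicas/sdsc/collection"),
    "NCAR": Path("/replicas/ncar/collection"),
    "UMD":  Path("/replicas/umd/collection"),
}

def digests(root: Path) -> dict[str, str]:
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()}

per_site = {site: digests(root) for site, root in REPLICAS.items()}
for rel in sorted(set().union(*per_site.values())):
    copies = {site: d.get(rel) for site, d in per_site.items()}
    if len(set(copies.values())) > 1:  # a missing or divergent copy
        print(f"DISAGREEMENT {rel}: {copies}")
```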
  • SDSC Leveraged Infrastructure
    • Serves Both HPC & Digital Preservation
    • Archive
      • 25 PB capacity
      • Both HPSS & SAM-QFS
    • Online disk
      • ~3PB total
      • HPC parallel file systems
      • Collections
      • Databases
    • Access Tools
    Adapted from Richard Moore (SDSC)
  • Chronopolis Demonstration Project
    • Demonstration Project 2006-2007
      • Demonstration Collections Ingested within Chronopolis
        • National Virtual Observatory (NVO)
          • 3 TB Hyperatlas Images (partial collection)
        • Library of Congress PG Image Collection
          • 600 GB Prokudin-Gorskii Image Collection
        • Interuniversity Consortium for Political and Social Research (ICPSR)
          • 2TB Web Accessible Data
        • NCAR Observational Data
          • 3TB Observational Re-Analysis Data
  • NDIIPP Chronopolis Project
    • Creating a 3-node federated data grid at SDSC, NCAR and UMD – up to 50 TB of data from CDL and ICPSR
    • Installing and testing a suite of monitoring tools using ACE, PAWN, INCA
    • Creating Appropriate Transmission Information Packages
    • Generating PREMIS definitions for data
    • Writing Best Practices documents for clients and partners
  • Chronopolis Grid Framework [Diagram: each site runs SRB D-Brokers and an SRB MCAT – SDSC (Sun 6140, 62 TB; Sun SAM-QFS; tape silos), UMD (Apple Xsan), and NCAR; CDL servers on the UC Berkeley network and an ICPSR server feed 12-25 TB and 12 TB of Chronopolis data across the SDSC, NCAR, Maryland, and ICPSR networks. Adapted from Bryan Banister (SDSC).]
  • NDIIPP Chronopolis Clients-CDL
    • California Digital Library
      • A part of UCOP, supports the University of California libraries
      • Providing up to 25TB of data: Web-At-Risk project
        • Five years of political and governmental websites
        • ARC files created from web crawls
        • Using Bagit Transfer Structure
  • Diagram of CDL Data Transfer [Diagram: a CDL virtual machine at UCB serves files; parallel wget pulls them in batches (files 1-10, 11-20, …) together with a BagIt manifest (possible SRB/BagIt module) into Chron staging on the SDSC network, then on to the UMIACS and NCAR Chron repositories. Adapted from Bryan Banister (SDSC).] A sketch of the batched parallel pull follows.
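A sketch of the batched parallel pull shown in the diagram: split the file list into chunks ("files 1-10, 11-20, …") and fetch the chunks concurrently with wget. The URLs, chunk size, and worker count are illustrative assumptions.

```python
# Batched parallel pull of a file list, one wget invocation per chunk.
import subprocess
from concurrent.futures import ThreadPoolExecutor

CHUNK = 10

def fetch_chunk(urls: list[str]) -> None:
    # -q runs quietly; -P sets the download directory.
    subprocess.run(["wget", "-q", "-P", "/chron/staging", *urls], check=True)

urls = [f"https://cdl.example.org/web-at-risk/file{i:05d}.arc.gz"
        for i in range(1, 101)]  # hypothetical manifest of file URLs
chunks = [urls[i:i + CHUNK] for i in range(0, len(urls), CHUNK)]
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(fetch_chunk, chunks))
```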
  • NDIIPP Chronopolis Clients-ICPSR
    • Inter-University Consortium for Political and Social Research, University of Michigan
      • Providing ~12 TB of data: wide variety of types
      • Already working with SDSC using SRB
  • Diagram of ICPSR Transfer [Diagram: the ICPSR SRB repository at UMich (EMC SAN) pushes files via parallel Sput/Srsync transfers into the Chron SRB MCAT and Chron staging at SDSC, then on to the UMIACS and NCAR Chron repositories. Adapted from Bryan Banister (SDSC).]
  • Ongoing and Future Initiatives
    • Migration of Chronopolis from SRB to iRODS
    • Develop Interoperability with Community Based Archival Systems/Standards
    • TRAC compliance for SDSC Production Preservation Services/Chronopolis Consortium
  • Looking for Partnerships
    • Repositories interested in moving large digital collections among heterogeneous repository systems.
    • Fedora, DSpace or E-Prints sites interested in managed datagrid storage.
    • Institutions interested in personnel swaps to conduct TRAC compliance assessments.
    • Community Needs for Mass-Scale Data Transmission and Storage.
  • Chronopolis Credits
    • SDSC
      • Fran Berman
      • Richard Moore
      • David Minor
      • Chris Jordan
      • Jim D’Aoust
      • Robert McDonald
      • Don Sutton
      • Bryan Banister
      • Phong Dinh
      • Jay Dombrowski
      • Emilio Valente
    • UCSD Libraries
      • Brian Schottlaender
      • Luc Declerck
      • Ardys Kozbial
      • Brad Westbrook
      • Arwen Hutt
    • NCAR
      • Don Middleton
      • Michael Burek
      • Linda McGinley
    • UMIACS
      • Joseph JaJa
      • Mike Smorul
      • Mike McGann
    • Library of Congress
      • Martha Anderson
      • Lisa Hoppis
    • CACI
      • Mike Ivey
  • http://chronopolis.sdsc.edu
    Chronopolis is ...
    • a geographically distributed preservation environment that supports long-term management and stewardship of digital collections
    • implemented by developing and deploying a distributed data grid, and by supporting its human, policy, and technological infrastructure.
    • technology forecasting and migration in support of long-term life-cycle management of the dedicated preservation environment.
    Chronopolis focuses on ...
    • Assessment of the needs of potential user communities and development of appropriate service models
    • Development of Memoranda of Understanding (MOUs), Service Level Agreements (SLAs), etc. to formalize trust relationships and manage expectations
    • Assessment and prototyping of best practices for bit preservation, authentication, metadata, etc.
    • Development of cost and risk models for long-term preservation
    • Development of appropriate success metrics to evaluate usefulness, reliability, and usability of infrastructure
  • The people of Chronopolis are ...
  • In conclusion … organizations need ways to validate trust in 3rd parties.
  • SDSC and the Library of Congress explored one way to do this … by working with Cyberinfrastructure … and demonstrating trust.
  • With a trusted relationship, many journeys become possible