CineGrid Exchange @ CENIC 2010

  • These are the goals of the first-phase CX design and development. The primary goal is to integrate the distributed storage provided by CineGrid members and provide a transparent way to access it; to define the right way to manage and distribute digital content; and to design the metadata management scheme.
  • In 2008, the CineGrid Exchange started from three major sites, in San Diego, Tokyo, and Amsterdam. These three sites are connected by 10 Gb/s networks.
  • In 2009, we expanded CX to include seven major sites worldwide: San Diego, Los Angeles, Chicago, Tokyo, Toronto, Prague, and Amsterdam. The connections between them are still 10 Gb/s networks.
  • This table lists the current storage provided by those sites. About 250 TB is currently managed by the CineGrid Exchange.
  • We designed CX as a three-level hierarchical structure. The lowest level is the physical distributed content repositories, connected by high-speed optical networks. The middleware manages these resources and provides support for applications. iRODS is the major middleware in CX: it is data-grid software that manages distributed storage, and it is programmable through its rule system. CollectiveAccess is metadata management software that we are working on; our partners at AMPAS and NPS are working on binding iRODS and CollectiveAccess together.
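As a rough illustration of that rule system (not the actual CX rule base), a classic `core.irb`-style iRODS rule of this shape could trigger replication on every ingest; the resource name `cxReplResc` is hypothetical:

```
# Fires after each put: replicate the new object onto a second resource.
# Classic rule format: ruleName|condition|action|recovery
acPostProcForPut||msiSysReplDataObj(cxReplResc,null)|nop
```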
  • Here are some features of iRODS: it is programmable and transparent, and it uses a centralized catalog to maintain file locations, checksums, and some metadata. We integrated UDP-based transport into iRODS, so it now supports very fast file transfer within CX. It forms the base of the testbed we want to build in CX. I have a short demo of how this works.
  • We have defined a few workflows in CX to start with. Here is an example; I will give a small demo of the ingest workflow. I will put a file into the drop box, and that file will be automatically replicated to two other storage systems and then moved to a central point. This is only our first-phase implementation, so if you are not surprised by the demo, I am not surprised either.
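The ingest workflow just described can be sketched in a few lines. The function name and directory layout here are illustrative only; in CX itself, replication is driven through iRODS rules rather than ad-hoc copies:

```python
import pathlib
import shutil

def ingest(dropped: pathlib.Path, replicas: list[pathlib.Path]) -> None:
    """Replicate a file that appeared in the drop box into each
    repository, mimicking the first-phase CX ingest workflow."""
    for repo in replicas:
        repo.mkdir(parents=True, exist_ok=True)
        shutil.copy2(dropped, repo / dropped.name)  # copy2 preserves timestamps
```

A production version would additionally watch the drop box for new files and verify each replica's checksum before acknowledging the ingest.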
  • A distributed testbed like CX requires a good network and good protocols. We integrated both TCP and UDP into CX, and performance is satisfactory. This figure shows the file transfer speed with TCP and UDP: the x-axis is the number of flows, and the y-axis is the overall transfer speed. With 3 UDP streams we can transfer files at 350 MB/s from Japan to the US, and with TCP we can achieve 250 MB/s with about 15 streams.
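The multi-stream idea behind those TCP numbers can be sketched with a local copy standing in for the network: split the file into byte ranges and move each range on its own worker, the way parallel TCP flows stripe a transfer. The function name and default stream count are illustrative:

```python
import concurrent.futures
import os

def parallel_copy(src: str, dst: str, streams: int = 4) -> None:
    """Copy src to dst with several workers, one byte range per
    worker, analogous to striping a transfer over N TCP flows."""
    size = os.path.getsize(src)
    part = (size + streams - 1) // streams  # ceil-divide the file

    with open(dst, "wb") as f:
        f.truncate(size)  # preallocate so every worker can seek anywhere

    def copy_range(i: int) -> None:
        offset = i * part
        length = min(part, size - offset)
        if length <= 0:  # more streams than data
            return
        with open(src, "rb") as fin, open(dst, "r+b") as fout:
            fin.seek(offset)
            fout.seek(offset)
            fout.write(fin.read(length))

    with concurrent.futures.ThreadPoolExecutor(streams) as pool:
        list(pool.map(copy_range, range(streams)))  # list() surfaces worker errors
```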
  • The performance of ingest/distribution is subject to other factors, such as checksumming: we found that when the checksum is computed alongside the transfer, the overall speed slows down significantly. We are investigating ways to solve this, for example delayed checksums, pipelined checksums, or FEC coding. This will be solved.
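The pipelined-checksum option can be sketched as hashing each chunk while it is still in memory from the transfer, so the file is read from disk only once. A local copy stands in for the network send, and the function name is illustrative:

```python
import hashlib

def copy_with_pipelined_checksum(src: str, dst: str,
                                 chunk_size: int = 4 * 1024 * 1024) -> str:
    """Copy src to dst, updating an MD5 digest on each chunk as it
    passes through, instead of re-reading the file afterwards."""
    digest = hashlib.md5()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(chunk_size):
            fout.write(chunk)     # the "send" step
            digest.update(chunk)  # hash while the chunk is still in memory
    return digest.hexdigest()
```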
  • ×