Large Scale On-Demand Image Processing for Disaster Relief
Robert Grossman, Open Cloud Consortium
February 22, 2010
www.opencloudconsortium.org
A 501(c)(3) not-for-profit corporation. Supports the development of standards, interoperability frameworks, and reference implementations. Manages testbeds: the Open Cloud Testbed and the Intercloud Testbed. Manages cloud computing infrastructure to support scientific research: the Open Science Data Cloud. Develops benchmarks. www.opencloudconsortium.org
Focus of OCC Large Data Cloud Working Group
Developing APIs for a layered framework: Cloud Storage Services; Cloud Compute Services (MapReduce, UDFs, and other programming frameworks); Table-based Data Services; and Relational-like Data Services, with applications running on top.
Architecture (diagram): Applications (Apps) sit over Data Services and Compute Services (PaaS), which sit over Storage Services, the Virtual Machine Manager, the Virtual Network Manager, and Network Transport (IaaS), with IF-MAP (Metadata) Services and an Identity Manager spanning the layers.
Bridging the Gaps… A Small Step
Infrastructure as a Service: Virtual Data Centers (VDC), Virtual Networks (VN), Virtual Machines (VM), Physical Resources.
Platform as a Service: Cloud Compute Services.
Data as a Service.
Relevant standards: Open Virtualization Format (OVF), Open Cloud Computing Interface (OCCI), SNIA Cloud Data Management Interface (CDMI), Large Data Cloud Interoperability Framework.
A metadata service links IaaS and DaaS; a metadata service names and links entities in the IaaS layers.
Open Science Data Cloud: astronomical data, biological data (Bionimbus), networking data, and image processing for disaster relief.
Image Processing on Large Data Clouds
Data parallel applications: parallelism is often required at the file or directory level; from a MapReduce perspective, often only Map operations are required.
Data intensive applications: the input data size can be tens or hundreds of TB; this requires parallel disk I/O, and data locality is important.
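The file-level, map-only pattern above can be sketched in a few lines; the per-file `process` function here is a hypothetical stand-in for a real image-processing step, not part of any framework named in this deck:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def process(path):
    # Hypothetical per-file map step: a real pipeline would run an
    # image-processing operation here; this stand-in reports file size.
    p = Path(path)
    return p.name, p.stat().st_size

def map_only(paths, workers=4):
    # Each file is handled independently: no shuffle or reduce phase,
    # matching the map-only workloads described above. Threads suffice
    # for an IO-bound step, since file IO releases the GIL.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(process, paths))
```

Because each file is independent, scaling out is just a matter of partitioning the file list across workers or nodes.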
Distributed File Systems
Sector is broadly similar to the Hadoop Distributed File System. The main differences: Hadoop directly implements a distributed block-based file system, while Sector is a layer over a native file system, and Sector does not split files. Because a single image is never split, the application processing it does not need to read data from other nodes over the network. As an option, an entire directory can also be kept together on a single node.
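The locality argument can be illustrated with a toy scheduler: given a hypothetical placement map of whole files to nodes, each task is dispatched to the node that already holds its file, so no image crosses the network (this is an illustration of the idea, not Sector's actual scheduler):

```python
def schedule(placement, files):
    # placement: {filename: node}. Files are whole, never split into
    # blocks, so each file's data lives entirely on one node (as in
    # Sector). Returns {node: [files that node processes locally]}.
    plan = {}
    for f in files:
        node = placement[f]          # the node already holding f
        plan.setdefault(node, []).append(f)
    return plan
```

With block-split files, by contrast, a single image could span nodes and some portion would always have to be fetched remotely.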
Sphere UDF
Sphere allows a User Defined Function (UDF) to be applied to each file, whether the file holds a single image or multiple images. Existing applications, such as OSSIM, can be wrapped in a Sphere UDF or invoked via Sector streams. In many situations, the Sphere streaming utility accepts a data directory and an application binary as inputs.
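One way to wrap an existing command-line application per file, in the spirit of invoking OSSIM from a UDF (the wrapped program and the `.out` naming here are illustrative assumptions, not Sphere's actual API):

```python
import subprocess
from pathlib import Path

def wrap_binary(binary_argv, in_path, out_dir):
    # Run an external program on one input file, writing its stdout to a
    # correspondingly named file in out_dir: one output file per input
    # image, as in the Sector/OSSIM example later in the deck.
    # binary_argv is an argv prefix, e.g. ["ossim_foo"] (hypothetical).
    out_path = Path(out_dir) / (Path(in_path).name + ".out")
    with open(out_path, "wb") as out:
        subprocess.run(list(binary_argv) + [str(in_path)],
                       stdout=out, check=True)
    return out_path
```

Wrapping at the process boundary means the existing application needs no modification; the framework only supplies input paths and collects outputs.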
Sector and OSSIM
./sector_stream -i haiti -c ossim_foo -o results
-i specifies the input data directory; in this example, all images are located in the directory "haiti".
-c specifies the command (or application) to run.
-o specifies the output location; this is a directory in which the output for each input image is stored in a corresponding file.
Next Steps
The working group will set up a persistent on-demand cloud for image processing to assist disaster relief, using OSSIM and related open source software. It will be used as a test case for the Large Data Cloud and Intercloud Working Groups. One rack of dedicated hardware will be available, with the required high performance networking in place. Initial operating capability by May 15, 2010.
For More Information [email_address] www.opencloudconsortium.org
