IDF 2011: ODCA & Developing a Usage Model Roadmap for Cloud Computing
 

Intel Developer Forum 2011 lecture session with:
Anna Claiborne
WG Chair, ODCA & Product Manager, Security Services, Terremark
Ravi Subramaniam
Lead Technical Facilitator, ODCA & Principal Engineer, Intel
Session overview:
  • Open Data Center Alliance (ODCA) Overview
  • Why Should You Care? (How can you participate?)
  • 1st Release Introduction
  • Usage Topics Discussion
  • Ecosystem Opportunities and Engagement

    Presentation Transcript

    • The Open Data Center Alliance and Developing a Usage Model Roadmap for Cloud Computing (Session DCCS004)
      Anna Claiborne, WG Chair, ODCA & Product Manager, Security Services, Terremark
      Ravi Subramaniam, Lead Technical Facilitator, ODCA & Principal Engineer, Intel
    • Agenda
      – Open Data Center Alliance (ODCA) Overview
      – Why Should You Care? (How can you participate?)
      – 1st Release Introduction
      – Usage Topics Discussion
      – Ecosystem Opportunities and Engagement
    • ODCA Overview
    • The Open Data Center Alliance℠
      Enable delivery of cloud and data center solutions that meet the challenges facing data centers today and tomorrow, support solution development in an open, industry-standard and multi-vendor fashion, and aid in deploying solutions by defining member requirements through usage models.
      – Create: a unified voice for data center requirements
      – Deliver: requirements to and with industry
      – Commit: guide internal IT deployments
    • >300 Global IT Leaders
      Membership tiers: Steering Committee, Contributing Members, Solution Providers, Adopter Members
      – Members named on the slide include: Huawei, JouleX, Philips Technology Services
      – Adopter Members include: AIMS Data Centre SDN BHD, Biznet Networks, Connectria Hosting, Getronics NL BV, JARING Communications Sdn Bhd, RampRate, Scope Infotech, Inc., Temperature Control
      Intel serves as Technical Advisor to the Alliance
    • Leadership and Work Area Structure
      – Steering Committee / Board of Directors; Intel acts as Technical Advisor
      – Technical Coordination Committee, with liaisons to standards organizations and external technical forums
      – Workgroups: Infrastructure, Management, Security, Regulation, Services and Ecosystem, plus a China Technical Sub-Group
      – Contributor Members: active workgroup participation; Solution Provider Members: workgroups invite select members for consultation; Adopter Members: roadmap review and input
    • Why Should You Care?
    • Why Should You Care?
      – Cloud is here to stay: 59% of IT decision makers surveyed¹ in 2010 indicate cloud is the future model of IT, and 49% already have cloud as part of their IT strategy
      – Cloud and cloud usage patterns are primarily driven by end users
      – An open ecosystem is needed at many levels of the cloud for the paradigm to succeed in meeting expectations
      – Most IT and other end users are concerned with vendor lock-in¹ and are looking for interoperable and interchangeable services and service components
      – Market leadership requires the right products and solutions, built from a deep understanding of customer requirements and expectations
      – ODCA members collectively bring 100+ billion in purchasing power
      ¹ KPMG Cloud Computing Survey 2010, “From Hype to Future”
    • ODCA is Looking to the Ecosystem for Compliant and Open Solutions
      – Alliance focus: workgroups produce usages and requirements; externally published usages and roadmap go to the ecosystem (informational); the ecosystem returns non-binding feedback and suggestions
      – Ecosystem focus: integration programs across ISVs and OEMs, producing reference architectures, platforms, and solution stacks
      – Other industry efforts (e.g. open source, system integrators) also contribute solutions
    • 1st Release Introduction
    • Initial Release Document Map (released June 7th, 2011)
      – Meta docs / framework: ODCA Vision Document, Alliance Technical Usage Framework, Alliance Usage Model Releases, Initial Release Conceptual Overview and Document Map
      – Usage models by workgroup domain:
        Infrastructure: Standard Units of Measure for IaaS, Virtual Machine Interoperability, I/O Control
        Management & Services: Carbon Footprint, Service Catalog
        Security: Provider Security Assurance, Security Compliance Monitoring
        Regulation & Ecosystem: Regulatory Framework
      – Further usages planned for future releases
    • Open Data Center Usage Model Overview℠
      – Secure & Federated: Provider Assurance (industry-standard provider security tiers, bronze to platinum); Compliance Monitoring (transparent oversight of provider security)
      – Agility: Virtual Machine Interoperability (standard, interoperable VM deployment and management); I/O Control (extend QoS guarantees from system to network)
      – Common Management and Policy: Regulatory Framework (guide industry in requirements and compliance best practices)
      – Transparency: Service Catalog (compare service features and price across providers); Standard Unit of Measure (standardized cloud performance comparison); Carbon Footprint (cloud services become “CO2 aware”)
    • Usage Topics Discussion
    • Usage Model: PROVIDER SECURITY ASSURANCE
      Use Case Challenges:
      – The security stance of a cloud provider is a big concern and an impediment to enterprise cloud adoption
      – Need consistent and simple ways to define the level of security of a cloud; need standard requirements and semantics
      Usage Summary: A usage model providing standard definitions of security levels for cloud services (Bronze, Silver, Gold, and Platinum). This will allow users to ensure providers meet certain security standards, compare security between providers, and make more informed choices.
      Expectations:
      – Consistent definitions of security to increase transparency of offerings
      – Allow programmatic and user-driven methods to determine the security stance
      – Allow independent validation of SP security claims
    • Usage Model: SECURITY COMPLIANCE MONITORING
      Use Case Challenges:
      – Need reliable mechanisms to assess the security stance of a cloud
      – Need a simple and standard way to qualify security both (a) initially and (b) at any other instant determined by the subscriber
      Usage Summary: A usage model designed to provide cloud users with a standard monitoring framework, format, and syntax that lets them query the status of security and compliance on a continuous basis.
      Expectations:
      – Provide standardized definitions of security for cloud-based services (per the Provider Security Assurance usage model)
      – Give cloud providers the ability to demonstrate compliance to an agreed standard through certification processes maintained by a cloud compliance agency
      – Give cloud subscribers the ability to validate adherence to cloud security standards (direct assessment or third-party accreditation)
      – A standard API or mechanism to monitor security levels, as sketched below
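    The deck calls for "a standard API or mechanism" without specifying one. As a minimal illustrative sketch only, a subscriber-side poll of a hypothetical provider compliance endpoint might look like the following; the base URL, endpoint path, and JSON fields are invented, and the Bronze–Platinum ordering comes from the Provider Security Assurance usage model above.

```python
# Hypothetical sketch: a subscriber polls a provider's security-compliance
# endpoint. The endpoint path and JSON fields are invented for illustration;
# the ODCA usage model only calls for *a* standard API, not this one.
import json
import urllib.request

# Ordering per the Provider Security Assurance usage model.
ASSURANCE_LEVELS = ("bronze", "silver", "gold", "platinum")

def check_compliance(provider_base_url: str, required_level: str) -> bool:
    """Return True if the provider currently attests at least `required_level`."""
    with urllib.request.urlopen(f"{provider_base_url}/compliance/status") as resp:
        status = json.load(resp)
    attested = status.get("assurance_level", "").lower()
    if attested not in ASSURANCE_LEVELS:
        raise ValueError(f"unknown assurance level: {attested!r}")
    return ASSURANCE_LEVELS.index(attested) >= ASSURANCE_LEVELS.index(required_level)

# Example: verify a provider still meets a "gold" requirement at any instant:
#   check_compliance("https://provider.example.com/api", "gold")
```

    Continuous monitoring in the sense of the usage model would run such a check on a schedule and alert on any downgrade of the attested level.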
    • Usage Model: CARBON FOOTPRINT
      Use Case Challenges:
      – Organizations are under pressure to report and reduce their environmental impact
      – Reduce wastage and reduce operational costs
      – It is difficult to evaluate and predict the carbon footprint with current methodologies; additional capabilities are required, especially in the cloud
      Usage Summary: A usage model designed to ensure organizations can predict CO2 emissions and track actual emissions through technical capabilities instituted by providers of cloud services. Discusses requirements and use of metrics like CUE and PUE (sketched below).
      Expectations:
      – Establish an open, standard approach for measuring carbon footprint for cloud services (focus on the execution footprint; wider aspects in future documents)
      – Allow the organization subscribing to the cloud services to: consider shifting the workload to other suppliers with a lower footprint; analyze carbon production over time to aid in driving green IT policies; provide audits and reports to corporate and regulatory bodies on its green and carbon profile
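    The slide names CUE and PUE without defining them. Per The Green Grid's definitions, PUE is total facility energy divided by IT equipment energy, and CUE is the CO2 emitted by the data center's energy use divided by IT equipment energy (kgCO2 per kWh). A minimal sketch, with invented sample numbers:

```python
# Minimal sketch of the two metrics the slide names. Definitions follow
# The Green Grid: PUE = total facility energy / IT equipment energy;
# CUE = total CO2 from the data center's energy use / IT equipment energy.
# The sample numbers below are invented.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

def cue(total_co2_kg: float, it_equipment_kwh: float) -> float:
    return total_co2_kg / it_equipment_kwh

# Example: 10,000 kWh facility draw, 6,250 kWh reaching IT gear,
# 4,800 kg CO2 emitted -> PUE = 1.6, CUE = 0.768 kgCO2/kWh.
print(pue(10_000, 6_250), cue(4_800, 6_250))
```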
    • Usage Model: REGULATORY FRAMEWORK
      Use Case Challenges:
      – Technology is not the only enabler of, or impediment to, cloud adoption; regulations and policies, and the burdens of meeting them, are major aspects
      – Penalties for non-compliance are very heavy
      – Strong education is needed to drive the right compromises among regulators, technologists, and providers
      Usage Summary: A usage model aimed at helping organizations assess and monitor their regulatory obligations when engaging and acquiring cloud services.
      Expectations:
      – Ensure subscriber obligations are met, define requirements for providers to meet regulatory obligations, and audit compliance with regulatory obligations
      – Do a reasonable job of cataloging global regulatory organizations (not an endeavor to be absolutely comprehensive)
      – Build a consistent framework and agendas for influence, and identify implications for regulatory bodies (across geographies), regulations, applicable laws, and standards
    • Usage Model: I/O CONTROL
      Use Case Challenges:
      – Need better management and allocation of capacity
      – Increasing VM density per host increases the potential for I/O conflicts
      – Need to eliminate contention to meet SLA and QoS expectations
      Usage Summary: A usage model aimed at ensuring organizations can create and launch virtual machines (VMs) with workloads that meet their storage and network I/O performance requirements, and effectively manage I/O performance and inter-VM contention.
      Expectations:
      – Need to manage allocation of instantaneous bandwidth and total bandwidth (quota); see the sketch after this list
      – Monitor network use and allow throttling and limiting where required
      – Mechanisms to map workload requirements to capabilities, initially and at runtime, and controls to manage and deliver the right QoS
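    The usage model asks for per-VM control of instantaneous bandwidth and total quota but prescribes no mechanism. The sketch below pairs a token bucket (for the instantaneous cap) with a running quota counter; the class names, units, and the token-bucket choice itself are assumptions, not ODCA requirements.

```python
# Illustrative sketch only: per-VM I/O admission combining a token bucket
# (instantaneous bandwidth cap with bursts) and a total-transfer quota.
import time
from dataclasses import dataclass, field

@dataclass
class VmIoPolicy:
    rate_mbps: float          # sustained bandwidth cap, megabits/s
    burst_mb: float           # token-bucket depth (MB) for short bursts
    monthly_quota_gb: float   # total-transfer quota

@dataclass
class VmIoState:
    policy: VmIoPolicy
    tokens_mb: float = 0.0
    used_gb: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def allow(self, request_mb: float) -> bool:
        """Admit an I/O request if both burst tokens and quota permit it."""
        now = time.monotonic()
        # Refill tokens at the sustained rate (Mb/s -> MB/s), capped at burst depth.
        self.tokens_mb = min(self.policy.burst_mb,
                             self.tokens_mb + (now - self.last) * self.policy.rate_mbps / 8)
        self.last = now
        if request_mb > self.tokens_mb or self.used_gb >= self.policy.monthly_quota_gb:
            return False  # throttle: defer or reject per SLA policy
        self.tokens_mb -= request_mb
        self.used_gb += request_mb / 1024
        return True
```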
    • Usage Model: INTEROPERABILITY OF HYPERVISORS
      Use Case Challenges:
      – Realizing full cloud benefits needs: (a) seamless use and management of any cloud hypervisor, with the ability to choose an SP on ROI; (b) consistent management of linked private and public clouds
      – For IaaS, need consistent VM and VMM interoperability: management interfaces, format, and configuration
      Usage Summary: A usage model specifying actions and processes to spur development of interoperable VM management solutions aimed at lowering management complexity and costs, especially in heterogeneous, multi-vendor environments.
      Expectations:
      – Given hypervisor/VM heterogeneity, minimize constraints on customer choice of SPs and ease management across multiple SPs (including public <-> private)
      – Consistent command sets and semantics between hypervisor implementations; requires consistent management interfaces, policy enforcement, and IT practices
      – OVF (DMTF) is great for packaging VMs for migration, but additional standards are needed for “true” interoperability (a minimal OVF-reading sketch follows)
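    Since the slide points to OVF (DMTF) as the packaging baseline, here is a minimal sketch that lists the virtual systems declared in an OVF 1.x descriptor. Only the envelope namespace and the VirtualSystem element with its ovf:id attribute come from the OVF specification; the function name and file path are illustrative.

```python
# Minimal sketch: list the virtual systems declared in an OVF 1.x
# descriptor (the DMTF packaging format the slide mentions).
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"  # OVF 1.x envelope namespace

def virtual_system_ids(descriptor_path: str) -> list[str]:
    """Return the ovf:id of every VirtualSystem in the descriptor."""
    root = ET.parse(descriptor_path).getroot()
    return [vs.get(f"{{{OVF_NS}}}id", "")
            for vs in root.iter(f"{{{OVF_NS}}}VirtualSystem")]

# Example (hypothetical file): virtual_system_ids("appliance.ovf")
```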
    • Usage Model: SERVICE CATALOG
      Use Case Challenges:
      – Users need a standard and comprehensive mechanism to select and assess offered services
      – Service catalogs aid users in identifying services, their capabilities, configurations, and constraints in a normalized manner across many different providers, allowing for comparisons
      Usage Summary: The usage model describes a standard programmatic interface to securely interrogate catalogs, a data model for representing service characteristics and requirements, and mechanisms to negotiate, reserve, and provision services (a hypothetical data-model sketch follows).
      Expectations:
      – Services offered will be defined in a standard (programmatic) way
      – Enable a global services marketplace: open discovery and free-market principles for selling and buying cloud services
      – Ensure that a base set of service information is available ubiquitously (allowing for consistent differentiation, customization, and/or extension beyond this set)
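    The usage model specifies requirements for a catalog data model and programmatic interface rather than a concrete schema. As a hypothetical sketch, a normalized entry plus one comparison query ("cheapest offering meeting minimum requirements") might look like this; every field name and the query itself are invented for illustration.

```python
# Hypothetical sketch of a normalized catalog entry and a comparison query.
# The ODCA usage model defines requirements, not this schema; all field
# names and the level ordering reuse the Bronze-Platinum tiers above.
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class CatalogEntry:
    provider: str
    service: str
    assurance_level: str      # bronze / silver / gold / platinum
    vcpus: int
    memory_gb: int
    price_per_hour_usd: float

def cheapest(catalog: List[CatalogEntry], min_level: str,
             min_vcpus: int, min_memory_gb: int) -> Optional[CatalogEntry]:
    """Cheapest entry meeting the security tier and capacity floor, if any."""
    levels = ["bronze", "silver", "gold", "platinum"]
    fits = [e for e in catalog
            if levels.index(e.assurance_level) >= levels.index(min_level)
            and e.vcpus >= min_vcpus and e.memory_gb >= min_memory_gb]
    return min(fits, key=lambda e: e.price_per_hour_usd, default=None)
```

    A normalized base schema like this is what makes the slide's cross-provider price and feature comparison mechanical rather than manual.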
    • Usage Model: STANDARD UNITS OF MEASURE (IaaS)
      Use Case Challenges:
      – Enterprises need to quantitatively compare service offerings and measure them against internal requirements and offerings
      – Need relevant, consistent, and accurate measures of service characteristics and QoS that are meaningful to end users
      – Current metrics and measures are too granular and low-level
      Usage Summary: The usage model defines requirements for quantitative macro measures for compute, network, and storage along linear, throughput, consumption-based, time, and block-scale dimensions. It also defines requirements for qualitative measures, and identifies four standard levels (Bronze, Silver, Gold, and Platinum) with requirements for each (a representation sketch follows).
      Expectations:
      – SUoM for quantitative and qualitative measures to describe the capacity, performance, and quality of service components
      – Define metrics for: before use (within a service catalog prior to service delivery; for defining the SLA), during use (as a definition of the expected service capabilities, and for monitoring while services are in use to manage the SLA), and after use (as a usage measure for billing after consumption)
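    As a sketch of how the four levels might be represented over macro measures for compute, network, and storage: the structure mirrors the slide, but the numeric values are placeholders, not the ODCA-defined ones (the deck does not reproduce them).

```python
# Sketch only: Bronze-Platinum levels expressed as macro measures for
# compute, network, and storage. Numeric values are invented placeholders;
# the ODCA document defines the real requirements per level.
SUOM_LEVELS = {
    #  level:   (compute_units, network_gbps, storage_iops)
    "bronze":   (1,  1,   500),
    "silver":   (2,  2,  1000),
    "gold":     (4,  5,  5000),
    "platinum": (8, 10, 20000),
}

def meets(offered_level: str, required_level: str) -> bool:
    """Compare two offerings by standard-level ordering alone."""
    order = list(SUOM_LEVELS)
    return order.index(offered_level) >= order.index(required_level)

# Example: meets("gold", "silver") -> True
```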
    • Ecosystem Opportunities and Engagement
    • New Industry Collaborations
      – ECLC: advance the description of cloud services features
      – OASIS: drive standards for service transparency
      – DMTF: define IT infrastructure management requirements
      – CSA: define cloud security and audit requirements
    • Solution Provider Members
    • Call to Action
    • The Time is Now … Call to Action
      Enterprise IT & Service Providers:
      – Review: read all Alliance publications (provide feedback)
      – Commit: use usage models within your organizations
      – Accelerate: join the Alliance to help shape the future of cloud IT
      Standards Bodies & Solutions Vendors:
      – Review: read all Alliance publications for relevant requirements
      – Commit: integrate requirements into your roadmap
      – Accelerate: join as a Solutions Provider to engage with over 280 global cloud customers
      Visit www.opendatacenteralliance.org for more details
    • This Week’s News
      – Solutions providers respond to Alliance usage models (today’s panel theme)
      – Collaboration with the Facebook-led Open Compute Project: focus on accelerating efficient data center infrastructure and open, scalable systems management
      – Alliance kicks off “Conquering the Cloud Challenge,” a best-practice competition with a $10,000 top prize
    • Additional Sources of Information on This Topic
      – Stay right here for the Open Data Center Alliance Solutions Provider Panel, 11:20 AM in this room. Host: Marvin Wheeler, ODCA Chairman; Panelists: Citrix, Dell, EMC, Red Hat, VMware
      – Visit the tech showcase to see solution provider usage model POCs: demos of Carbon Footprint, I/O Control, Security Compliance, Service Catalog, and VM Interoperability
    • Other Technical Sessions
      – DCCS001 (Intel): Build Your Own SMB Hybrid Cloud Using Pay-As-You-Go Intel AppUp℠ Small Business Service, 13:05, Room 2001
      – DCCS002 (Intel): Cloud Trends – Harnessing Innovation in IT, 14:10, Room 2002
      – DCCS003 (Intel, Facebook): Improving Data Center Efficiency with Intel® Products, Technologies and Solutions, 16:25, Room 2002
      – DCCS004 (Intel, Terremark): The Open Data Center Alliance and Developing a Usage Model Roadmap for Cloud Computing, Wednesday 10:15, Room 2002
      – DCCP001 (Intel): Panel: Open Data Center Alliance Solution Provider, 11:20, Room 2002
      – DCCS005 (Intel, HyTrust Inc): Intel® Cloud Builders Reference Architecture: Enabling Policy-based Trusted Clouds, 13:05, Room 2002
      – DCCQ001 (Intel): Hot Topic Q&A: Cloud Computing: Evolution of the Data Center, 16:25, Room 2002
    • Please Fill Out the Online Session Evaluation Form
      Be entered to win fabulous prizes every day! Winners will be announced at 6 pm (Days 1 and 2) and 3:30 pm (Day 3). You will receive an email prior to the end of this session.
    • Q&A
    • Legal Disclaimer
      – INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
      – UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.
      – Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
      – The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
      – Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
      – Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm.
      – Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
      – Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to: http://www.intel.com/products/processor_number.
      – Intel product plans in this presentation do not constitute Intel plan of record product roadmaps. Please contact your Intel representative to obtain Intel's current plan of record product roadmaps.
      – Intel, Sponsors of Tomorrow and the Intel logo are trademarks of Intel Corporation in the United States and other countries.
      – *Other names and brands may be claimed as the property of others.
      – Copyright © 2011 Intel Corporation.
    • Risk Factors
      The above statements and any others in this document that refer to plans and expectations for the second quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,” “intends,” “plans,” “believes,” “seeks,” “estimates,” “may,” “will,” “should,” and their variations identify forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect Intel’s actual results, and variances from Intel’s current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the following to be the important factors that could cause actual results to differ materially from the company’s expectations. Demand could be different from Intel’s expectations due to factors including changes in business and economic conditions, including supply constraints and other disruptions affecting customers; customer acceptance of Intel’s and competitors’ products; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Potential disruptions in the high technology supply chain resulting from the recent disaster in Japan could cause customer demand to be different from Intel’s expectations. Intel operates in intensely competitive industries that are characterized by a high percentage of costs that are fixed or difficult to reduce in the short term and product demand that is highly variable and difficult to forecast. Revenue and the gross margin percentage are affected by the timing of Intel product introductions and the demand for and market acceptance of Intel’s products; actions taken by Intel’s competitors, including product offerings and introductions, marketing programs and pricing pressures and Intel’s response to such actions; and Intel’s ability to respond quickly to technological developments and to incorporate new features into its products. The gross margin percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in revenue levels; product mix and pricing; the timing and execution of the manufacturing ramp and associated costs; start-up costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; product manufacturing quality/yields; and impairments of long-lived assets, including manufacturing, assembly/test and intangible assets. Expenses, particularly certain marketing and compensation expenses, as well as restructuring and asset impairment charges, vary depending on the level of demand for Intel’s products and the level of revenue and profits. The majority of Intel’s non-marketable equity investment portfolio balance is concentrated in companies in the flash memory market segment, and declines in this market segment or changes in management’s plans with respect to Intel’s investments in this market segment could result in significant impairment charges, impacting restructuring charges as well as gains/losses on equity investments and interest and other.
      Intel’s results could be affected by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Intel’s results could be affected by the timing of closing of acquisitions and divestitures. Intel’s results could be affected by adverse effects associated with product defects and errata (deviations from published specifications), and by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust and other issues, such as the litigation and regulatory matters described in Intel’s SEC reports. An unfavorable ruling could include monetary damages or an injunction prohibiting us from manufacturing or selling one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. A detailed discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the report on Form 10-Q for the quarter ended April 2, 2011. Rev. 5/9/11
    • Backup Slides
    • Establishing a Vision for Cloud Computing
      Drive new levels of IT agility through delivery of unified customer requirements for cloud computing, enabling secure & federated cloud services, agility of IT infrastructure, common management and policy for data center resources, and transparency in cloud service capability and metrics.