VCE Vblock Solution for Trusted Multi-Tenancy Design Guide

This design guide explains how service providers can use specific products in the compute, network, storage, security, and management component layers of Vblock Systems to support the six foundational elements of trusted multi-tenancy. By meeting these objectives, Vblock Systems offer service providers and enterprises an ideal business model and IT infrastructure to securely provision IaaS to multiple tenants.

This guide demonstrates processes for:
• Designing and managing Vblock Systems to deliver infrastructure multi-tenancy and service multi-tenancy
• Managing and operating Vblock Systems securely and reliably

The specific goal of this guide is to describe the design of and rationale behind the solution. The guide looks at each layer of the Vblock System and shows how to achieve trusted multi-tenancy at each layer. The design includes many issues that must be addressed prior to deployment, as no two environments are alike.
The target audience for this guide is highly technical, including technical consultants, professional services personnel, IT managers, infrastructure architects, partner engineers, sales engineers, and service providers deploying a trusted multi-tenancy environment with leading technologies from VCE.

Download this design guide to get the full details.


VBLOCK™ SOLUTION FOR TRUSTED MULTI-TENANCY: DESIGN GUIDE
Version 2.0
March 2013
www.vce.com
Copyright 2013 VCE Company, LLC. All Rights Reserved.

VCE believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." VCE MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Contents

Introduction
  About this guide
  Audience
  Scope
  Feedback

Trusted multi-tenancy foundational elements
  Secure separation
  Service assurance
  Security and compliance
  Availability and data protection
  Tenant management and control
  Service provider management and control

Technology overview
  Management
    Advanced Management Pod
    EMC Ionix Unified Infrastructure Manager/Provisioning
  Compute technologies
    Cisco Unified Computing System
    VMware vSphere
    VMware vCenter Server
    VMware vCloud Director
    VMware vCenter Chargeback
    VMware vShield
  Storage technologies
    EMC Fully Automated Storage Tiering
    EMC FAST Cache
    EMC PowerPath/VE
    EMC Unified Storage
    EMC Unisphere Management Suite
    EMC Unisphere Quality of Service Manager
  Network technologies
    Cisco Nexus 1000V Series
    Cisco Nexus 5000 Series
    Cisco Nexus 7000 Series
    Cisco MDS
    Cisco Data Center Network Manager
  Security technologies
    RSA Archer eGRC
    RSA enVision

Design framework
  End-to-end topology
    Virtual machine and cloud resources layer
    Virtual access layer/vSwitch
    Storage and SAN layer
    Compute layer
    Network layers
  Logical topology
    Tenant traffic flow representation
    VMware vSphere logical framework overview
  Logical design
    Cloud management cluster logical design
    vSphere cluster specifications
    Host logical design specifications for cloud management cluster
    Host logical configuration for resource groups
    VMware vSphere cluster host design specification for resource groups
    Security
  Tenant anatomy overview

Design considerations for management and orchestration
  Configuration
  Enabling services
    Creating a service offering
    Provisioning a service

Design considerations for compute
  Design considerations for secure separation
    Cisco UCS
    VMware vCloud Director
  Design considerations for service assurance
    Cisco UCS
    VMware vCloud Director
  Design considerations for security and compliance
    Cisco UCS
    VMware vCloud Director
    VMware vCenter Server
  Design considerations for availability and data protection
    Cisco UCS
    Virtualization
  Design considerations for tenant management and control
    VMware vCloud Director
  Design considerations for service provider management and control
    Virtualization

Design considerations for storage
  Design considerations for secure separation
    Segmentation by VSAN and zoning
    Separation of data at rest
    Address space separation
    Separation of data access
  Design considerations for service assurance
    Dedication of runtime resources
    Quality of service control
    EMC VNX FAST VP
    EMC FAST Cache
    EMC Unisphere Management Suite
    VMware vCloud Director
  Design considerations for security and compliance
    Authentication with LDAP or Active Directory
    VNX and RSA enVision
  Design considerations for availability and data protection
    High availability
    Local and remote data protection
  Design considerations for service provider management and control

Design considerations for networking
  Design considerations for secure separation
    VLANs
    Virtual routing and forwarding
    Virtual device context
    Access control list
  Design considerations for service assurance
  Design considerations for security and compliance
    Data center firewalls
    Services layer
    Cisco Application Control Engine
    Cisco Intrusion Prevention System
    Cisco ACE, Cisco ACE Web Application Firewall, Cisco IPS traffic flows
    Access layer
    Security recommendations
    Threats mitigated
    Vblock™ Systems security features
  Design considerations for availability and data protection
    Physical redundancy design consideration
  Design considerations for service provider management and control

Design considerations for additional security technologies
  Design considerations for secure separation
    RSA Archer eGRC
    RSA enVision
  Design considerations for service assurance
    RSA Archer eGRC
    RSA enVision
  Design considerations for security and compliance
    RSA Archer eGRC
    RSA enVision
  Design considerations for availability and data protection
    RSA Archer eGRC
    RSA enVision
  Design considerations for tenant management and control
    RSA Archer eGRC
    RSA enVision
  Design considerations for service provider management and control
    RSA Archer eGRC
    RSA enVision

Conclusion
Next steps
Acronym glossary
Introduction

The Vblock™ Solution for Trusted Multi-Tenancy (TMT) Design Guide describes how Vblock™ Systems allow enterprises and service providers to rapidly build virtualized data centers that support the unique challenges of provisioning Infrastructure as a Service (IaaS) to multiple tenants. The trusted multi-tenancy solution comprises six foundational elements that address the unique requirements of the IaaS cloud service model:

• Secure separation
• Service assurance
• Security and compliance
• Availability and data protection
• Tenant management and control
• Service provider management and control

The trusted multi-tenancy solution deploys compute, storage, network, security, and management Vblock System components that address each element while offering service providers and tenants numerous benefits. The following lists summarize these benefits.

Provider benefits:
• Lower cost-to-serve
• Standardized offerings
• Easier growth and scale using standard infrastructures
• More predictable planning around capacity and workloads

Tenant benefits:
• Cost savings transferred to tenants
• Faster incident resolution with standardized services
• Secure isolation of resources and data
• Usage-based services model, such as backup and storage

About this guide

This design guide explains how service providers can use specific products in the compute, network, storage, security, and management component layers of Vblock Systems to support the six foundational elements of trusted multi-tenancy. By meeting these objectives, Vblock Systems offer service providers and enterprises an ideal business model and IT infrastructure to securely provision IaaS to multiple tenants.

This guide demonstrates processes for:

• Designing and managing Vblock Systems to deliver infrastructure multi-tenancy and service multi-tenancy
• Managing and operating Vblock Systems securely and reliably
The specific goal of this guide is to describe the design of and rationale behind the solution. The guide looks at each layer of the Vblock System and shows how to achieve trusted multi-tenancy at each layer. The design includes many issues that must be addressed prior to deployment, as no two environments are alike.

Audience

The target audience for this guide is highly technical, including technical consultants, professional services personnel, IT managers, infrastructure architects, partner engineers, sales engineers, and service providers deploying a trusted multi-tenancy environment with leading technologies from VCE.

Scope

Trusted multi-tenancy can be used to offer dedicated IaaS (compute, storage, network, management, and virtualization resources) or to leverage single instances of services and applications for multiple consumers. This guide addresses only design considerations for offering dedicated IaaS to multiple tenants.

While this design guide describes how Vblock Systems can be designed, operated, and managed to support trusted multi-tenancy, it does not provide specific configuration information, which must be considered separately for each unique deployment.

In this guide, the terms "tenant" and "consumer" refer to the consumers of the services provided by a service provider.

Feedback

To suggest documentation changes and provide feedback on this paper, send email to docfeedback@vce.com. Include the title of this paper, the name of the topic to which your comment applies, and your feedback.
Trusted multi-tenancy foundational elements

The trusted multi-tenancy solution comprises six foundational elements that address the unique requirements of the IaaS cloud service model:

• Secure separation
• Service assurance
• Security and compliance
• Availability and data protection
• Tenant management and control
• Service provider management and control

Figure 1. Six elements of the Vblock Solution for Trusted Multi-Tenancy
Secure separation

Secure separation refers to the effective segmentation and isolation of tenants and their assets within the multi-tenant environment. Adequate secure separation ensures that when the service provider provisions new tenants, the resources of existing tenants remain untouched and the integrity of their applications, workloads, and data remains uncompromised. Each tenant might have access to different amounts of network, compute, and storage resources in the converged stack. The tenant sees only those resources allocated to them.

From the standpoint of the service provider, secure separation requires the systematic deployment of various security control mechanisms throughout the infrastructure to ensure the confidentiality, integrity, and availability of tenant data, services, and applications. The logical segmentation and isolation of tenant assets and information is essential for providing confidentiality in a multi-tenant environment. In fact, ensuring the privacy and security of each tenant becomes a key design requirement in the decision to adopt cloud services.

Service assurance

Service assurance plays a vital role in providing tenants with consistent, enforceable, and reliable service levels. Unlike physical resources, virtual resources are highly scalable and easy to allocate and reallocate on demand. In a multi-tenant virtualized environment, the service provider prioritizes virtual resources to accommodate the growth and changing business needs of tenants.

Service level agreements (SLA) define the level of service agreed to by the tenant and service provider. The service assurance element of trusted multi-tenancy provides technologies and methods to ensure that tenants receive the agreed-upon level of service. Various methods are available to deliver consistent SLAs across the network, compute, and storage components of the Vblock System, including:

• Quality of service in the Cisco Unified Computing System (UCS) and Cisco Nexus platforms (see the configuration sketch after this list)
• EMC Symmetrix Quality of Service tools
• EMC Unisphere Quality of Service Manager (UQM)
• VMware Distributed Resource Scheduler (DRS)

Without the correct mix of service assurance features and capabilities, it can be difficult to maintain uptime, throughput, quality of service, and availability SLAs.
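As a minimal illustration of the first method, quality of service on Cisco Nexus platforms is typically expressed with class maps and policy maps. The sketch below marks traffic for two hypothetical tenant classes; the class names, CoS values, qos-group numbers, and interface are illustrative assumptions, not values prescribed by this guide.

    ! Classify tenant traffic by CoS value (values are assumptions)
    class-map type qos match-all Tenant-Gold
      match cos 5
    class-map type qos match-all Tenant-Silver
      match cos 2
    ! Map each class to an internal qos-group for downstream queuing
    policy-map type qos Tenant-Marking
      class Tenant-Gold
        set qos-group 5
      class Tenant-Silver
        set qos-group 2
    ! Apply the policy to an ingress interface
    interface Ethernet1/1
      service-policy type qos input Tenant-Marking

A comparable system class and QoS policy would be defined in UCS Manager so that markings are honored end to end from the fabric interconnects through the Nexus switches.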
Security and compliance

Security and compliance refers to the confidentiality, integrity, and availability of each tenant's environment at every layer of the trusted multi-tenancy stack. Trusted multi-tenancy ensures security and compliance using technologies like identity management and access control, encryption and key management, firewalls, malware protection, and intrusion prevention. This is a primary concern for both service provider and tenant.

The trusted multi-tenancy solution ensures that all activities performed in the provisioning, configuration, and management of the multi-tenant environment, as well as day-to-day activities and events for individual tenants, are verified and continuously monitored. It is also important that all operational events are recorded and that these records are available as evidence during audits.

As regulatory requirements expand, the private cloud environment will become increasingly subject to security and compliance standards, such as the Payment Card Industry Data Security Standards (PCI-DSS), HIPAA, Sarbanes-Oxley (SOX), and the Gramm-Leach-Bliley Act (GLBA). With the proper tools, achieving and demonstrating compliance is not only possible, but can often be easier than in a non-virtualized environment.

Availability and data protection

Resources and data must be available for use by the tenant. High availability means that resources such as network bandwidth, memory, CPU, or data storage are always online and available to users when needed. Redundant systems, configurations, and architecture can minimize or eliminate points of failure that adversely affect availability to the tenant.

Data protection is a key ingredient in a resilient architecture. Cloud computing imposes a trade-off between resource cost and high performance; increasingly robust security and data classification requirements are an essential tool for balancing that equation. Enterprises need to know what data is important and where it is located as prerequisites to making performance cost-benefit decisions, as well as to ensuring focus on the most critical areas for data loss prevention procedures.

Tenant management and control

In every cloud services model there are elements of control that the service provider delegates to the tenant. The tenant's administrative, management, monitoring, and reporting capabilities need to be restricted to the delegated resources. Reasons for delegating control include convenience, new revenue opportunities, security, compliance, or tenant requirements. In all cases, the goal of the trusted multi-tenancy model is to allow for and simplify the management, visibility, and reporting of this delegation.
Tenants should have control over relevant portions of their service. Specifically, tenants should be able to:

• Provision allocated resources
• Manage the state of all virtualized objects
• View change management status for the infrastructure components
• Add and remove administrative contacts
• Request more services as needed

In addition, tenants taking advantage of data protection or data backup services should be able to manage this capability on their own, including setting schedules and backup types, initiating jobs, and running reports. This tenant-in-control model allows tenants to dynamically change the environment to suit their workloads as resource requirements change.

Service provider management and control

Another goal of trusted multi-tenancy is to simplify management of resources at every level of the infrastructure and to provide the functionality to provision, monitor, troubleshoot, and charge back the resources used by tenants. Management of multi-tenant environments comes with challenges, from reporting and alerting to capacity management and tenant control delegation. The Vblock System helps address these challenges by providing scalable, integrated management solutions inherent to the infrastructure, and a rich, fully developed application programming interface (API) stack for adding additional service provider value.

Providers of infrastructure services in a multi-tenant environment require comprehensive control and complete visibility of the shared infrastructure to provide the availability, data protection, security, and service levels expected by tenants. The ability to control, manage, and monitor resources at all levels of the infrastructure requires a dynamic, efficient, and flexible design that allows the service provider to access, provision, and then release computing resources from a shared pool – quickly, easily, and with minimal effort.
Technology overview

The Vblock System from VCE is the world's most advanced converged infrastructure—one that optimizes infrastructure, lowers costs, secures the environment, simplifies management, speeds deployment, and promotes innovation. The Vblock System is designed as one architecture that spans the entire portfolio, includes best-in-class components, offers a single point of contact from initiation through support, and provides the industry's most robust range of configurations.

Vblock Systems provide production-ready (fully tested) virtualized infrastructure components, including industry-leading technologies from Cisco, EMC, and VMware. Vblock Systems are designed and built to satisfy a broad range of specific customer implementation requirements. To design trusted multi-tenancy, you need to understand each layer (compute, network, and storage) of the Vblock System architecture. Figure 2 provides an example of Vblock System architecture.

Figure 2. Example of Vblock System architecture

Note: Cisco Nexus 7000 is not part of the Vblock System architecture.

This section describes the technologies at each layer of the Vblock System addressed in this guide to achieve trusted multi-tenancy.
Management

Management technologies include the Advanced Management Pod (AMP) and, optionally, EMC Ionix Unified Infrastructure Manager/Provisioning (UIM/P).

Advanced Management Pod

Vblock Systems include an AMP that provides a single management point for the Vblock System. It enables the following benefits:

• Monitors and manages Vblock System health, performance, and capacity
• Provides fault isolation for management
• Eliminates resource overhead on the Vblock System
• Provides a clear demarcation point for remote operations

Two versions of the AMP are available: a mini-AMP and a high-availability version (HA AMP). The high-availability AMP is recommended. For more information on the AMP, refer to the Vblock Systems Architecture Overview documentation located at www.vce.com/vblock.

EMC Ionix Unified Infrastructure Manager/Provisioning

EMC Ionix UIM/P can provide automated provisioning capabilities for the Vblock System in a trusted multi-tenancy environment by combining provisioning with configuration, change, and compliance management. With UIM/P, you can speed service delivery and reduce errors with policy-based, automated converged infrastructure provisioning. Key features include the ability to:

• Easily define and create infrastructure service profiles to match business requirements
• Separate planning from execution to optimize senior IT technical staff
• Respond to dynamic business needs with infrastructure service life cycle management
• Maintain Vblock System compliance through policy-based management
• Integrate with VMware vCenter and VMware vCloud Director for extended management capabilities
Compute technologies

Within the computing infrastructure of the Vblock System, multi-tenancy concerns must be addressed at multiple levels, including the UCS server infrastructure and the VMware vSphere hypervisor.

Cisco Unified Computing System

The Cisco UCS is a next-generation data center platform that unites network, compute, storage, and virtualization into a cohesive system designed to reduce total cost of ownership and increase business agility. The system integrates a low-latency, lossless, 10 Gb Ethernet (GbE) unified network fabric with enterprise-class x86 architecture servers. The system is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain. Whether it has only one server or many servers with thousands of virtual machines (VM), the Cisco UCS is managed as a single system, thereby decoupling scale from complexity.

Cisco UCS Manager provides unified, centralized, embedded management of all software and hardware components of the Cisco UCS across multiple chassis and thousands of virtual machines. The entire UCS is managed as a single logical entity through an intuitive graphical user interface (GUI), a command-line interface (CLI), or an XML API. UCS Manager delivers greater agility and scale for server operations while reducing complexity and risk. It provides flexible role- and policy-based management using service profiles and templates, and it facilitates processes based on IT Infrastructure Library (ITIL) concepts.

VMware vSphere

VMware vSphere is a complete, scalable, and powerful virtualization platform, delivering the infrastructure and application services that organizations need to transform their information technology and deliver IT as a service. VMware vSphere is a host operating system that runs directly on the Cisco UCS infrastructure and fully virtualizes the underlying hardware, allowing multiple virtual machine guest operating systems to share the UCS physical resources.

VMware vCenter Server

VMware vCenter Server is a simple and efficient way to manage VMware vSphere. It provides unified management of all the hosts and virtual machines in your data center from a single console, with aggregate performance monitoring of clusters, hosts, and virtual machines. VMware vCenter Server gives administrators deep insight into the status and configuration of clusters, hosts, virtual machines, storage, the guest operating system, and other critical components of a virtual infrastructure. It plays a key role in helping achieve secure separation, availability, tenant management and control, and service provider management and control.

VMware vCloud Director

VMware vCloud Director gives customers the ability to build secure private clouds that dramatically increase data center efficiency and business agility. With VMware vSphere, VMware vCloud Director delivers cloud computing for existing data centers by pooling virtual infrastructure resources and delivering them to users as catalog-based services.
VMware vCenter Chargeback

VMware vCenter Chargeback is an end-to-end metering and cost reporting solution for virtual environments that enables accurate cost measurement, analysis, and reporting of virtual machines using VMware vSphere. Virtual machine resource consumption data is collected from VMware vCenter Server. Integration with VMware vCloud Director also enables automated chargeback for private cloud environments.

VMware vShield

The VMware vShield family of security solutions provides virtualization-aware protection for virtual data centers and cloud environments. VMware vShield products strengthen application and data security, enable trusted multi-tenancy, improve visibility and control, and accelerate IT compliance efforts across the organization.

VMware vShield products include vShield App and vShield Edge. vShield App provides firewall capability between virtual machines by placing a firewall filter on every virtual network adapter, allowing easy application of firewall policies. vShield Edge virtualizes data center perimeters and offers firewall, VPN, Web load balancer, NAT, and DHCP services.

Storage technologies

The features of multi-tenancy offerings can be combined with standard security methods such as storage area network (SAN) zoning and Ethernet virtual local area networks (VLAN) to segregate, control, and manage storage resources among the infrastructure tenants.

EMC Fully Automated Storage Tiering

EMC Fully Automated Storage Tiering (FAST) automates the movement and placement of data across storage resources as needed. FAST enables continuous optimization of your applications by eliminating trade-offs between capacity and performance, while simultaneously lowering cost and delivering higher service levels.

EMC VNX FAST VP

EMC VNX FAST VP is a policy-based auto-tiering solution that efficiently utilizes storage tiers by moving slices of colder data to high-capacity disks. It increases performance by keeping hotter slices of data on performance drives. In a VMware vCloud environment, FAST VP enables providers to offer a blended storage offering, reducing the cost of a traditional single-type offering while allowing for a wider range of customer use cases. This helps accommodate a larger cross-section of virtual machines with different performance characteristics.
EMC FAST Cache

EMC FAST Cache is an industry-leading feature supported by Vblock Systems. It extends the EMC VNX array's read-write cache and ensures that unpredictable I/O spikes are serviced at enterprise flash drive (EFD) speeds, which is of particular benefit in a VMware vCloud Director environment. Multiple virtual machines on multiple virtual machine file system (VMFS) data stores spread across multiple hosts can generate a very random I/O pattern, placing stress on both the storage processors and the DRAM cache. FAST Cache, a standard feature on all Vblock Systems, mitigates the effects of this kind of I/O by extending the DRAM cache for reads and writes, increasing the overall cache performance of the array, improving I/O during usage spikes, and dramatically reducing the overall number of dirty pages and cache misses.

Because FAST Cache is aware of the EFD disk tiers available in the array, FAST VP and FAST Cache work together to improve array performance. Data that has been promoted to an EFD tier is never cached inside FAST Cache, ensuring that both options are leveraged in the most efficient way.

EMC PowerPath/VE

EMC PowerPath/VE delivers PowerPath multipathing features to optimize storage access in VMware vSphere virtual environments by removing the administrative overhead associated with load balancing and failover. Use PowerPath/VE to standardize path management across heterogeneous physical and virtual environments. PowerPath/VE enables you to automate optimal server, storage, and path utilization in a dynamic virtual environment.

PowerPath/VE works with VMware vSphere ESXi as a multipathing plug-in that provides enhanced path management capabilities to ESXi hosts. It installs as a kernel module on the vSphere host and plugs in to the vSphere I/O stack framework to bring the advanced multipathing capabilities of PowerPath (dynamic load balancing and automatic failover) to the VMware vSphere platform.

EMC Unified Storage

The EMC Unified Storage system is a highly available architecture capable of five nines availability. The Unified Storage arrays achieve five nines availability by eliminating single points of failure throughout the physical storage stack, using technologies such as dual-ported drives, hot spares, redundant back-end loops, redundant front-end and back-end ports, dual storage processors, redundant fans and power supplies, and cache battery backup.

EMC Unisphere Management Suite

EMC Unisphere provides a simple, integrated experience for managing EMC Unified Storage through both a storage and a VMware lens. Key features include a Web-based management interface to discover, monitor, and configure EMC Unified Storage; a self-service support ecosystem for quick access to real-time online support tools; automatic event notification to proactively manage critical status changes; and customizable dashboard views and reporting.
EMC Unisphere Quality of Service Manager

EMC Unisphere Quality of Service (QoS) Manager enables dynamic allocation of storage resources to meet service level requirements for critical applications. QoS Manager monitors storage system performance on an application-by-application basis, providing a logical view of application performance on the storage system. In addition to displaying real-time data, performance data can be archived for offline trending and analysis.

Network technologies

Multi-tenancy concerns must be addressed at multiple levels within the network infrastructure of the Vblock System. Various methods, including zoning and VLANs, can enforce network separation. Internet Protocol Security (IPsec) also provides application-independent network encryption at the IP layer for additional security.

Cisco Nexus 1000V Series

The Cisco Nexus 1000V is a software switch embedded in the software kernel of VMware vSphere. The Nexus 1000V provides virtual machine-level network visibility, isolation, and security for VMware server virtualization. With the Nexus 1000V Series, virtual machines can leverage the same network configuration, security policy, diagnostic tools, and operational models as their physical server counterparts attached to dedicated physical network ports. Virtualization administrators can access predefined network policies that follow mobile virtual machines to ensure proper connectivity, saving valuable resources for virtual machine administration. (A port-profile sketch appears at the end of this section.)

Cisco Nexus 5000 Series

Cisco Nexus 5000 Series switches are data center class, high-performance, standards-based Ethernet and Fibre Channel over Ethernet (FCoE) switches that enable the consolidation of LAN, SAN, and cluster network environments onto a single unified fabric.

Cisco Nexus 7000 Series

Cisco Nexus 7000 Series switches are modular switching systems designed for use in the data center. Nexus 7000 switches deliver the scalability, continuous systems operation, and transport flexibility required for 10 Gb/s Ethernet networks today. In addition, the system architecture is capable of supporting future 40 Gb/s Ethernet, 100 Gb/s Ethernet, and unified I/O modules.

Cisco MDS

The Cisco MDS 9000 Series helps build highly available, scalable storage networks with advanced security and unified management. The Cisco MDS 9000 family facilitates secure separation at the network layer with virtual storage area networks (VSAN) and zoning. VSANs help achieve higher security and greater stability in Fibre Channel (FC) fabrics by providing isolation among devices that are physically connected to the same fabric. The zoning service within a Fibre Channel fabric provides security between devices sharing the same fabric.
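To make the Nexus 1000V policy model concrete, the sketch below shows a minimal virtual Ethernet port profile of the kind a network administrator publishes and a vSphere administrator then attaches to virtual machine NICs. The profile name and VLAN number are hypothetical; they are not values specified by this guide.

    ! Nexus 1000V port profile, exported to vCenter as a port group
    port-profile type vethernet Tenant-A-Web
      vmware port-group
      switchport mode access
      switchport access vlan 100
      no shutdown
      state enabled

Because the profile follows a virtual machine during vMotion, the same VLAN membership and security policy apply wherever the workload runs.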
Cisco Data Center Network Manager

Cisco Data Center Network Manager provides an effective tool to manage the Cisco data center infrastructure and actively monitor the SAN and LAN.

Security technologies

RSA Archer eGRC and RSA enVision security technologies can be used to achieve security and compliance.

RSA Archer eGRC

The RSA Archer eGRC Platform for enterprise governance, risk, and compliance has the industry's most comprehensive library of policies, control standards, procedures, and assessments mapped to current global regulations and industry guidelines. The flexibility of the RSA Archer framework, coupled with this library, provides the service providers and tenants in a trusted multi-tenant environment the mechanism to successfully implement a governance, risk, and compliance program over the Vblock System. This addresses both the components and technologies comprising the Vblock System and the virtualized services and resources it hosts.

Organizations can deploy the RSA Archer eGRC Platform in a variety of configurations, based on the expected user load, utilization, and availability requirements. As business needs evolve, the environment can adapt and scale to meet the new demands. Regardless of the size and solution architecture, the RSA Archer eGRC Platform consists of three logical layers: a .NET Web-enabled interface, the application layer, and a Microsoft SQL database backend.

RSA enVision

The RSA enVision platform is a security information and event management (SIEM) solution that offers a scalable, distributed architecture to collect, store, manage, and correlate event logs generated from all the components comprising the Vblock System—from the physical devices and software products to the management and orchestration and security solutions. By seamlessly integrating with RSA Archer eGRC, RSA enVision provides both service providers and tenants a powerful solution to collect and correlate raw data into actionable information. Not only does RSA enVision satisfy regulatory compliance requirements, it helps ensure stability and integrity through robust incident management capabilities.
Design framework

This section provides the following information:

• End-to-end topology
• Logical topology
• Logical design details
• Overview of tenant anatomy

End-to-end topology

Secure separation creates trusted zones that shield each tenant's applications, virtual machines, compute, network, and storage from compromise and resource effects caused by adjacent tenants and external threats. The solution framework presented in this guide considers additional technologies that comprehensively provide appropriate defense in depth. A combination of protective, detective, and reactive controls and solid operational processes is required to deliver protection against internal and external threats. Key layers include:

• Virtual machine and cloud resources (VMware vSphere and VMware vCloud Director)
• Virtual access/vSwitch (Cisco Nexus 1000V)
• Storage and SAN (Cisco MDS and EMC storage)
• Compute (Cisco UCS)
• Access and aggregation (Nexus 5000 and Nexus 7000)

Figure 3 illustrates the design framework.
Figure 3. Trusted multi-tenancy design framework

Virtual machine and cloud resources layer

VMware vSphere and VMware vCloud Director are used in the cloud layer to accelerate the delivery and consumption of IT services while maintaining the security and control of the data center. VMware vCloud Director enables the consolidation of virtual infrastructure across multiple clusters, the encapsulation of application services as portable vApps, and the deployment of those services on demand with isolation and control.
Virtual access layer/vSwitch

The Cisco Nexus 1000V distributed virtual switch acts as the virtual network access layer for the virtual machines. Edge LAN policies such as quality of service marking and vNIC ACLs are implemented at this layer in Nexus 1000V port profiles. The virtual access layer comprises the following components:

• One data center
• One primary and one secondary Nexus 1000V Virtual Supervisor Module (VSM)
• VMware ESXi servers, each running an instance of the Nexus 1000V Virtual Ethernet Module (VEM)
• Per tenant, multiple virtual machines running different applications, such as Web server, database, and so forth

Storage and SAN layer

The trusted multi-tenancy design framework is based on the use of storage arrays supporting Fibre Channel connectivity. The storage arrays connect through MDS SAN switches to the UCS 6120 switches in the access layer. Several layers of security (including zoning, access controls at the guest operating system and ESXi level, and logical unit number (LUN) masking within the VNX) tightly control access to data on the storage system. (A VSAN and zoning sketch appears after this section.)

Compute layer

The following is an example of the components of a multi-tenant environment virtual compute farm.

Note: A Vblock System may have more resources than what is described here.

• Three UCS 5108 chassis, containing:
  - 11 UCS B200 servers (dual quad-core Intel Xeon X5570 CPUs at 2.93 GHz and 96 GB RAM)
  - Four UCS B440 servers (four Intel Xeon 7500 series processors and 32 dual in-line memory module slots with 256 GB memory)
  - 10 GbE Cisco VIC converged network adapters (CNA)
• 15 servers organized into VMware ESXi clusters (4 clusters):
  - Each server has two CNAs and is dual-attached to the UCS 6100 fabric interconnect
  - The CNAs provide LAN and SAN connectivity to the servers, which run the VMware ESXi 5.0 hypervisor, and LAN and SAN services to the hypervisor
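As noted under the storage and SAN layer above, tenant separation in the SAN is expressed on the MDS switches with VSANs and zones. The fragment below is a minimal illustration; the VSAN number, zone name, and world wide port names (WWPNs) are hypothetical placeholders, not values from this design.

    ! Create a VSAN for the tenant and assign a switch FC interface to it
    vsan database
      vsan 10 name Tenant-A
      vsan 10 interface fc1/1
    ! Zone a host initiator with a VNX storage-processor target port
    zone name TenantA-esx01-vnx vsan 10
      member pwwn 20:00:00:25:b5:aa:00:01
      member pwwn 50:06:01:60:3e:a0:00:01
    ! Add the zone to a zone set and activate it in the fabric
    zoneset name Fabric-A vsan 10
      member TenantA-esx01-vnx
    zoneset activate name Fabric-A vsan 10

Single-initiator, single-target zones of this kind, combined with LUN masking on the array, keep each tenant's hosts from seeing any other tenant's storage.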
Network layers

Access layer

The Nexus 5000 is used at the access layer and connects to the Cisco UCS 6120s. In the Layer 2 access layer, redundant pairs of Cisco UCS 6120 switches aggregate VLANs from the Nexus 1000V distributed virtual switch. FCoE SAN traffic from virtual machines is handed off as FC traffic to a pair of MDS SAN switches, and then to a pair of storage array controllers. FC expansion modules in the UCS 6120 switch provide SAN interconnects to dual SAN fabrics. The UCS 6120 switches run in N Port Virtualization (NPV) mode to interoperate with the SAN fabric.

Aggregation layer

The Nexus 7000 is used at the aggregation layer. The virtual device context (VDC) feature in the Nexus 7000 separates it into sub-aggregation and aggregation virtual device contexts for Layer 3 routing (see the sketch below). The aggregation virtual device context connects to the core network to route the internal data center traffic to the Internet and from the Internet back to the internal data center.
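As an illustration of how a single Nexus 7000 is carved into the contexts described above, the fragment below creates aggregation and sub-aggregation VDCs from the default VDC and assigns physical ports to each. VDC names and interface ranges are hypothetical assumptions.

    ! From the default VDC: define two additional device contexts
    vdc aggregation id 2
      allocate interface Ethernet1/1-8
    vdc sub-aggregation id 3
      allocate interface Ethernet1/9-16
    ! Each VDC is then configured independently, for example:
    ! switchto vdc aggregation

Each VDC maintains its own configuration, routing tables, and administrators, so a fault or misconfiguration in one context does not spill into the other.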
Logical topology

Figure 4 shows the logical topology for the trusted multi-tenancy design framework.

Figure 4. Trusted multi-tenancy logical topology
The logical topology represents the virtual components and virtual connections that exist within the physical topology. The following describes the topology components.

Nexus 7000:
• Virtualized aggregation layer switch that provides redundant paths to the Nexus 5000 access layer. Virtual port channel provides a logically loopless topology with convergence times based on EtherChannel.
• Creates three virtual device contexts (VDC): a WAN edge virtual device context, a sub-aggregation virtual device context, and an aggregation virtual device context.
• The sub-aggregation virtual device context connects to the Nexus 5000 and to the aggregation virtual device context by virtual port channel.

Nexus 5000:
• Unified access layer switch that provides 10 GbE IP connectivity between the Vblock System and the outside world.
• In a unified storage configuration, the switches also connect the fabric interconnects in the compute layer to the data movers in the storage layer.
• The switches also provide connectivity to the AMP.

Two UCS 6120 fabric interconnects:
• Provide a robust compute layer platform. Virtual port channel provides a topology with redundant chassis, cards, and links with the Nexus 5000 and Nexus 7000 (see the vPC sketch below).
• Each connects to one MDS 9148 to form its own fabric. Four 4 Gb/s FC links connect each UCS 6120 to its MDS 9148.
• The MDS 9148 switches connect to the storage controllers. In this example, the storage array has two controllers, and each MDS 9148 has two connections to each FC storage controller. These dual connections provide redundancy: if an FC controller fails, the MDS 9148 is not isolated.
• Connect to the Nexus 5000 access switch through EtherChannel with dual 10 GbE.

Three UCS chassis:
• Each chassis is populated with blade servers and Fabric Extenders for redundancy or aggregation of bandwidth.

UCS blade servers:
• Connect to the SAN fabric through the Cisco UCS 6120XP fabric interconnect, which uses an 8-port 8 Gb/s Fibre Channel expansion module to access the SAN.
• Connect to the LAN through the Cisco UCS 6120XP fabric interconnects. These ports require SFP+ adapters. The server ports of the fabric interconnects can operate at 10 Gb/s, and the Fibre Channel ports can operate at 2/4/8 Gb/s.

EMC VNX storage:
• Connects to the fabric interconnect with 8 Gb/s Fibre Channel for block.
• Connects to the Nexus 5000 access switch through EtherChannel with dual 10 GbE for file.
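The virtual port channel (vPC) behavior referenced above can be illustrated with a minimal Nexus configuration sketch. The domain ID, keepalive addresses, and port-channel numbers below are hypothetical.

    ! Enable vPC and define the domain on each peer switch
    feature vpc
    vpc domain 10
      peer-keepalive destination 192.168.1.2 source 192.168.1.1
    ! Port channel carrying the peer link between the two switches
    interface port-channel 1
      switchport mode trunk
      vpc peer-link
    ! Downstream port channel (for example, toward a UCS 6120) appears
    ! as one logical link even though it spans both peer switches
    interface port-channel 20
      switchport mode trunk
      vpc 20

Because the downstream device sees a single EtherChannel, both links forward traffic with no spanning-tree blocked ports, which is what gives the logically loopless topology described above.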
Tenant traffic flow representation

Figure 5 depicts the traffic flow through each layer of the solution, from the virtual machine level to the storage layer.

Figure 5. Tenant traffic flow
Traffic flow in the data center is classified into the following categories:

• Front-end—User to data center, Web, GUI
• Back-end—Within the data center, multi-tier application, storage, backup
• Management—Virtual machine access, application administration, monitoring, and so forth

Note: Front-end traffic, also called client-to-server traffic, traverses the Nexus 7000 aggregation layer and a select number of network-based services.

At the application layer, each tenant may have multiple vApps with applications, and different virtual machines for different workloads. The Cisco Nexus 1000V distributed virtual switch acts as the virtual access layer for the virtual machines. Edge LAN policies, such as quality of service marking and vNIC ACLs, can be implemented at the Nexus 1000V. Each ESXi server becomes a virtual Ethernet blade of the Nexus 1000V, called a Virtual Ethernet Module (VEM). Each vNIC connects to the Nexus 1000V through a port group; each port group specifies one or more VLANs used by a virtual machine NIC. The port group can also specify other network attributes, such as rate limit and port security.

The VM uplink port profile forwards VLANs belonging to virtual machines. The system uplink port profile forwards VLANs belonging to management traffic. The virtual machine traffic for different tenants traverses the network through different uplink port profiles, where port security, rate limiting, and quality of service apply to guarantee secure separation and assurance. (An uplink port-profile sketch appears below.) VMware vSphere virtual machine NICs are associated to the Cisco Nexus 1000V to be used as the uplinks. The network interface virtualization capabilities of the Cisco adapter enable a VMware multi-NIC design on a server that has two 10 Gb physical interfaces, with complete quality of service, bandwidth sharing, and VLAN portability among the virtual adapters. vShield Edge controls all network traffic to and from the virtual data center and helps provide an abstraction of the separation in the cloud environment.

Virtual machine traffic goes through the UCS FEX (I/O module) to the 6120 fabric interconnect. If the traffic is aligned to use storage resources and is intended to use FC storage, it passes over an FC port on the fabric interconnect and the Cisco MDS, to the storage array, and through a storage processor, to reach the specific storage pool or storage groups. For example, if a tenant is using a dedicated storage resource with specific disks inside a storage array, traffic is routed to the assigned LUN with a dedicated storage group, RAID group, and disks. If there is NFS traffic, it passes over a network port on the fabric interconnect and the Cisco Nexus 5000, through a virtual port channel to the storage array, and over a data mover, to reach the NFS data store. The NFS export LUN is tagged with a VLAN to ensure security and isolation with a dedicated storage group, RAID group, and disks. Figure 5 shows an example of a few dedicated tenant storage resources. However, if the storage is designed for a shared traffic pool, traffic is routed to a specific storage pool to pull resources.
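The uplink port profiles mentioned above are defined on the Nexus 1000V as Ethernet-type profiles. The following minimal sketch, with hypothetical names and VLAN ranges, shows a VM uplink that forwards tenant VLANs and pins traffic to fabric uplinks by MAC address.

    ! Uplink profile applied to the physical adapters of each ESXi host
    port-profile type ethernet VM-Uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100-109
      ! Pin vNICs to uplinks without a port channel on the upstream switch
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled

A parallel system uplink profile would carry the management, control, and packet VLANs, keeping tenant data VLANs and management VLANs on separate uplinks.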
For example, if using dedicated servers for each tenant, VLANs are assigned per tenant and carried over the dot1Q trunk to the aggregation layer of the Nexus 7000, where each tenant is mapped to a Virtual Routing and Forwarding (VRF) instance. Traffic is then routed to the external network over the core.
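The per-tenant paths above can be summarized programmatically. The following is a minimal illustrative sketch, not a real API: the component names mirror the text, and the tenant names and VLAN IDs are invented examples.

```python
# Illustrative model of the per-tenant traffic paths described above.
# Component names, tenants, and VLAN IDs are examples, not a real API.

from dataclasses import dataclass

FC_PATH = ["UCS FEX", "Fabric Interconnect (FC port)", "Cisco MDS",
           "Storage Processor", "Storage Pool"]
NFS_PATH = ["UCS FEX", "Fabric Interconnect (Eth port)", "Nexus 5000 (vPC)",
            "Data Mover", "NFS Datastore"]
FRONT_END = ["Nexus 1000V", "UCS FEX", "Fabric Interconnect", "Nexus 5000",
             "Nexus 7000 (tenant VRF)", "Core"]

@dataclass
class TenantFlow:
    tenant: str
    vlan: int
    kind: str  # "fc", "nfs", or "front-end"

    def path(self) -> list[str]:
        return {"fc": FC_PATH, "nfs": NFS_PATH, "front-end": FRONT_END}[self.kind]

for flow in [TenantFlow("Orange", 100, "fc"),
             TenantFlow("Vanilla", 200, "nfs"),
             TenantFlow("Grape", 300, "front-end")]:
    print(f"{flow.tenant} (VLAN {flow.vlan}): " + " -> ".join(flow.path()))
```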
VMware vSphere logical framework overview

Figure 6 shows the virtual VMware vSphere layer on top of the physical server infrastructure.

Figure 6. vSphere logical framework

The diagram shows blade server technology with three chassis initially dedicated to the VMware vCloud environment. The physical design represents the networking and storage connectivity from the blade chassis to the fabric and SAN, as well as the physical networking infrastructure. (Connectivity between the blade servers and the chassis switching is different and is not shown here.) Two chassis are initially populated with eight blades each for the cloud resource clusters, with an even distribution between the two chassis of blades belonging to each resource cluster.

In this scenario, VMware vSphere resources are organized and separated into management and resource clusters, with three resource groups (Gold, Silver, and Bronze). Figure 7 illustrates the management cluster and resource groups.
Figure 7. Management cluster and resource groups

Cloud management clusters

A cloud management cluster contains all core components and services needed to run the cloud. A resource group, or "compute cluster," represents dedicated resources for cloud consumption. It is best to use a separate management cluster outside the Vblock System resources. Each resource group is a cluster of VMware ESXi hosts managed by a VMware vCenter Server and under the control of VMware vCloud Director. VMware vCloud Director can manage the resources of multiple resource groups (compute clusters).

Cloud management components

The following components run as minimum-requirement virtual machines on the management cluster hosts:

Component | Number of virtual machines
vCenter Server | 1
vCenter Database | 1
vCenter Update Manager | 1
vCenter Update Manager Database | 1
vCloud Director Cells | 2 (for multi-cell)
vCloud Director Database | 1
vCenter Chargeback Server | 1
vCenter Chargeback Database | 1
vShield Manager | 1

Note: A vCloud Director cluster contains one or more vCloud Director servers; these servers are referred to as cells and form the basis of the VMware cloud. A cloud can be formed from multiple cells. The number of vCloud Director cells depends on the size of the vCloud environment and the level of redundancy required.

Figure 8 highlights the cloud management cluster.

Figure 8. Cloud management cluster

Resources allocated for cloud use have little overhead reserved. For example, cloud resource groups would not host vCenter management virtual machines. Best practices encourage separating the cloud management cluster from the cloud resource group(s) in order to:
• Facilitate quicker troubleshooting and problem resolution. Management components are strictly contained in a specified, manageable management cluster.
• Keep cloud management components separate from the resources they are managing.
• Consistently and transparently manage and carve up resource groups.
• Provide an additional layer of high availability and redundancy for the trusted multi-tenancy infrastructure.
Resource groups

A resource group is a set of resources dedicated to user workloads and managed by a VMware vCenter Server. vCloud Director manages the resources of all attached resource groups within vCenter Servers. All cloud-provisioning tasks are initiated through VMware vCloud Director and passed down to the appropriate vCenter Server instance. Figure 9 highlights cloud resource groups.

Figure 9. Cloud resource groups

Provisioning resources in standardized groupings promotes a consistent approach for scaling vCloud environments. For a consistent workload experience, place each resource group on a separate resource cluster. The resource group design represents three VMware vSphere High Availability (HA) / Distributed Resource Scheduler (DRS) clusters and the infrastructure used to run the vApps that are provisioned and managed by VMware vCloud Director.
Logical design

This section provides information about the logical design, including:
• Cloud management cluster logical design
• VMware vSphere cluster specifications
• Host logical design specifications
• Host logical configurations for resource groups
• VMware vSphere cluster host design specifications for resource groups
• Security

Cloud management cluster logical design

The compute design encompasses the VMware ESXi hosts contained in the management cluster. Specifications are listed below.

Attribute | Specification
Number of ESXi hosts | 3
vSphere datacenter | 1
VMware Distributed Resource Scheduler configuration | Fully automated
VMware High Availability (HA) Enable Host Monitoring | Yes
VMware HA Admission Control Policy | Cluster tolerates 1 host failure (percentage based)
VMware HA percentage | 67%
VMware HA Admission Control Response | Prevent virtual machines from being powered on if they violate availability constraints
VMware HA Default VM Restart Priority | N/A
VMware HA Host Isolation Response | Leave virtual machine powered on
VMware HA Enable VM Monitoring | Yes
VMware HA VM Monitoring Sensitivity | Medium

Note: In this section, the scope is limited to the Vblock System supporting the management component workloads.
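The percentage-based admission control value follows directly from the cluster size: reserving capacity for one host failure in an N-host cluster leaves roughly (N-1)/N of cluster resources available. A minimal sketch of the arithmetic follows; the 83% figure used later for the resource group clusters implies roughly six hosts per cluster, which is an inference rather than a stated host count.

```python
# Percentage-based HA admission control: with capacity reserved for one
# host failure, the usable share of an N-host cluster is (N - 1) / N.

def ha_usable_percentage(num_hosts: int, host_failures_tolerated: int = 1) -> int:
    usable = (num_hosts - host_failures_tolerated) / num_hosts
    return round(usable * 100)

print(ha_usable_percentage(3))  # 67 -> management cluster (3 ESXi hosts)
print(ha_usable_percentage(6))  # 83 -> resource clusters (assuming 6 hosts each)
```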
vSphere cluster specifications

Each VMware ESXi host in the management cluster has the following specifications.

Attribute | Specification
Host type and version | VMware ESXi Installable – version 5.0
Processors | x86 compatible
Storage presented | SAN boot for ESXi – 20 GB; SAN LUN for virtual machines – 2 TB; NFS shared LUN for vCloud Director cells – 1 TB
Networking | Connectivity to all needed VLANs
Memory | Sized to support all management virtual machines; in this case, 96 GB of memory in each host

Note: VMware vCloud Director deployment requires storage for several elements of the overall framework. The first is the storage needed to house the vCloud Director management cluster, including the repository for configuration information, organizations, and allocations stored in an Oracle database. The second is the vSphere storage objects presented to vCloud Director as data stores accessed by ESXi servers in the vCloud Director configuration; this storage is managed by the vSphere administrator and consumed by vCloud Director users, depending on vCloud Director configuration. The third is a single NFS data store serving as a staging area for vApps being uploaded to a catalog.

Host logical design specifications for cloud management cluster

The following table identifies management components that rely on high availability and fault tolerance for redundancy.

Management component | High availability enabled?
vCenter Server | Yes
vCloud Director | Yes
vCenter Chargeback Server | Yes
vShield Manager | Yes
Host logical configuration for resource groups

The following table identifies the specifications for each VMware ESXi host in the resource cluster.

Attribute | Specification
Host type and version | VMware ESXi Installable – version 5.0
Processors | x86 compatible
Storage presented | SAN boot for ESXi – 20 GB; SAN LUN for virtual machines – 2 TB
Networking | Connectivity to all needed VLANs
Memory | Sized to support virtual machine workloads

VMware vSphere cluster host design specification for resource groups

All VMware vSphere resource clusters are configured similarly, with the following specifications.

Attribute | Specification
VMware Distributed Resource Scheduler configuration | Fully automated
VMware Distributed Resource Scheduler Migration Threshold | 3 stars
VMware High Availability (HA) Enable Host Monitoring | Yes
VMware HA Admission Control Policy | Cluster tolerates 1 host failure (percentage based)
VMware HA percentage | 83%
VMware HA Admission Control Response | Prevent virtual machines from being powered on if they violate availability constraints
VMware HA Default VM Restart Priority | N/A
VMware HA Host Isolation Response | Leave virtual machine powered on
Security

The RSA Archer eGRC Platform can run on a single server, with the application and database components on the same server. This configuration is suitable for organizations:
• With fewer than 50 concurrent users
• That do not require a high-performance or high-availability solution

For the trusted multi-tenancy framework, RSA enVision can be deployed as a virtual appliance in the AMP. Each Vblock System component can be configured to use it as the centralized event manager through its identified collection method. RSA enVision can then be integrated with RSA Archer eGRC per the RSA Security Incident Management Solution configuration guidelines.

Tenant anatomy overview

This design guide uses three tenants as examples: Orange (tenant 1), Vanilla (tenant 2), and Grape (tenant 3). All tenants share the same infrastructure and resources. Each tenant has its own virtual compute, network, and storage resources. Resources are allocated to each tenant based on its business model, requirements, and priorities. Traffic between tenants is restricted, separated, and protected in the trusted multi-tenancy environment.

Figure 10. Trusted multi-tenancy tenant anatomy
In this design guide (and associated configurations), three levels of service are provided in the cloud: Bronze, Silver, and Gold. These tiers define service levels for compute, storage, and network performance. The following table provides sample network and data differentiations by service tier.

Attribute | Bronze | Silver | Gold
Services | No additional services | Firewall services | Firewall and load-balancing services
Bandwidth | 20% | 30% | 40%
Segmentation | One VLAN per client, single Virtual Routing and Forwarding (VRF) instance | Multiple VLANs per client, single VRF | Multiple VLANs per client, single VRF
Data Protection | None | Snap – virtual copy (local site) | Clone – mirror copy (local site)
Disaster Recovery | None | Remote replication (with specific recovery point objective (RPO) / recovery time objective (RTO)) | Remote replication (any-point-in-time recovery)

Using this tiered model, you can:
• Offer service tiers with well-defined and distinct SLAs
• Support customer segmentation based on desired service levels and functionality
• Allow for differentiated application support based on service tiers
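The tier definitions above lend themselves to a simple data structure. The following sketch encodes the sample table; the field names, the 10 GbE link speed, and the helper function are illustrative assumptions, not product defaults.

```python
# Illustrative encoding of the sample service tiers; values mirror the
# table above, and the 10 GbE link speed is an assumed example.

from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceTier:
    name: str
    bandwidth_share: float   # fraction of available bandwidth
    firewall: bool
    load_balancing: bool
    data_protection: str
    disaster_recovery: str

TIERS = {
    "Bronze": ServiceTier("Bronze", 0.20, False, False, "None", "None"),
    "Silver": ServiceTier("Silver", 0.30, True, False,
                          "Snap (local)", "Remote replication (RPO/RTO)"),
    "Gold":   ServiceTier("Gold", 0.40, True, True,
                          "Clone (local)", "Remote replication (any point in time)"),
}

def tenant_bandwidth_mbps(tier_name: str, link_mbps: int = 10_000) -> float:
    """Bandwidth guarantee for a tier on an assumed 10 GbE link."""
    return TIERS[tier_name].bandwidth_share * link_mbps

print(tenant_bandwidth_mbps("Gold"))  # 4000.0 Mbps of a 10 GbE link
```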
Design considerations for management and orchestration

Service providers can leverage Unified Infrastructure Manager/Provisioning (UIM/P) to provision the Vblock System in a trusted multi-tenancy environment. The AMP cluster of hosts runs UIM/P, which is accessed through a Web browser. Use UIM/P as a domain manager to provision Vblock Systems as a single entity. UIM/P interacts with the individual element managers for compute, storage, SAN, and virtualization to automate the most common and repetitive operational tasks required to provision services. It also interacts with VMware vCloud Director to automate cloud operations, such as the creation of a virtual data center.

For provisioning, this guide focuses on the functional capabilities provided by UIM/P in a trusted multi-tenancy environment. As shown in Figure 11, the UIM/P dashboard gives service provider administrators a quick summary of available infrastructure resources. This eliminates the need to perform manual discovery and documentation, thereby reducing the time it takes to begin deploying resources. Once administrators have resource availability information, they can begin to provision existing service offerings or create new ones.

Figure 11. UIM/P dashboard
Figure 12. UIM/P service offerings
Configuration

While UIM/P automates the operational tasks involved in building services on Vblock Systems, administrators need to perform initial task sets on each domain manager before beginning service provisioning. This section describes both the key initial tasks to perform on the individual domain managers and the operational tasks managed through UIM/P. The following table shows what is configured as part of initial device configuration and what is configured through UIM/P.

Device manager | Initial configuration | Operational configuration completed with UIM/P
UCS Manager | Management configuration (IP and credentials); chassis discovery; enable ports; KVM IP pool; VSANs; LAN: MAC pool; SAN: WWN and WWPN pools; server: UUID pool; boot policies; service templates | Create VLANs; assign VLANs; select pools; select boot policy; create service profile; associate profile to server; install vSphere ESXi
Unisphere | Management configuration (IP and credentials); RAID group, storage pool, or both | Create LUNs; create storage group; associate host and LUN
MDS/Nexus | Management configuration (IP and credentials) | Zones: aliases, zone sets
vCenter | Create Windows virtual machine; create database; install vCenter software; create data center | Create clusters; high availability policy; DRS policy; distributed power management (DPM) policy; add hosts to cluster; create data stores; create networks
Enabling services

After completing the initial configurations, use the following high-level workflow to enable services.

Stage | Workflow action | Description
1 | Vblock System discovery | Gather data for Vblock System devices, interconnectivity, and external networks, and populate the data in the UIM database.
2 | Service planning | Collect service resource requirements, including: the number of servers and server attributes; the amount of boot and data storage and storage attributes; the networks used for connectivity between the service resources and external networks; vCenter Server and ESXi cluster information.
3 | Service provisioning | Reserve resources based on the server and storage requirements defined for the service during service planning. Install ESXi on the servers. Configure connectivity between the cluster and external networks.
4 | Service activation | Turn on the system, start up Cisco UCS service profiles, activate network paths, and make resources available for use. The workflow separates provisioning and activation, to allow activation of the service as needed.
5 | vCenter synchronization | Synchronize the ESXi clusters with the vCenter Server. Once you provision and activate a service, the synchronization process includes adding the ESXi cluster to the vCenter Server data store and registering the provisioned cluster hosts with vCenter Server.
6 | vCloud synchronization | Discover vCloud and build a connection to the vCenter servers. The clusters created in vCenter Server are pushed to the appropriate vCloud. UIM/P integrates with vCloud Director in the same way it integrates with vCenter Server.
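The key design point in this workflow is that provisioning and activation are deliberately separate stages. The following is a minimal sketch of that ordering constraint; the state names mirror the table, and the class and method bodies are placeholders rather than UIM/P API calls.

```python
# Minimal model of the UIM/P enablement workflow; the ordering constraint
# (activate only after provisioning, synchronize only after activation)
# mirrors the table above. Bodies are placeholders, not UIM/P APIs.

from enum import Enum, auto

class ServiceState(Enum):
    PLANNED = auto()
    PROVISIONED = auto()
    ACTIVATED = auto()
    SYNCHRONIZED = auto()

class Service:
    def __init__(self, name: str):
        self.name = name
        self.state = ServiceState.PLANNED

    def provision(self):
        # Reserve blades and storage, install ESXi, configure connectivity.
        self.state = ServiceState.PROVISIONED

    def activate(self):
        # Separate step: start UCS service profiles, activate network paths.
        if self.state is not ServiceState.PROVISIONED:
            raise RuntimeError("activate only a provisioned service")
        self.state = ServiceState.ACTIVATED

    def synchronize(self):
        # Register clusters with vCenter Server, then push to vCloud Director.
        if self.state is not ServiceState.ACTIVATED:
            raise RuntimeError("synchronize only an activated service")
        self.state = ServiceState.SYNCHRONIZED

svc = Service("tenant-orange-gold")
svc.provision(); svc.activate(); svc.synchronize()
print(svc.name, svc.state.name)  # tenant-orange-gold SYNCHRONIZED
```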
Figure 13 describes the provisioning, activation, and synchronization process, including key sub-steps of the provisioning process.

Figure 13. Provisioning, activation, and synchronization process flow

Creating a service offering

To create a service offering (see the sketch after the provisioning steps below):
1. Select the operating system.
2. Define server characteristics.
3. Define storage characteristics for startup.
4. Define storage characteristics for application data.
5. Create the network profile.

Provisioning a service

To provision a service:
1. Select the service offering.
2. Select the Vblock System.
3. Select servers.
4. Configure IP and provide a DNS hostname for operating system installation.
5. Select storage.
6. Select and configure the network profile and vNICs.
7. Configure vCenter cluster settings.
8. Configure vCloud Director settings.
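The two checklists map naturally onto two data shapes: a reusable offering and a per-deployment request. The sketch below is hypothetical; all field names, hostnames, and values are invented for illustration and do not reflect UIM/P's actual object model.

```python
# Hypothetical data shapes for a UIM/P-style service offering and a
# provisioning request; field names and values are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ServiceOffering:
    operating_system: str            # step 1
    server_count: int                # step 2
    server_memory_gb: int
    boot_storage_gb: int             # step 3
    data_storage_gb: int             # step 4
    network_profile: list[str] = field(default_factory=list)  # step 5

@dataclass
class ProvisionRequest:
    offering: ServiceOffering
    vblock: str
    servers: list[str]
    dns_hostname: str
    management_ip: str

gold_esx = ServiceOffering("VMware ESXi 5.0", 4, 96, 20, 2048, ["orange-vlan-100"])
req = ProvisionRequest(gold_esx, "Vblock-01", ["chassis1/blade1", "chassis1/blade2"],
                       "esx-orange-01.example.com", "10.1.1.21")
print(req.offering.operating_system, "->", req.vblock)
```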
Design considerations for compute

Within the computing infrastructure of Vblock Systems, multi-tenancy concerns can be managed at multiple levels, from the central processing unit (CPU), through the Cisco Unified Computing System (UCS) server infrastructure, and within the VMware solution elements. This section describes the design of and rationale behind the trusted multi-tenancy framework at the compute layer. The design includes many issues that must be addressed prior to deployment, as no two environments are alike. Design considerations are provided for the components listed in the following table.

Component | Version | Description
Cisco UCS | 2.0 | Core component of the Vblock System that provides compute resources in the cloud. It helps achieve secure separation, service assurance, security, availability, and service provider management in the trusted multi-tenancy framework.
VMware vSphere | 5.0 | Foundation of the underlying cloud infrastructure and components. Includes: VMware ESXi hosts; VMware vCenter Server; resource pools; VMware High Availability and Distributed Resource Scheduler; VMware vMotion.
VMware vCloud Director | 1.5 | Builds on VMware vSphere to provide a complete multi-tenant infrastructure. It delivers on-demand cloud infrastructure so users can consume virtual resources with maximum agility, and it consolidates data centers and deploys workloads on shared infrastructure with built-in security and role-based access control. Includes: VMware vCloud Director Server (two instances, each installed on a Red Hat Linux virtual machine and referred to as a "cell"); VMware vCloud Director Database (one instance per clustered set of VMware vCloud Director cells).
VMware vShield | 5.0 | Provides network security services, including NAT and firewall. Includes: vShield Edge (deployed automatically on hosts as virtual appliances by VMware vCloud Director to separate tenants); vShield App (deployed at the ESXi host layer to zone and secure virtual machine traffic); vShield Manager (one instance per vCenter Server in the cloud resource groups to manage vShield Edge and vShield App).
VMware vCenter Chargeback | 1.6.2 | Provides resource metering and chargeback models. Includes: VMware vCenter Chargeback Server; VMware Chargeback Data Collector; VMware vCloud Data Collector; VMware vShield Manager Data Collector.
Design considerations for secure separation

This section discusses using the following technologies to achieve secure separation at the compute layer:
• Cisco UCS
• VMware vCloud Director

Cisco UCS

The UCS blade servers contain a pair of Cisco Virtual Interface Card (VIC) Ethernet uplinks. The Cisco VIC presents virtual interfaces (UCS vNICs) to the VMware ESXi host, which allow for further traffic segmentation and categorization across all traffic types based on vNIC network policies. Using port aggregation between the fabric interconnect vNIC pairs enhances the availability and capacity of each traffic category. All inbound traffic is stripped of its VLAN header and switched to the appropriate destination's virtual Ethernet interface. In addition, the Cisco VIC allows for the creation of multiple virtual host bus adapters (vHBAs), permitting FC-enabled startup across the same physical infrastructure.

Each VMware virtual interface type, VMkernel, and individual virtual machine interface connects directly to the Cisco Nexus 1000V software distributed virtual switch. At this layer, packets are tagged with the appropriate VLAN header, and all outbound traffic is aggregated to the two Cisco fabric interconnects.

This section describes the high-level UCS features that help achieve secure separation in the trusted multi-tenancy framework:
• UCS service profiles
• UCS organizations
• VLAN considerations
• VSAN considerations

UCS service profiles

Use UCS service profiles to ensure secure separation at the compute layer. Hardware can be presented in a stateless manner that is completely transparent to the operating system and the applications that run on it. A service profile creates a hardware overlay that contains specific information sensitive to the operating system:
• MAC addresses
• WWN values
• UUID
• BIOS
• Firmware versions
In a multi-tenant environment, the service provider can define a service profile giving access to any server in a predefined server resource pool with specific processor, memory, or other administrator-defined characteristics. The service provider can then provision one or more servers through service profiles, which can be used for an organization or a tenant. Service profiles are particularly useful when deployed with UCS Role-Based Access Control (RBAC), which provides granular administrative access control to UCS system resources based on administrative roles in a service provider environment.

Servers instantiated by service profiles start up from a LUN tied to the specified WWPN, allowing an installed operating system instance to be locked to the service profile. This independence from server hardware allows installed systems to be redeployed between blades. Through the use of pools and templates, UCS hardware can be quickly deployed and scaled. (A minimal sketch of pool-backed service profile identities follows the table below.)

The trusted multi-tenancy framework uses three distinct server roles to segregate and classify UCS blade servers. This helps identify and associate specific service profiles depending on their purpose and policy. The following table describes these roles.

Role | Description
Management | These servers can be associated with a service profile that is meant only for cloud management or any type of service provider infrastructure workload.
Dedicated | These servers can be associated with different service profiles, server pools, and roles with VLAN policy; for example, a specific tenant VLAN allowed access to servers that are meant only for specific tenants. The trusted multi-tenancy framework considers tenants who strongly want a dedicated UCS cluster to further segregate workloads in the virtualization layer as needed. It also considers tenants who want dedicated workload throughput from the underlying compute infrastructure, which maps to the VMware Distributed Resource Scheduler cluster.
Mixed | These servers can be associated with a different service profile meant for shared resource clusters for the VMware Distributed Resource Scheduler cluster. Depending on tenant requirements, UCS can be designed to use dedicated or shared compute resources. The trusted multi-tenancy framework uses mixed servers for shared resource clusters as an example. These servers can be spread across the UCS fabric to minimize the impact of a single point of failure or a single chassis failure.
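As referenced above, here is a minimal sketch of the stateless-identity idea behind service profiles: MAC, WWPN, and UUID values are drawn from pools and bound to the profile rather than to the blade. The pool prefixes and profile name are invented examples, not recommended values.

```python
# Illustrative model of stateless service-profile identities drawn from
# pools (MAC/WWPN/UUID); pool prefixes and ranges are invented examples.

import itertools
import uuid

def mac_pool(prefix: str = "00:25:B5:01:00"):
    # Cisco UCS MAC pools conventionally begin with 00:25:B5; the suffix
    # range here is arbitrary.
    for i in itertools.count(1):
        yield f"{prefix}:{i:02X}"

def wwpn_pool(prefix: str = "20:00:00:25:B5:01:00"):
    for i in itertools.count(1):
        yield f"{prefix}:{i:02X}"

macs, wwpns = mac_pool(), wwpn_pool()

def new_service_profile(name: str) -> dict:
    """Bind pool-derived identities to a profile, not to physical hardware."""
    return {"name": name, "mac": next(macs), "wwpn": next(wwpns),
            "uuid": str(uuid.uuid4())}

profile = new_service_profile("orange-esxi-01")
print(profile)  # identities move with the profile if it is re-associated
```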
Figure 14 shows an example of how the three server roles are designed in the trusted multi-tenancy framework.

Figure 14. Trusted multi-tenancy framework server design
Figure 15 shows an example of three tenants (Orange, Vanilla, and Grape) using three service profiles on three different physical blades to ensure secure separation at the blade level.

Figure 15. Secure separation at the blade level
UCS organizations

The Cisco UCS organizations feature supports multi-tenancy by logically segmenting physical system resources. Organizations are logically isolated in the UCS fabric. UCS hardware and policies can be assigned to different organizations so that the appropriate tenant or organizational unit can access the assigned compute resources. A rich set of policies in UCS can be applied per organization to ensure that the right sets of attributes and I/O policies are assigned to the correct organization. Each organization can have its own pool of resources, including:
• Resource pools (server, MAC, UUID, WWPN, and so forth)
• Policies
• Service profiles
• Service profile templates

UCS organizations are hierarchical. Root is the top-level organization. System-wide policies and pools in root are available to all organizations in the system. Any policies and pools created in other organizations are available only to organizations below them in the same hierarchy. The functional isolation provided by UCS is helpful in a multi-tenant environment. Use the UCS features of RBAC and locales (a UCS feature to isolate tenant compute resources) on top of organizations to assign or restrict user privileges and roles by organization. (A minimal sketch of this hierarchical policy resolution follows Figure 16.)

Figure 16 shows the hierarchical organization of UCS clusters starting from root. It shows three types of cluster configurations (Management, Dedicated, and Mixed). Below these are the three tenants (Orange, Vanilla, and Grape) with their service levels (Gold, Silver, and Bronze).

Figure 16. UCS cluster hierarchical organization
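As referenced above, the sketch below models the visibility rule: a policy defined in an organization is visible to that organization and its descendants, with root policies visible everywhere. The organization and policy names follow the Figure 16 example but are otherwise invented; this is not the UCS Manager API.

```python
# Minimal sketch of UCS-style hierarchical policy resolution: a policy
# defined in an organization is visible to that org and its descendants.

class Org:
    def __init__(self, name: str, parent: "Org | None" = None):
        self.name, self.parent, self.policies = name, parent, {}

    def resolve(self, policy: str):
        """Walk up toward root; the nearest definition wins."""
        org = self
        while org is not None:
            if policy in org.policies:
                return org.policies[policy]
            org = org.parent
        raise KeyError(f"{policy!r} not visible from {self.name}")

root = Org("root")
dedicated = Org("Dedicated", parent=root)
orange = Org("Orange", parent=dedicated)
vanilla = Org("Vanilla", parent=Org("Mixed", parent=root))

root.policies["bios"] = "baseline-bios"   # visible to every organization
orange.policies["qos"] = "gold-qos"       # visible only within the Orange subtree

print(orange.resolve("qos"))    # gold-qos
print(orange.resolve("bios"))   # baseline-bios (inherited from root)
print(vanilla.resolve("bios"))  # baseline-bios; vanilla.resolve("qos") would fail
```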
UCS allows the creation of resource pools to ensure secure separation between tenants. Use the following:
• LAN resources: IP pool, MAC pool, VLAN pool
• Management resources: KVM address pool, VLAN pool
• SAN resources: WWN address pool, VSANs
• Identity resources: UUID pool
• Compute resources: server pools

Figure 17 illustrates how creating separate resource pools for the three tenants helps with secure separation at the compute layer.

Figure 17. Resource pools
Figure 18 is an example of a UCS service profile workflow for the three tenants.

Figure 18. UCS service profile workflow

VLAN considerations

In Cisco UCS, a named VLAN creates a connection to a specific management LAN or tenant-specific VLAN. The VLAN isolates traffic, including broadcast traffic, to that external LAN. The name assigned to a VLAN ID adds a layer of abstraction that you can use to globally update all servers associated with service profiles using the named VLAN. You do not need to reconfigure servers individually to maintain communication with the external LAN. For example, if a service provider wants to isolate a group of compute clusters for a specific tenant, that tenant's VLAN needs to be allowed in the tenant's service profile. This provides another layer of abstraction in secure separation.
To illustrate, if Tenant Orange has dedicated UCS blades, it is recommended to allow only Tenant Orange-specific VLANs on those blades, ensuring that only Tenant Orange has access to them. Figure 19 shows a dedicated service profile for Tenant Orange that uses a vNIC template named Orange; only Tenant Orange VLANs are allowed to use that specific vNIC template. However, a global vNIC template can still be used for all blades, providing the ability to allow or disallow specific VLANs from updating service profile templates.

Figure 19. Dedicated service profile for Tenant Orange

VSAN considerations in UCS

A named VSAN creates a connection to a specific external SAN. The VSAN isolates traffic, including broadcast traffic, to that external SAN. The traffic on one named VSAN knows that the traffic on another named VSAN exists, but it cannot read or access that traffic. The name assigned to a VSAN ID adds a layer of abstraction that allows you to globally update all servers associated with service profiles that use the named VSAN. You do not need to individually reconfigure servers to maintain communication with the external SAN. You can create more than one named VSAN with the same VSAN ID. In a cluster configuration, a named VSAN is configured to be accessible only to the FC uplinks on both fabric interconnects.
Figure 20 shows VSAN 10 and VSAN 11 configured in the UCS SAN cloud and uplinked to FC ports.

Figure 20. VSAN configuration in UCS

Figure 21 shows how an FC port is assigned to a VSAN ID in UCS. In this case, uplink FC Port 1 is assigned to VSAN 10.

Figure 21. Assigning a VSAN to FC ports
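The value of named VLANs and VSANs is the level of indirection they add: service profiles reference names, so remapping a name's ID updates every profile at once. A minimal sketch of that indirection follows; the names and IDs are invented examples, not a UCS API.

```python
# Minimal sketch of the indirection that named VLANs/VSANs provide:
# service profiles reference names, so remapping a name's ID updates
# every profile at once. Names and IDs are invented examples.

named_vlans = {"orange-app": 100, "vanilla-app": 200}
named_vsans = {"orange-fc": 10, "vanilla-fc": 11}

service_profiles = {
    "orange-esxi-01": {"vlans": ["orange-app"], "vsan": "orange-fc"},
    "vanilla-esxi-01": {"vlans": ["vanilla-app"], "vsan": "vanilla-fc"},
}

def effective_ids(profile: str) -> dict:
    p = service_profiles[profile]
    return {"vlan_ids": [named_vlans[v] for v in p["vlans"]],
            "vsan_id": named_vsans[p["vsan"]]}

named_vlans["orange-app"] = 110          # one change...
print(effective_ids("orange-esxi-01"))   # ...updates every profile using the name
```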
VMware vCloud Director

VMware vCloud Director introduces logical constructs to facilitate multi-tenancy and provide interoperability between vCloud instances built to the vCloud API standard. VMware vCloud Director helps administer tenants—such as a business unit, organization, or division—by policy. In the trusted multi-tenancy framework, each organization has isolated virtual resources, independent LDAP-based authentication, specific policy controls, and unique catalogs.

To ensure secure separation in a trusted multi-tenancy environment where multiple organizations share Vblock System resources, the framework includes VMware vCloud Director along with VMware vShield perimeter protection, port-level firewalling, and NAT and DHCP services. Figure 22 shows the logical separation of organizations in VMware vCloud Director.

Figure 22. Organization separation
A service provider may want to view all tenants or organizations in vCloud Director in one place to manage them easily. Figure 23 shows the service provider's tenant view in VMware vCloud Director.

Figure 23. Tenant view in vCloud Director

Organizations are the unit of multi-tenancy within vCloud Director. They represent a single logical security boundary. Each organization contains a collection of users, computing resources, catalogs, and vApp workloads. Organization users can be local users or imported from an LDAP server. LDAP integration can be specific to an organization, or it can leverage an organizational unit within the system LDAP configuration, as defined by the vCloud system administrator. The name of the organization, specified at creation time, maps to a unique URL that allows access to the GUI for that organization. For example, Figure 24 shows that Tenant Orange maps to a specific default organization URL. Each tenant accesses its resources using its own URL and authentication.

Figure 24. Organization unique identifier URL
The vCloud Director network model provides an extra layer of separation. vCloud Director has three different types of networking, each with a specific purpose:
• External network
• Organization network
• vApp network

External network

The external network is the connection to the outside world. An external network always needs a port group, meaning that a port group must be available within VMware vSphere and the distributed switch. Tenants commonly require direct connections from inside the vCloud environment into the service provider networking backbone. This is analogous to extending a wire from the network switch containing the network or VLAN to be used, all the way through the vCloud layers into the vApp. Each organization in the trusted multi-tenancy environment has an internal organization network and a directly connected external organization network.

Organization network

An organization network provides network connectivity to vApp workloads within an organization. Users in an organization have no visibility into external networks and connect to outside networks through external organization networks. This is analogous to users in an organization connecting to a corporate network that is uplinked to a service provider for Internet access. The following table lists connectivity options for organization networks.

Network type | Connectivity
External organization | Direct connection
External organization | NAT/routed
Internal organization | Isolated

A directly connected external organization network places the vApp virtual machines in the port group of the external network. IP address assignments for vApps follow the external network IP addressing. Internal and routed external organization networks are instantiated through network pools by vCloud system administrators. Organization administrators do not have the ability to provision organization networks, but they can configure network services such as firewall, NAT, DHCP, VPN, and static routing.

Note: An organization network is meant only for the intra-organization network and is specific to a single organization.
Figure 25 shows an example of an internal and external network configuration.

Figure 25. Internal and external organization networks

Service providers provision organization networks using network pools. Figure 26 shows the service provider's administrator view of the organization networks.

Figure 26. Administrator view of organization networks
vApp network

A vApp network is similar to an organization network. It is meant for the vApp-internal network and acts as a boundary for isolating specific virtual machines within a vApp. A vApp network is an isolated segment created for a particular application stack within an organization's network. It enables multi-tier applications to communicate with each other and, at the same time, isolates the intra-vApp traffic from other applications within the organization. The resources to create the isolation are managed by the organization administrator and allocated from a pool provided by the vCloud administrator. Figure 27 shows a vApp configuration for Tenant Grape.

Figure 27. Micro-segmentation of virtual workloads

Network pools

All three network classes can be backed by the virtual network features of the Nexus 1000V. It is important to understand the relationship between the virtual networking features of the Nexus 1000V and the classes of networks defined and implemented in a vCloud Director environment. Typically, a network class (specifically, an organization or vApp network) is described as being backed by an allocation of isolated networks. For an organization administrator to create an isolated vApp network, the administrator must have a free isolation resource to consume in order to provide that isolated network for the vApp.
To deploy an organization or vApp network, you need a network pool in vCloud Director. Network pools contain network definitions used to instantiate private/routed organization and vApp networks. Networks created from network pools are isolated at Layer 2. You can create three types of network pools in vCloud Director, as shown in the following table.

Network pool type | Description
vSphere port group backed | Network pools are backed by pre-provisioned port groups in the Cisco Nexus 1000V or a VMware distributed switch.
VLAN backed | A range of pre-provisioned VLAN IDs backs the network pool. This assumes all specified VLANs are trunked.
vCloud Director network isolation backed | Network pools are backed by vCloud isolated networks: overlay networks, uniquely identified by a fence ID, implemented through encapsulation techniques that span hosts and provide traffic isolation from other networks. This type requires a distributed switch; vCloud Director creates port groups automatically on distributed switches as needed.

Figure 28 shows how network pool types are presented in VMware vCloud Director.

Figure 28. Network pools
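The pool mechanics are the same regardless of backing type: each isolated network consumes one pre-provisioned resource from the pool. Below is a minimal sketch for the VLAN-backed case; the class, network names, and VLAN range are invented for illustration and do not represent the vCloud Director API.

```python
# Minimal sketch of a VLAN-backed network pool: each isolated org/vApp
# network consumes one pre-provisioned, trunked VLAN ID from the pool.
# The VLAN range and network names are invented examples.

class VlanBackedPool:
    def __init__(self, vlan_ids: range):
        self.free = list(vlan_ids)
        self.allocated: dict[str, int] = {}

    def create_isolated_network(self, name: str) -> int:
        if not self.free:
            raise RuntimeError("network pool exhausted; extend the VLAN range")
        vlan = self.free.pop(0)
        self.allocated[name] = vlan
        return vlan

pool = VlanBackedPool(range(1200, 1300))  # 100 trunked VLANs reserved for the pool
print(pool.create_isolated_network("grape-vapp-net-01"))  # 1200
print(pool.create_isolated_network("grape-vapp-net-02"))  # 1201
```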
Each pool type has specific requirements, limitations, and recommendations. The trusted multi-tenancy framework uses a port group backed network pool with a Cisco Nexus 1000V distributed switch. Each port group is isolated on its own VLAN ID. Each tenant network is associated with its own network pool, each backed by a set of port groups.

VMware vCloud Director automatically deploys vShield Edge devices to facilitate routed network connections. vShield Edge uses MAC encapsulation for NAT routing, which helps prevent Layer 2 network information from being seen by other organizations in the environment. vShield Edge also provides a firewall service that can be configured to block inbound traffic to virtual machines connected to a public access organization network.

Design considerations for service assurance

This section discusses using the following technologies to achieve service assurance at the compute layer:
• Cisco UCS
• VMware vCloud Director

Cisco UCS

The following UCS features support service assurance:
• Quality of service
• Port channels
• Server pools
• Redundant UCS fabrics

Compute, storage, and network resources need to be categorized in order to provide a differentiated service model for a multi-tenant environment. The following table shows an example of Gold, Silver, and Bronze service levels for compute resources.

Level | Compute resource
Gold | UCS B440 blades
Silver | UCS B200 and B440 blades
Bronze | UCS B200 blades

System classes in the UCS specify the bandwidth allocated to traffic types across the entire system. Each system class reserves a specific segment of the bandwidth for a specific type of traffic. Using quality of service policies, the UCS assigns a system class to outgoing traffic, matching a quality of service policy to the class of service (CoS) value marked by the Nexus 1000V Series switch for each virtual machine. UCS quality of service configuration can help achieve service assurance for multiple tenants. A best practice to ensure guaranteed quality of service throughout a multi-tenant environment is to configure quality of service for the different service levels on the UCS.
Figure 29 shows different quality of service weight values configured for different class of service values that correspond to the Gold, Silver, and Bronze service levels. This helps ensure traffic priority for tenants associated with those service levels.

Figure 29. Quality of service configuration

Quality of service policies assign a system class to the outgoing traffic for a vNIC or vHBA. Therefore, to configure the vNIC or vHBA, include a quality of service policy in a vNIC or vHBA policy, and then include that policy in a service profile. Figure 30 shows how to create quality of service policies.

Figure 30. Creating a quality of service policy
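Under congestion, weighted system classes share the link in proportion to their weights. The sketch below illustrates the arithmetic; the specific weight values are invented examples and do not correspond to the configuration shown in Figure 29.

```python
# Minimal sketch of UCS-style weighted bandwidth sharing: under congestion,
# each system class receives weight / sum(weights) of the link. The weights
# below are invented examples for the Gold/Silver/Bronze classes.

weights = {"gold": 9, "silver": 6, "bronze": 3, "best-effort": 2}

def bandwidth_share(link_gbps: float = 10.0) -> dict[str, float]:
    total = sum(weights.values())
    return {cls: round(link_gbps * w / total, 2) for cls, w in weights.items()}

print(bandwidth_share())
# {'gold': 4.5, 'silver': 3.0, 'bronze': 1.5, 'best-effort': 1.0}
```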
VMware vCloud Director

VMware vCloud Director provides several allocation models to achieve service levels in the trusted multi-tenancy framework. An organization virtual data center allocates resources from a provider virtual data center and makes them available for use by a given organization. Multiple organization virtual data centers can draw from the same provider virtual data center, and one organization can have multiple organization virtual data centers. Resources are taken from a provider virtual data center and allocated to an organization virtual data center using one of three resource allocation models, as shown in the following table.

Model | Description
Pay as you go | Resources are reserved and committed for vApps only as vApps are created. There is no upfront reservation of resources.
Allocation | A baseline amount (guarantee) of resources from the provider virtual data center is reserved for the organization virtual data center's exclusive use. An additional percentage of resources is available to oversubscribe CPU and memory, but this taps into compute resources shared by other organization virtual data centers drawing from the provider virtual data center.
Reservation | All resources assigned to the organization virtual data center are reserved exclusively for the organization virtual data center's use.

With all of the above models, the organization can be allowed to deploy an unlimited or a limited number of virtual machines. In selecting the appropriate allocation model, consider the service definition and the organization's use-case workloads. Although all tenants use the shared infrastructure, the resources for each tenant are guaranteed based on the allocation model in place. The service provider can set the parameters for CPU, memory, storage, and network for each tenant's organization virtual data center, as shown in Figure 31, Figure 32, and Figure 33. A worked sketch of the Allocation model follows the figure captions below.
Figure 31. Organization virtual data center allocation configuration

Figure 32. Organization virtual data center storage allocation
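As referenced above, the Allocation model splits an organization virtual data center's allocation into a guaranteed baseline and an oversubscribed remainder that contends with other organization virtual data centers. The numbers below are invented for illustration.

```python
# Worked sketch of the "Allocation" model: a guaranteed baseline is carved
# out of the provider virtual data center, and the remainder up to the
# allocation is oversubscribed shared capacity. Numbers are invented.

def allocation_model(allocated_ghz: float, guarantee_pct: float) -> dict[str, float]:
    reserved = allocated_ghz * guarantee_pct / 100  # exclusive to this org vDC
    burst = allocated_ghz - reserved                # contends with other org vDCs
    return {"reserved_ghz": reserved, "oversubscribed_ghz": burst}

# An org vDC allocated 40 GHz of CPU with a 75% guarantee:
print(allocation_model(40.0, 75.0))
# {'reserved_ghz': 30.0, 'oversubscribed_ghz': 10.0}
```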
