What's Next in the Path to the Cloud?

From server huggers to cloud addicts to the optimized data center network - learn more about Juniper Networks' data center solutions here: http://juni.pr/SSlpP2c


From Server Huggers to Cloud Addicts to the Optimized Data Center Network

From Server Huggers to Cloud Addicts

Once upon a time, IT needed to convince application owners to virtualize their servers instead of hanging on to a dedicated physical server. Now, many IT shops have won that battle: Virtual Machines (VMs) have become the de facto standard, and the old "server hugger" application owners are increasingly sold on the benefits of server virtualization.

Figure 1: Survey results: Company benefits from past virtualization projects. The chart, "The End of Server Huggers: Key Benefits of Past Server Virtualization Projects," shows the share of respondents citing hardware savings (CapEx), energy savings (OpEx), easy maintenance, business operational efficiency, flexibility and speed of rolling out new services, or none of the above. (Source: IDG CIO Virtualization Quickpoll, sponsored by Juniper Networks, Nov. 2011; base: 138 qualified respondents.)

With the availability of new IT infrastructures and cloud services that are faster than ever before, a new set of expectations around speed, agility, and time to market has been established. Today's "cloud addict" application owners expect instant provisioning of compute, storage, and network resources, and business managers increasingly cringe at the possibility of infrastructure constraints.
Figure 2: Survey results: Anticipated impacts of future virtualization projects. The chart, "Cloud Addicts Dictate Success Attributes for Future Virtualization Projects," shows the anticipated change in importance (significant increase, slight increase, no change, slight decrease, significant decrease) for energy savings (OpEx), hardware savings (CapEx), easy maintenance, business operational efficiency, and flexibility and speed of rolling out new services. (Source: IDG CIO Virtualization Quickpoll, sponsored by Juniper Networks, Nov. 2011; base: 138 qualified respondents.)

For infrastructure architects and managers, the next phase of virtualization will require balancing the workflow demands of cloud addicts while continuing to migrate legacy applications and services onto a virtualized infrastructure that is optimized for efficiency. As seen in Figure 2, while business agility determines the success of future virtualization projects, improving efficiency and reducing maintenance burdens are also rising in importance.

The Security and Management of Cloud Addicts

If left unchecked, cloud addicts (or even regular IT staff) have been known to spin up virtual machines with wild abandon, causing VM sprawl. Or they may create VMs with unique configurations rife with security vulnerabilities and compliance violations. Meanwhile, VM-to-VM communications can bypass the long-trusted physical firewall located multiple network hops away. Finally, as the importance of the applications in a virtualized environment rises, the old security models need to be revisited to ensure that only infrastructure resources are shared.

Key questions to ask:
  • As more important applications are virtualized, how will you evolve the security model to protect business-critical applications?
  • Can IT staff or others create virtual machines with inconsistent and/or vulnerable configurations?
  • Can you secure VM-to-VM traffic without constraining the number of virtual machines per host or the performance of the applications?

The Cloud Addict Push Toward Shared Compute: Time to Optimize

After the first several rounds of server consolidation, many VMs rarely move off their original hosts or require human intervention. In this mode of operation, compute resources are used more efficiently and flexibly than dedicated servers, but they cannot be used as a shared and common pool of resources for other applications that need them. Even if the notion of a shared pool of compute resources is too aggressive, the tendency for application owners to overestimate the resources used by an application can lead to significant overprovisioning as the percentage of virtualized workloads approaches 100%.
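One pragmatic way to quantify that overprovisioning is to compare what application owners requested with what their VMs actually consume over time. The following Python sketch is illustrative only: the VM names, requested vCPU counts, utilization samples, and headroom factor are hypothetical, and a real deployment would pull the data from the hypervisor's monitoring interface (for example, vCenter or libvirt) rather than hard-coding it.

```python
"""Illustrative right-sizing check for over-provisioned VMs.

All inventory and utilization figures below are hypothetical; in practice
they would come from the hypervisor's monitoring API.
"""

from statistics import quantiles

# (requested vCPUs, hourly CPU samples as a fraction of requested capacity)
VM_INVENTORY = {
    "erp-app-01":   (8,  [0.05, 0.07, 0.06, 0.12, 0.09, 0.08, 0.11, 0.06]),
    "web-front-02": (4,  [0.55, 0.61, 0.72, 0.68, 0.80, 0.74, 0.66, 0.70]),
    "batch-etl-03": (16, [0.02, 0.03, 0.90, 0.95, 0.04, 0.02, 0.88, 0.03]),
}


def p95(samples):
    """95th percentile of the observed samples."""
    return quantiles(samples, n=20)[-1]


def right_size_report(inventory, headroom=1.3):
    """Suggest a vCPU count that covers the 95th-percentile load plus headroom."""
    for vm, (vcpus, samples) in inventory.items():
        peak_fraction = p95(samples)
        suggested = max(1, round(vcpus * peak_fraction * headroom))
        if suggested < vcpus:
            print(f"{vm}: requested {vcpus} vCPUs, p95 load {peak_fraction:.0%} "
                  f"of capacity -> {suggested} vCPUs would likely suffice")
        else:
            print(f"{vm}: allocation looks appropriate")


if __name__ == "__main__":
    right_size_report(VM_INVENTORY)
```

Sampled over weeks rather than a handful of hours, the same comparison separates genuinely bursty workloads (which keep their allocation) from the estimates that were simply padded.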
Key questions to ask:
  • How quickly can VMs be moved in your data center?
  • Can VMs be moved without threatening the performance or availability of all applications?
  • Can you measure the resources used by applications over time to determine the true needs versus the estimated needs of the application set?
  • Do you need to build your infrastructure to handle peak loads, or can it be built for average loads, with cloud services absorbing the peaks?

Table 1: Summary of Data Center Changes and the Impact on the Network

IT infrastructure availability
  • Change: Old: delivered by IT. New: globally resilient, obtained via cloud service and/or delivered by IT from multiple data centers.
  • Network impact: Old: static connections inside and between data centers. New: dynamic connections inside and between data centers.

Application architectures
  • Change: Old: client/server. New: distributed components combined into services.
  • Network impact: Old: data center networks optimized for client/server (north-south) traffic. New: data center networks optimized for server-to-server and server-to-storage (east-west) traffic.

Compute
  • Change: Old: one physical server per app. New: one virtual machine per app on shared physical compute.
  • Network impact: Old: each application's compute and data stays physically in one place. New: virtual machines can move around the data center based on demand.

Storage
  • Change: Old: physical storage dedicated to a single application (direct attach) or a small number of applications (storage area network). New: virtualized storage and master data initiatives dictate common pools of data available to all applications.
  • Network impact: Old: separate and dedicated storage networks with access governed by cabling. New: data physically connected to every app, with access governed by policy; requires low-latency, low-jitter networks to minimize the network distance between processor and disk.

Security and network services
  • Change: Old: the "castle with a drawbridge" model, which places a large appliance dedicated to a particular service between the router and the server. New: the "hotel model," with multiple layers of security; applications housed in rooms with different resources (penthouse vs. ballroom, etc.).
  • Network impact: Old: network appliances stacked in a conga line specific to particular network segments. New: a pool of network and security services available to all traffic flows, combined with firewall services built to gain visibility and security for VM-to-VM traffic without constraining app performance or compute efficiencies.

As we have seen, location matters in a legacy network. Resources that need each other have to be placed next to each other, and the physical location of a virtualized application resource inside a data center can have a significant impact on application performance, security, and the agility of distributed applications. Components of an application are also optimally placed in the same vicinity, ideally behind a single switch. In the future, service-oriented architectures will require that all assets in the data center have connectivity to all other assets.
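Because physical distance is not a reliable proxy for network distance, it helps to measure distance directly between application components. The sketch below is a minimal, generic latency probe rather than a Juniper tool: the endpoint addresses, ports, and the one-millisecond "one-hop" budget are placeholder assumptions, and TCP connect time stands in for a proper measurement.

```python
"""Minimal latency probe between application components.

The endpoints, ports, and 1 ms budget are placeholder assumptions; TCP
connect time is used as a rough stand-in for true network distance.
"""

import socket
import time

# Placeholder application components (host, port); replace with real endpoints.
PEER_COMPONENTS = {
    "app-tier": ("10.0.10.54", 8080),
    "db-tier":  ("10.0.20.17", 5432),
    "cache":    ("10.0.10.60", 6379),
}

ONE_HOP_BUDGET_MS = 1.0  # assumed budget for components behind a single switch


def connect_latency_ms(host, port, timeout=2.0):
    """Return TCP connect time in milliseconds, or None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None


def probe_peers(peers, budget_ms=ONE_HOP_BUDGET_MS):
    """Report which peers look local (within budget) and which look distant."""
    for name, (host, port) in peers.items():
        latency = connect_latency_ms(host, port)
        if latency is None:
            print(f"{name}: unreachable from this probe point")
        elif latency > budget_ms:
            print(f"{name}: {latency:.2f} ms, above the one-hop budget")
        else:
            print(f"{name}: {latency:.2f} ms, within the one-hop budget")


if __name__ == "__main__":
    probe_peers(PEER_COMPONENTS)
```

Run from one tier toward its peers and repeated over time, a probe like this gives a quick read on whether latency, and its variance, stays predictable as workloads move around.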
Larger Bubbles Are Needed

When application assets are placed behind a single switch, the communication between application components is fast and, more importantly, predictable and consistent (see the left side of Figure 3). Sometimes this happens by design; sometimes it is happenstance due to project-based infrastructure build-out. As you can see in the right side of Figure 3, when an application component (generally a VM) is placed farther away from the other components, latency increases along with the unpredictability of the traffic as other applications come barreling through the network. Unfortunately, physical distance is not always the best indicator of network distance.

Figure 3: Building a larger bubble for optimal performance. Left: application assets behind a single switch, one hop apart, deliver optimal performance. Right: network distance between VMs introduces unpredictability and latency.

Traditional Security Models Fail

Traditional security models don't work well in a post-virtualization world. Like network connectivity, security architectures were built for largely static IT infrastructures. Every year, another threat seems to arise, and the conga line of appliances sits at the drawbridge of the castle, filtering, inspecting, and blocking traffic as it enters and exits the data center. These appliances cast a shadow across the network as a whole, if they can even keep up, or across branches of the network where a particular security service is needed.
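One alternative to the perimeter-only model is to attach zone-based rules to the workloads themselves and evaluate east-west flows against them wherever the VMs happen to run. The sketch below is purely conceptual and does not describe any Juniper product; the zones, rules, and sample flows are hypothetical.

```python
"""Conceptual zone-based check for VM-to-VM (east-west) flows.

Zones, rules, and flows are hypothetical; a perimeter firewall would
never see these flows, which is the blind spot described above.
"""

# Security zone assigned to each VM (hypothetical inventory).
VM_ZONE = {
    "web-01": "dmz",
    "app-01": "app",
    "db-01":  "data",
}

# Allowed (source zone, destination zone, destination port) tuples.
ZONE_POLICY = {
    ("dmz", "app", 8080),
    ("app", "data", 5432),
}


def allowed(src_vm, dst_vm, dst_port):
    """Return True if the flow matches an allowed zone-to-zone rule."""
    rule = (VM_ZONE.get(src_vm), VM_ZONE.get(dst_vm), dst_port)
    return rule in ZONE_POLICY


# Sample east-west flows observed on the virtual switch (hypothetical).
FLOWS = [
    ("web-01", "app-01", 8080),   # expected tier-to-tier path
    ("web-01", "db-01", 5432),    # web tier talking straight to the database
]

for src, dst, port in FLOWS:
    verdict = "permit" if allowed(src, dst, port) else "deny (violates zone policy)"
    print(f"{src} -> {dst}:{port}  {verdict}")
```

The point of the exercise is that the decision can be made next to the virtual switch, so a flow between two VMs on the same host never has to detour through a perimeter appliance to be inspected.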
The Infrastructure Evolution Imperative

As with the changes in applications, compute, and storage, infrastructure architects expect to get better economics and more agility, efficiency, and performance from their infrastructure investments. Cloud and virtualization technologies are driving the first major change in data center network architectures in 20 years. This means the number of options from incumbent vendors and start-ups is exploding, while the decisions data center architects and network engineers make will dictate the future capabilities of their data center operations.

Managing IT Agility: Choosing data center network architectures, and the software and hardware that enable those architectures, has become a critical task, and a number of challenges come with the scale of the environment, the capacity and speeds of server and storage equipment, and future growth needs. There are two architectural options for data centers that seek to optimize the network for virtualized compute, distributed applications, and scale-out needs such as big data clusters; the best option depends on the scale and capacity of the environment.

Managing Security: The quandary for any security architecture is how to reduce risk without inhibiting the agility of the business, while ensuring that the costs of security don't increase faster than the growth in the types of threats that need to be mitigated. With products like virtual gateways, data center managers gain the visibility and tools needed to enforce policy on both physical and virtual flows, and they can create secure zones that are enforced on all flows in a mixed traditional and VMware environment.

Simplifying the Network: Data center managers don't just want a status quo network or a cheap network; they want a high-performing network built for a virtualized data center environment. New network architectures can improve application performance while reducing the network's footprint and complexity. They can be optimized for server-to-server and server-to-storage traffic, and they also enable the use of network virtualization to replace multiple dedicated data center-to-data center links with fewer wide area connections.

Figure 4: Shadows create blind spots in a post-virtualization world. Left: appliance and VLAN "shadows" worked pre-virtualization, when apps were static. Right: post-virtualization, traditional security models fail; VM-to-VM traffic sits in a blind spot, and a VM moved outside its assigned VLAN and security zone escapes the appliances entirely.
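The blind spot in Figure 4 can also be treated as an inventory question: after a round of live migrations, does each VM still sit on a VLAN that its assigned security zone allows? The sketch below is a hypothetical audit, not a Juniper feature; the zone-to-VLAN mapping and the placement records are invented, and a real check would pull both from the virtualization manager's API.

```python
"""Hypothetical audit for VMs that drifted outside their assigned security zone.

The zone-to-VLAN mapping and VM inventory are illustrative; in practice
both would be pulled from the virtualization manager's API.
"""

# Which VLANs each security zone is allowed to use (assumed mapping).
ZONE_VLANS = {
    "dmz":  {110},
    "app":  {120, 121},
    "data": {130},
}

# Current VM placement as reported after a round of live migrations.
VM_PLACEMENT = [
    # (vm name, assigned zone, VLAN currently attached, current host)
    ("web-01", "dmz",  110, "esx-07"),
    ("app-01", "app",  121, "esx-03"),
    ("db-01",  "data", 120, "esx-03"),   # drifted onto an app-tier VLAN
]


def find_zone_drift(placement, zone_vlans):
    """Yield VMs whose current VLAN is not permitted for their assigned zone."""
    for vm, zone, vlan, host in placement:
        if vlan not in zone_vlans.get(zone, set()):
            yield vm, zone, vlan, host


if __name__ == "__main__":
    drifted = list(find_zone_drift(VM_PLACEMENT, ZONE_VLANS))
    if not drifted:
        print("All VMs sit inside their assigned zones.")
    for vm, zone, vlan, host in drifted:
        print(f"{vm}: zone '{zone}' does not allow VLAN {vlan} (host {host}); "
              f"flows to and from this VM bypass the zone's enforcement point")
```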
Key questions to ask:
  • Do you have a common operating system and operational model across your entire data center network?
  • When connecting data centers, can you not only virtualize multiple networks over a shared MPLS service, but also enable granular traffic engineering over each virtualized link, so that expensive dedicated links can be confidently replaced with less expensive MPLS services or private wide area networks?
  • When refreshing 1GbE servers or adding 10GbE servers, can you offload rack-to-rack traffic and achieve, with two tiers of switching, the scale that used to require three? Removing the aggregation tier can save up to 30% of network CapEx and OpEx.
  • When building a 10GbE server data center or POD, does your architecture allow you to optimize for server-to-server and server-to-storage connectivity? An industry-leading switch fabric can give you the simplicity of multiple switches behaving, and being managed, like a single switch, along with very high performance and the resilience and scale expected from a network of autonomous devices.

Conclusion

As IT organizations embrace server virtualization and move to the cloud, the changes they make might begin on legacy networks but will eventually require transformational change. In the new data center, IT infrastructure will need to be globally resilient, delivered via cloud services or by IT from multiple data centers. The data center network will be optimized for east-west traffic, and VMs will run one per app on shared physical compute and move around based on demand. Virtualized storage and master data initiatives will dictate common pools of data available to all applications, with data physically connected to every app and access governed by policy. And a pool of network and security services will be available to all traffic flows, combined with firewall services built to gain visibility and security for VM-to-VM traffic without constraining app performance or compute efficiencies. This is the promise of the new network as we move from legacy systems to the efficiencies of the cloud.

3200014-001-EN  Oct 2012

Copyright 2012 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

Corporate and Sales Headquarters: Juniper Networks, Inc., 1194 North Mathilda Avenue, Sunnyvale, CA 94089 USA. Phone: 888.JUNIPER (888.586.4737) or 408.745.2000. Fax: 408.745.2100. www.juniper.net

EMEA Headquarters: Juniper Networks Ireland, Airside Business Park, Swords, County Dublin, Ireland. Phone: 35.31.8903.600. EMEA Sales: 00800.4586.4737. Fax: 35.31.8903.601

APAC Headquarters: Juniper Networks (Hong Kong), 26/F, Cityplaza One, 1111 King's Road, Taikoo Shing, Hong Kong. Phone: 852.2332.3636. Fax: 852.2574.7803