Notes:
If you could solve all your business and technical challenges with one solution, what would you do?
- Would you eliminate the need for the Spanning Tree Protocol?
- Would you implement Transparent Interconnection of Lots of Links (TRILL)?
- Would you enhance Ethernet to support lossless transmission, low latency, and an active multipath fabric, similar to that found in SANs?
- Would you create devices with greater network awareness of virtual application servers and their mobility?
- Would you make sure that all network traffic was automatically distributed?
- Would you make sure that link failure did not result in a temporary outage?
- Would you want the ability to manage all these devices as a single entity?
- Would you want to figure out how to reduce the power consumption of network devices?
We know that a lot of vendors are talking about fabrics. So how is Brocade different? We’ve been perfecting this technology on the SAN side for 10 years. We’ve already proven that our networking technology can overcome these challenges. We’re applying our knowledge and distributed intelligence to the data center network, the place where it really counts in the enterprise. But we’re using the standard Ethernet technology that the world relies on.
Here’s how you do it: with Virtual Cluster Switching, or VCS. Think of it as network virtualization, the same way you’d think of server virtualization. It will allow you to meet your needs for both plasticity and network stability, and that will guarantee success no matter what insane requests you get in any given week. Let’s talk about each of these in turn.

With an Ethernet fabric, you no longer have to deal with the Spanning Tree Protocol. You can take advantage of a multipath, deterministic network, one that automatically – or as we like to say at Brocade, “automagically” – heals itself in a non-disruptive, transparent manner. You get lossless transmission with low latency, and you’re ready for a converged infrastructure.

With distributed intelligence, the fabric is aware of everything on the network – all devices, all VMs. The intelligence is built into the devices, so there’s no reconfiguration. It’s all auto … magic.

With a logical chassis, a thousand devices look like one device. Instead of managing lots of devices, you manage one that has distributed intelligence. You can flatten network layers and take advantage of auto-configuration, get scalability even at the edge of the network, and centrally manage everything as if it were a single switch.
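To make the multipath idea concrete, here is a toy sketch (not Brocade code) of how a fabric can spread flows across equal-cost links by hashing each flow's identifying fields, and re-hash onto the surviving links when one fails. The flow-ID format and link names are purely illustrative.

```python
import zlib

def pick_link(flow_id, links):
    """Deterministically map a flow onto one of the available links."""
    if not links:
        raise RuntimeError("no links available")
    # Hash the flow identifier so the same flow always uses the same link.
    digest = zlib.crc32(flow_id.encode())
    return links[digest % len(links)]

links = ["link-1", "link-2", "link-3"]
flow = "10.0.0.5:443->10.0.1.9:55123"
chosen = pick_link(flow, links)

# If the chosen link fails, the flow re-hashes onto a surviving link
# without any spanning-tree reconvergence.
survivors = [l for l in links if l != chosen]
rerouted = pick_link(flow, survivors)
```

Because the mapping is a pure function of the flow ID and the current link set, every switch in the fabric makes the same decision without coordination.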
Now we’re going to talk about the key transitions occurring in application optimization and delivery, and our strategy to help you take advantage of those transitions.
Like me, I’m sure you have all had experiences with slow or unresponsive applications. At best, they’re just frustrating and a waste of time. At worst, poor-performing applications can cost your company its customers. Google, Microsoft, and others have performed numerous studies on this, but one of the most compelling data points comes from Amazon.com, which found that a 100 ms delay in its e-commerce site cost it 1% in sales. <CLICK> Conversely, a rich and high-performing application environment can be a significant asset for the organization. Customers are more satisfied, employees get more done, and your business is ultimately more successful. Now, because data centers really exist to deliver applications and information to end users, ensuring that applications perform to the user’s expectation must be a key goal of any data center architecture. The data center network, in particular application delivery controllers, plays a vital role in meeting that goal by optimizing application performance and availability. In short, application delivery controllers help you drive productivity and gain competitive advantage. Brocade application delivery solutions deliver these advantages today, and I will describe the product family for you in just a few minutes. First, I’d like to look at some of the major trends influencing data centers and applications.
There are four major trends that are relevant here: mobility; video and rich media-based applications; the changing architectures of the applications themselves; and, of course, virtualization and clouds. These trends reinforce the value of application delivery solutions and the central role the network will play in delivering high-performing and always-available applications.
Let’s look at the mobile Internet and its effect on applications. Smart phones, ubiquitous WiFi, 3G and now 4G, SSL VPNs, and other advances have forever changed user expectations. <CLICK> Users now expect access to their applications and data anytime, anywhere, and from any device. In addition, small numbers of static applications are giving way to thousands of dynamic applications, making the task of optimizing applications that much more difficult. Finally, the growing viral nature and often unpredictable traffic patterns created by mobile applications can make data center capacity planning very tricky.
A parallel trend is the rise of video and rich media-based applications. <CLICK> These applications provide more immersive, higher-value experiences that lead to increased employee satisfaction and customer loyalty. However, these applications have very demanding performance requirements, both on the servers and the network infrastructure. The network must be able to perform server offload and application-layer services at very high speed and with low latency. Fortunately, our strong heritage of high-performance and low-latency networking solutions positions us well to solve these problems.
Now let’s look at the architectures of the applications themselves. The continuing adoption of Web 2.0 technologies in application development is changing the nature of applications. <CLICK> The browser has become the universal client, the single window for both personal and business computing. Advances such as HTML 5 will turn the browser into a Swiss army knife, no longer requiring any plug-ins or add-ons to run certain applications. However, while the user experience is simplified, the applications themselves, Web-based or otherwise, are becoming much more sophisticated and complex. Applications are now large and diverse collections of code and content, taking many different forms and residing in multiple locations. Even relatively simple Web sites are composed of hundreds of objects and diverse content, often personalized to the end user. You can see from this example of cnn.com the wide range of object types downloaded with a single click.
The final and perhaps most compelling transitions occurring in application delivery are server virtualization and cloud architectures. <CLICK> Server virtualization provides significant advantages in server consolidation and deployment flexibility. And it has also laid the foundation for cloud architectures by abstracting the applications away from the underlying platforms, allowing applications and other services to be deployed more easily.Cloud architectures take this to the next step where resources become elastic, stretching and relaxing in real time based on user demand. Applications, or sub-components of applications, can be spun up or down with clicks rather than with racking new hardware.
Here’s the Brocade One architecture slide I showed you earlier. <CLICK> I’d now like to drill down into the application delivery portion of the architecture, sharing with you our existing Brocade ADX solutions, along with innovations we are pursuing relative to the application trends I just outlined.
This is our architecture for application delivery. It has several tiers, each responsible for key functions in application delivery. It starts with global server load balancing, which is responsible for optimizing the application experience by routing end-user requests to the most appropriate data center. It then continues with services applied within a data center, ranging from server offload and security to measuring application response time and allocating additional application resources in real time. And, of course, data is an integral part of any application, so the architecture leverages Brocade’s long-standing strengths in low-latency storage networking.
Application Delivery Controllers (ADCs) are a strategic data center technology. They load-balance user requests both within and across data centers based on a wide variety of policies. They ensure application availability by creating pools of servers, and they optimize the performance of those servers by offloading CPU-intensive tasks such as SSL termination. And, perhaps most interestingly, they provide a single point of visibility and control into the end-user’s experience with the application. ADCs are the bridge between the network and the application, and a vital point of network intelligence for optimizing applications.
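The server-pool idea above can be sketched in a few lines. This is a minimal illustration of a least-connections balancing policy, one of the common ADC policies; the class and method names are hypothetical, not a Brocade API.

```python
class ServerPool:
    """Tracks backend servers and picks the least-loaded healthy one."""

    def __init__(self, servers):
        # Map each server address to its count of active connections.
        self.active = {s: 0 for s in servers}
        self.healthy = set(servers)

    def pick(self):
        # Choose the healthy server with the fewest active connections.
        candidates = [s for s in self.active if s in self.healthy]
        if not candidates:
            raise RuntimeError("no healthy servers in pool")
        server = min(candidates, key=lambda s: self.active[s])
        self.active[server] += 1
        return server

    def release(self, server):
        # A client connection finished; free the slot.
        self.active[server] -= 1

    def mark_down(self, server):
        # Health check failed; stop sending traffic to this server.
        self.healthy.discard(server)


pool = ServerPool(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
first = pool.pick()
second = pool.pick()
```

A real ADC layers health checks, session affinity, and SSL offload on top of a selection loop like this one.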
This is our current generation of Application Delivery Controllers. These are purpose-built switches that provide industry-leading throughput, scalability, and latency. In the design of these platforms, we have placed a premium on the key tenets of the Brocade One strategy. For example, simplicity is achieved by offering a small number of platforms that cover a broad range of performance levels and user requirements. Investment protection is achieved through our Capacity On-Demand feature, allowing you to unlock additional performance and I/O capacity through simple software license keys. In addition, our modular chassis allow you to add incremental I/O and performance through add-on line cards and processor modules, with up to 32 processor cores on our high-end platform.
Now I’d like to return to the topic of server virtualization and cloud architectures. <CLICK> For all of its benefits, server virtualization can actually make capacity planning harder by consolidating servers and reducing excess capacity. The IT administrator no longer has the comfort of dedicated physical servers for each application. And virtualization doesn’t necessarily recapture unused resources. <CLICK> Let’s take a look at an example of a financial reporting application. In the first two months of each quarter, the application has very little load on it. However, as the end of quarter approaches, load on the application will increase dramatically. Typically, the IT organization will have to provision resources at or above peak loads. <CLICK> In the best-case scenario, where the IT organization predicted correctly, the result is significantly underutilized resources during the first two months of the quarter. <CLICK> In the worst case, where demand exceeds provisioned resources, application SLAs are missed and users have difficulty running their quarter-end reporting.
We have recently announced a new software product, the Brocade Application Resource Broker, to directly address this challenge by helping the IT organization match application resources with user demand in real time. The Application Resource Broker enables visibility across both the network and the VM infrastructures, providing a single point of intelligence to measure the end user’s experience with an application as well as the VM resources servicing the end user. <CLICK> Using that intelligence, the Application Resource Broker can dynamically provision additional server resources when it detects load increasing, and <CLICK> de-provision those resources when no longer needed, freeing them up for use by another application. This software is currently designed to work with VMware, but you will see us extend this solution to other hypervisors as well as provide a set of open APIs to allow service providers to tailor this solution for their environments.
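The provision/de-provision cycle described above is, at heart, a control loop. The sketch below shows the general shape of such a loop, comparing measured response time against an SLA to decide whether to add or free a VM; the thresholds and function name are illustrative assumptions, not the actual Application Resource Broker API.

```python
def scaling_decision(response_ms, sla_ms, vm_count, min_vms=1, max_vms=10):
    """Return 'scale_up', 'scale_down', or 'hold' for one control cycle."""
    if response_ms > sla_ms and vm_count < max_vms:
        # SLA at risk: provision another VM.
        return "scale_up"
    if response_ms < 0.5 * sla_ms and vm_count > min_vms:
        # Ample headroom: free a VM for use by another application.
        return "scale_down"
    return "hold"

# End-of-quarter spike: responses blow past a 200 ms SLA.
peak = scaling_decision(350, 200, vm_count=3)
# Quiet months: plenty of headroom, so capacity can be released.
quiet = scaling_decision(60, 200, vm_count=3)
```

A production broker would also dampen oscillation (for example, with cooldown periods), but the decision logic follows this basic pattern.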
Now let’s look at application delivery controllers in a public cloud environment. Traditionally, on-premise application delivery services have been deployed using purpose-built switches or appliances. Growing capacity in this environment typically requires additional hardware and takes time and planning to deploy. Application delivery services in public clouds will need to be deployed much more rapidly and flexibly than in traditional models. <CLICK> As a result, application delivery controllers will become more widely deployed in a software form factor and as virtual appliances. For cloud providers, these virtual appliances will be provisioned by clicks instead of by racking new gear, using whatever commodity hardware platform and hypervisor they choose. Virtual ADCs will enable lower operating costs, faster time-to-service, and new revenue models. These services can be leased on shared infrastructure with usage-based billing models. And new customers can be brought up in minutes, not days or weeks.
Many enterprises and service providers will adopt a hybrid cloud model. Some applications will be best suited to run in private cloud data centers while others will be delivered via public clouds – and some a mix of both. But where do users access their applications in this hybrid model? Which data center is most appropriate to service their request? With an understanding of user identity, the applications being requested, and the makeup of the hybrid data center, ADCs bring unique value to address this problem. <CLICK> The business might decide to service the general public with more cost-effective, public cloud services while routing high-value clients to more expensive, private data centers located closer to the user. The next generation of Global Server Load Balancing technology from Brocade will enable the hybrid-cloud architecture by leveraging intelligence about the users, the applications, and the data center characteristics, such as the cost of delivery.
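The hybrid-cloud routing decision just described can be sketched as a small policy function. Everything here is illustrative: the site names, costs, and tier labels are assumptions for the example, not Brocade GSLB configuration.

```python
# Hypothetical site catalog for a hybrid deployment: one private data
# center and one public cloud, each with an illustrative delivery cost.
SITES = {
    "private-dc":   {"cost_per_req": 0.010, "tier": "premium"},
    "public-cloud": {"cost_per_req": 0.002, "tier": "standard"},
}

def route_request(user_tier):
    """Pick a data center based on the requesting user's service tier."""
    if user_tier == "premium":
        # High-value clients go to the private data center closer to them.
        return "private-dc"
    # The general public is served by the cheapest available site.
    return min(SITES, key=lambda s: SITES[s]["cost_per_req"])
```

Real GSLB policies fold in more signals (proximity, site health, current load), but the core is the same: a per-request decision over a catalog of sites.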
In summary, application optimization is a key imperative of the Brocade One architecture because data centers exist in part to deliver great application experiences to end users. Our goal is to help you take advantage of the key technology transitions occurring in applications and data centers, such as server virtualization and cloud computing. And we will do that by building on our long-standing expertise in data center networking.
Microsoft UC is made up of Exchange for email and messaging, Lync Server (formerly OCS) for voice, video, and presence, and SharePoint for collaboration. Of particular interest to Brocade: both Exchange and Lync have specific requirements for hardware load balancing. In fact, the Microsoft TechNet website lists the hardware load balancing vendors who have been validated by Microsoft.

Let’s look first at the Exchange 2010 opportunity. The new architecture of Exchange 2010 has made hardware load balancing a requirement for most Exchange deployments of more than 1,000 users. Many customers are still on Exchange 2003 and never moved to 2007, so Microsoft expects a faster migration than usual as 2003 support comes to an end and customers move to upgrade to take advantage of the new features in 2010. In June, Brocade attended the annual Microsoft IT event in North America called TechEd. We surveyed 3,000 attendees, and 40% told us they plan to migrate to Exchange 2010 in the next 12 months.

Now let’s look at Lync Server. This is the long-awaited upgrade to Office Communications Server which finally delivers the option to replace the PBX. Microsoft is selling Lync as an add-on or “upsell” opportunity to Exchange 2010. So customers who are looking at Exchange migrations are also likely to be considering Lync features such as presence, IM, and voicemail integration. Again, this is the feature set where Microsoft directly competes with Cisco. This is why Microsoft is working directly with Brocade and other L2/3 networking vendors to validate that Lync runs smoothly on a Brocade network. This creates a new opportunity for Brocade campus LAN sales in these Microsoft environments.
Let’s look at the Exchange opportunity first, as this is a massive upgrade that is already under evaluation by many customers. The new architecture in 2010 actually adds a requirement for a hardware load balancer for any sites with more than 1,000 users.

Compute/Network Impacts
- DAG improves scalability, availability, and recovery; requires a HW “load balancer”
- CAS receives all client traffic and maps it to the best mailbox server
- HW “load balancer” supports client affinity, scalability, and security
- HW “load balancer” required for more than eight CAS

Ethernet Switching
- Minimal impact on the access layer due to improved I/O performance

SAN Impacts
- SAN storage with iSCSI or Fibre Channel simplifies data management, availability, and DR
- Array replication reduces mailbox server workload
- WDM links for Fibre Channel and iSCSI
- FCIP links over the WAN for Fibre Channel; TCP links over the WAN for iSCSI
- WAN links to support remote replication for DR
- Need dedicated bandwidth for replication traffic so IP traffic does not disrupt it: Adaptive Rate Limiting (ARL), QoS, and traffic engineering
- May need 10 GbE support as data volumes grow
Exchange 2010 preliminary changes include:
- Storage Groups are being eliminated and incorporated into the Information Store.
- Clustering is now at the database level, not the server level.
- LCR and SCC clustering are no longer offered.
- CCR is now at the datastore level, not the server level, although the terminology has changed.
- Clustering functionality is now known as DAG (Database Availability Group).

Typical reasons why customers upgrade to Exchange 2010:
- AD integration with voicemail and email: eliminates the need for a voicemail directory.
- Voicemail preview feature: text transcription of voicemail saves time.
- Downtime protection: the Database Availability Group (DAG) feature does both on-site and off-site data replication by storing copies of data on different servers with automatic failover.
- Preventing information leaks in email: beefed-up IRM (information rights management) features in Exchange 2010 prevent sending sensitive email messages and attachments. Exchange 2010 automatically identifies corporate keywords predefined by IT that a company would not like to go outside.
- Saving with cheaper storage: Exchange 2010 has 70 percent lower disk I/O requirements than Exchange 2007, so customers can use slower, cheaper direct-attached storage.

More reasons from Microsoft: http://www.microsoft.com/exchange/2010/en/us/why-upgrade.aspx
Key Takeaways: From chart

Speaker Notes:
Let us share with you a customer who recently implemented the Brocade Real-time Campus solution: the BOK (Bank of Oklahoma) Center in Tulsa, Oklahoma. The BOK Center is Tulsa’s state-of-the-art sports and entertainment venue. The 19,000-seat venue is home to the Arena Football League’s Tulsa Talons and the Central Hockey League’s Tulsa Oilers. The BOK Center was designed to host major concerts, family shows, ice shows, and other world-class entertainment. As the BOK Center was being constructed, Brocade was approached to build a cost-effective, end-to-end network infrastructure to deliver unified communications, provide data connectivity to staff and patrons, and increase the venue’s overall security. The BOK Center chose to implement the Brocade Real-time Campus solution, which provided wired and wireless access to staff and patrons and supported internal operating requirements. The solution also provided seamless integration and support for their Avaya Unified Communications application running over the Brocade network. This resulted in the convergence of separate voice, video security, data, and other special-purpose networks onto a single Brocade Real-time Campus network and the broad deployment of leading-edge IP-enabled applications and technologies within the BOK Center. The BOK Center also obtained a foundation for future services, such as allowing patrons to order drinks and food or watch replays over the wireless network using mobile devices such as iPhones or iPads.