Tech Book: WAN Optimization Controller Technologies

WAN Optimization Controller Technologies, Version 2.0

• Network and Deployment Topologies
• Storage and Replication
• FCIP Configuration
• WAN Optimization Controller Appliances

Vinay Jonnakuti
Eric Pun
Copyright © 2012-2013 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).

Part number H8076.32
Contents

Preface

Chapter 1  Network and Deployment Topologies and Implementations
    Overview
    Network topologies and implementations
    Deployment topologies
    Storage and replication application
        Configuration settings
        Network topologies and implementations
        Notes
        References

Chapter 2  FCIP Configurations
    Brocade FCIP
        Configuration settings
        Brocade FCIP Tunnel settings
        Rules and restrictions
        References
    Cisco FCIP
        Configuration settings
        Notes
        Basic guidelines
        Rules and restrictions
        References

Chapter 3  WAN Optimization Controllers
    Silver Peak appliances
        Overview
        Terminology
        Features
        Deployment topologies
        Failure modes supported
        FCIP environment
        GigE environment
        References
    Riverbed appliances
        Overview
        Terminology
        Notes
        Features
        Deployment topologies
        Failure modes supported
        FCIP environment
        GigE environment
        References
Preface

This EMC Engineering TechBook provides a high-level overview of the WAN Optimization Controller (WOC) appliance, including network and deployment topologies, storage and replication application, FCIP configurations, and WAN Optimization Controller appliances.

E-Lab would like to thank all the contributors to this document, including EMC engineers, EMC field personnel, and partners. Your contributions are invaluable.

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes. If a product does not function properly or does not function as described in this document, please contact your EMC representative.

Audience
This TechBook is intended for EMC field personnel, including technology consultants, and for the storage architect, administrator, and operator involved in acquiring, managing, operating, or designing a networked storage environment that contains EMC and host devices.

EMC Support Matrix and E-Lab Interoperability Navigator
For the most up-to-date information, always consult the EMC Support Matrix (ESM), available through E-Lab Interoperability Navigator (ELN) at http://elabnavigator.EMC.com, under the PDFs and Guides tab. Under the PDFs and Guides tab resides a collection of printable resources for reference or download. All of the matrices, including the ESM (which does not include most software), are subsets of the
E-Lab Interoperability Navigator database. Included under this tab are:

◆ The EMC Support Matrix, a complete guide to interoperable, and supportable, configurations.
◆ Subset matrices for specific storage families, server families, operating systems or software products.
◆ Host connectivity guides for complete, authoritative information on how to configure hosts effectively for various storage environments.

Under the PDFs and Guides tab, consult the Internet Protocol pdf under the "Miscellaneous" heading for EMC's policies and requirements for the EMC Support Matrix.

Related documentation
The following documents, including this one, are available through the E-Lab Interoperability Navigator, Topology Resource Center tab, at http://elabnavigator.EMC.com. These documents are also available at the following location:
http://www.emc.com/products/interoperability/topology-resource-center.htm

◆ Backup and Recovery in a SAN TechBook
◆ Building Secure SANs TechBook
◆ Extended Distance Technologies TechBook
◆ Fibre Channel over Ethernet (FCoE) Data Center Bridging (DCB) Concepts and Protocols TechBook
◆ Fibre Channel over Ethernet (FCoE) Data Center Bridging (DCB) Case Studies TechBook
◆ Fibre Channel SAN Topologies TechBook
◆ iSCSI SAN Topologies TechBook
◆ Networked Storage Concepts and Protocols TechBook
◆ Networking for Storage Virtualization and RecoverPoint TechBook
◆ EMC Connectrix SAN Products Data Reference Manual
◆ Legacy SAN Technologies Reference Manual
◆ Non-EMC SAN Products Data Reference Manual
◆ EMC Support Matrix, available through E-Lab Interoperability Navigator at http://elabnavigator.EMC.com > PDFs and Guides
◆ RSA security solutions documentation, which can be found at http://RSA.com > Content Library
EMC documentation and release notes can be found at EMC Online Support (https://support.emc.com). For vendor documentation, refer to the vendor's website.

Authors of this TechBook
This TechBook was authored by Vinay Jonnakuti and Eric Pun, along with other EMC engineers, EMC field personnel, and partners.

Vinay Jonnakuti is a Sr. Corporate Systems Engineer in the Unified Storage division of EMC focusing on VNX and VNXe products, working on pre-sales deliverables including collateral, customer presentations, customer beta testing, and proof of concepts. Vinay has been with EMC for over 5 years. Prior to his current position, Vinay worked in EMC E-Lab leading the qualification and architecting of solutions with WAN-Optimization appliances from various partners with various replication technologies, including SRDF (GigE/FCIP), SAN Copy, MirrorView, VPLEX, and RecoverPoint. Vinay also worked on Fibre Channel and iSCSI qualification on the VMAX storage arrays.

Eric Pun is a Senior Systems Integration Engineer and has been with EMC for over 12 years. For the past several years, Eric has worked in E-Lab qualifying interoperability between Fibre Channel switched hardware and distance extension products. The distance extension technology includes DWDM, CWDM, OTN, FC-SONET, FC-GbE, FC-SCTP, and WAN Optimization products. Eric has been a contributor to various E-Lab documentation, including the SRDF Connectivity Guide.

Conventions used in this document
EMC uses the following conventions for special notices:

Note: A note presents information that is important, but not hazard-related.

Typographical conventions
EMC uses the following type style conventions in this document:

Bold: Use for names of interface elements, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks)
Italic: Use for full titles of publications referenced in text
Monospace: Use for:
• System output, such as an error message or script
• System code
• Pathnames, filenames, prompts, and syntax
• Commands and options
Monospace italic: Use for variables.
Monospace bold: Use for user input.
[ ]: Square brackets enclose optional values
| : Vertical bar indicates alternate selections — the bar means "or"
{ }: Braces enclose content that the user must specify, such as x or y or z
... : Ellipses indicate nonessential information omitted from the example

Where to get help
EMC support, product, and licensing information can be obtained as follows:

Note: To open a service request through the EMC Online Support site, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.

Product information
For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Online Support site (registration required) at: https://support.EMC.com

Technical support
EMC offers a variety of support options.

Support by Product: EMC offers consolidated, product-specific information on the Web at: https://support.EMC.com/products

The Support by Product web pages offer quick links to Documentation, White Papers, Advisories (such as frequently used Knowledgebase articles), and Downloads, as well as more dynamic content, such as presentations, discussion, relevant Customer Support Forum entries, and a link to EMC Live Chat.
EMC Live Chat — Open a Chat or instant message session with an EMC Support Engineer.

eLicensing support
To activate your entitlements and obtain your Symmetrix license files, visit the Service Center on https://support.EMC.com, as directed on your License Authorization Code (LAC) letter e-mailed to you.

For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your EMC Account Representative or Authorized Reseller.

For help with any errors applying license files through Solutions Enabler, contact the EMC Customer Support Center.

If you are missing a LAC letter, or require further instructions on activating your licenses through the Online Support site, contact EMC's worldwide Licensing team at licensing@emc.com or call:

◆ North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
◆ EMEA: +353 (0) 21 4879862 and follow the voice prompts.

We'd like to hear from you!
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your opinions of this document to: techpubcomments@emc.com

Your feedback on our TechBooks is important to us! We want our books to be as helpful and relevant as possible. Send us your comments, opinions, and thoughts on this or any other TechBook to: TechBooks@emc.com
1  Network and Deployment Topologies and Implementations

This chapter provides the following information for the WAN Optimization Controller (WOC) appliance:

◆ Overview
◆ Network topologies and implementations
◆ Deployment topologies
◆ Storage and replication application
Overview

A WAN Optimization Controller (WOC) is an appliance that can be placed in-line or out-of-path to reduce and optimize the data that is to be transmitted over the LAN/MAN/WAN. These devices are designed to help mitigate the effects of packet loss, network congestion, and latency while reducing the overall amount of data to be transmitted over the network. In general, the technologies used to accomplish this are Transmission Control Protocol (TCP) acceleration, data deduplication, and compression. Additionally, features such as QoS, Forward Error Correction (FEC), and encryption may also be available.

Network links and WAN circuits can have high latency and/or packet loss as well as limited capacity. WAN Optimization Controllers can be used to maximize the amount of data that can be transmitted over a link. In some cases, these appliances may be a necessity, depending on performance requirements.

WAN and data optimization can occur at varying layers of the OSI stack, whether at the network and transport layers, the session, presentation, and application layers, or just to the data (payload) itself.
Network topologies and implementations

TCP was developed as a local area network (LAN) protocol. However, with the advancement of the Internet it was expanded to be used over the WAN. Over time TCP has been enhanced, but even with these enhancements TCP is still not well suited for WAN use for many applications. The primary factors that directly impact TCP's ability to be optimized over the WAN are latency, packet loss, and the amount of bandwidth to be utilized. It is these factors on which the layer 3/4 optimization products focus. Many of these optimization products re-encapsulate the packets into UDP or their proprietary protocol, while others may still use TCP but optimize the connections between a set of WAN Optimization Controllers at each end of the WAN. While some products create tunnels to perform their peer-to-peer connection between appliances for the optimized data, others may modify or tag other aspects within the packet to ensure that the far-end WOC captures the optimized traffic.

Optimization of the payload (data) within the packet focuses on the reduction of the actual payload as it passes over the network through the use of data compression and/or data de-duplication engines (DDEs). Compression is performed through the use of data compression algorithms, while DDE uses large data pattern tables and associated pointers (fingerprints). Large amounts of memory and/or hard-drive storage can be used to store these pattern tables and pointers. Identical tables are built in the optimization appliances on both sides of the WAN, and as new traffic passes through the WOC, patterns are matched and only the associated pointers are sent over the network, versus resending the data (see the illustrative sketch below). While a typical LZ compression ratio is about 2:1, DDE ratios can range greatly, depending on many factors. In general, the combination of both of these technologies, DDE and compression, will achieve around a 5:1 reduction level, and sometimes much higher.

Layer 4/7 optimization is what is called the "application" layer of optimization. This area of optimization can take many approaches that vary widely, but it is generally done through the use of application-aware optimization engines. The actions taken by these engines can result in benefits, including reductions in the number of transactions that occur over the network or more efficient use of bandwidth. It is also at this layer that TCP optimization occurs.
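The pattern-table-and-pointer mechanism described above can be shown with a small sketch. This is not any vendor's implementation; it is a minimal, hypothetical example that assumes fixed-size chunks and SHA-1 fingerprints, and it only shows how a sender can substitute short references for chunks the far-end appliance has already stored.

    import hashlib

    CHUNK_SIZE = 4096  # assumed fixed chunk size; real DDEs use variable, content-defined chunks

    def fingerprint(chunk: bytes) -> str:
        """Return a short fingerprint (pointer) for a data chunk."""
        return hashlib.sha1(chunk).hexdigest()

    def encode(stream: bytes, pattern_table: dict) -> list:
        """Sender side: emit ('ref', fp) for known chunks, ('data', chunk) for new ones."""
        out = []
        for i in range(0, len(stream), CHUNK_SIZE):
            chunk = stream[i:i + CHUNK_SIZE]
            fp = fingerprint(chunk)
            if fp in pattern_table:
                out.append(("ref", fp))        # only the pointer crosses the WAN
            else:
                pattern_table[fp] = chunk      # learn the new pattern locally
                out.append(("data", chunk))    # first pass: raw data crosses the WAN
        return out

    def decode(tokens: list, pattern_table: dict) -> bytes:
        """Receiver side: rebuild the stream from its identical pattern table."""
        stream = bytearray()
        for kind, value in tokens:
            if kind == "data":
                pattern_table[fingerprint(value)] = value
                stream += value
            else:
                stream += pattern_table[value]  # look the chunk up locally
        return bytes(stream)

    # Both appliances start with identical (empty) tables.
    sender_table, receiver_table = {}, {}
    payload = b"A" * 8192 + b"B" * 4096
    first = encode(payload, sender_table)    # new chunks cross the WAN once; repeats become references
    second = encode(payload, sender_table)   # every chunk is now a short reference
    assert decode(first, receiver_table) == payload
    assert decode(second, receiver_table) == payload

In a real DDE the table is persisted on disk and can span gigabytes, and the references carry enough information to survive partial matches; the point here is only that, after the first pass, a repeated byte pattern costs a few dozen bytes of pointer instead of the full chunk.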
  14. 14. Network and Deployment Topologies and Implementations Overall, WAN optimizers can be aligned with customer networking best practices, and it should be made clear to the customer that applications using these devices can, and should, be prioritized based on their WAN bandwidth/throughput requirements.14 WAN Optimization Controller Technologies TechBook
Deployment topologies

There are two basic topologies for deployment:

◆ In-path/in-line/bridge
◆ Out-of-path/routed

An in-path/in-line/bridge deployment, as shown in Figure 1, means that the WOC is directly in the path between the source and destination end points, where all inbound and outbound flows pass through the WAN Optimization Controllers. The WOC devices at each site are typically placed as close as possible to the WAN circuit.

Figure 1  In-path/in-line/bridge topology

An out-of-path/routed deployment, as shown in Figure 2, means that the WOC is not in the direct path between the source and destination end points. The traffic must be routed/redirected to the WOC devices using routing features such as WCCP, PBR, VRRP, etc.

Figure 2  Out-of-path/routed topology
◆ WCCPv2 (Web Cache Communication Protocol) is a content routing protocol that provides a mechanism to redirect traffic in real time. WCCP also has built-in mechanisms to support load balancing, fault tolerance, and scalability.

◆ PBR (Policy Based Routing) is a technique used to make routing decisions based on policies or a combination of policies such as packet size, protocol of the payload, source, destination, or other network characteristics.

◆ VRRP (Virtual Router Redundancy Protocol) is a redundancy protocol designed to increase the availability of a default gateway.

In the event of a power failure or a WOC hardware or software failure, the WOC must provide some level of action. The WOC can either continue to allow data to pass through, unoptimized, or it can block all traffic from flowing through it. The failure modes typically offered by WAN optimizers are commonly referred to as:

◆ Fails-to-Wire: The appliance behaves as a crossover cable connecting the Ethernet LAN switch directly to the WAN router, and traffic continues to flow uninterrupted and unoptimized.

◆ Fails-Open / Fails-to-Block: The appliance behaves as an open port to the WAN router. The WAN router recognizes that the link is down and begins forwarding traffic according to its routing tables.

Depending upon your deployment topology, you may determine that one method is better suited for your environment than the other.
Storage and replication application

This section provides storage and replication application details for EMC® products:

◆ Symmetrix®/VMAX™ SRDF®
◆ RecoverPoint
◆ SAN Copy™
◆ Celerra Replicator™
◆ MirrorView™

Configuration settings
Configuration settings are as follows:

◆ Compression on GigE (RE) port = Disabled

Note: For Riverbed Steelhead RiOS v6.1.1a or later, the compression setting can be Enabled on the Symmetrix system. The Steelhead automatically detects and disables compression on the Symmetrix system.

◆ SRDF Flow Control = Enabled

Network topologies and implementations
In general, it has been observed that optimization ratios are higher with SRDF/A than with SRDF Adaptive Copy. There are many factors that impact how much optimization will occur; therefore, results will vary.

Notes
Note the following:

For Symmetrix configuration settings

Compression
Compression should be disabled on the GigE ports on the MPCD and the GigE director when a WAN optimization device employing data deduplication is used. If compression is enabled on the GigE ports on the MPCD and the GigE director, data deduplication benefits will be severely impacted, resulting in increased WAN bandwidth needs.
  18. 18. Network and Deployment Topologies and Implementations SRDF Flow Control SRDF Flow Control should be enabled for increased stability of the SRDF links. Further tuning of SRDF flow control can be made to improve performance. For more information, please contact your EMC Customer Service representative. For SRDF modes and data reduction In general, it has been observed that optimization ratios are higher with GigE ports on the MPCD and the GigE director as opposed to FCIP. There are many factors that impact how much optimization will occur, therefore results will vary. References ◆ For further information, refer to the EMC Symmetrix Remote Data Facility (SRDF) Connectivity Guide, located on the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com >PDFs and Guides.18 WAN Optimization Controller Technologies TechBook
2  FCIP Configurations

This chapter provides FCIP configuration information for:

◆ Brocade FCIP
◆ Cisco FCIP
Brocade FCIP

This section provides configuration information for Brocade FCIP.

Configuration settings
Configuration settings are as follows:

◆ FCIP Fastwrite = Enabled
◆ Compression = Disabled
◆ TCP Byte Streaming = Enabled
◆ Commit Rate = in Kbps (Environment dependent)
◆ Tape Pipelining = Disabled
◆ SACK = Enabled
◆ Min Retransmit Time = 100
◆ Keep-Alive Timeout = 10
◆ Max Re-Transmissions = 8

Brocade FCIP Tunnel settings
Consider the following:

◆ FCIP Fastwrite: This setting accelerates SCSI Write I/Os over the FCIP tunnel. It cannot be combined with FC Fastwrite.

◆ Compression: This simply compresses the data that flows over the FCIP tunnel. It should be disabled when used with WOC devices, thus allowing the WOC device to perform the compression and data de-duplication.

◆ Commit Rate: This setting is environment dependent and should be set in accordance with the WAN Optimization vendor. Considerations such as the data to be optimized, the available WAN circuit size, and the expected data-reduction ratio need to be taken into account (a rough sizing sketch appears at the end of this Brocade FCIP section).

◆ TCP Byte Streaming
This is a Brocade feature that allows a Brocade FCIP switch to communicate with a third-party WAN Optimization Controller. The feature supports an FCIP frame that has been split into a maximum of eight separate TCP segments. If the frame is split into more than eight segments, it results in prematurely sending a frame to the FCIP layer with an incorrect size, and the FCIP tunnel bounces.

Rules and restrictions
Consider the following rules and restrictions when using TCP Byte Streaming:

◆ Only one FCIP tunnel is allowed to be configured for a GigE port that has TCP Byte Streaming configured.
◆ The FCIP tunnel cannot have compression enabled.
◆ The FCIP tunnel cannot have FC Fastwrite enabled.
◆ The FCIP tunnel must have a committed rate set.
◆ Both sides of the FCIP tunnel must be identically configured.
◆ TCP Byte Streaming is not compatible with older FOS revisions that do not have the option available.

References
For further information, refer to https://support.emc.com and http://www.brocade.com.

◆ EMC Connectrix B Series Fabric OS Administrator's Guide
◆ Brocade Fabric OS Administrator's Guide
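The commit rate guidance above can be turned into a rough back-of-the-envelope calculation. The sketch below is only an illustration of the reasoning, not a vendor formula: the idea is that, because the WOC reduces the traffic before it reaches the WAN circuit, the tunnel can be committed at roughly the circuit size multiplied by a conservative estimate of the expected data-reduction ratio. The numbers and the safety factor are assumptions; always confirm actual settings with the WAN optimization vendor.

    def fcip_commit_rate_kbps(wan_circuit_mbps: float,
                              expected_reduction_ratio: float,
                              safety_factor: float = 0.8) -> int:
        """Rough FCIP commit-rate estimate (in Kbps) for a tunnel that feeds a WOC.

        wan_circuit_mbps          physical WAN circuit size, in Mb/s
        expected_reduction_ratio  e.g. 5.0 for an expected 5:1 DDE-plus-compression ratio
        safety_factor             assumed margin so the tunnel does not overrun the
                                  circuit when reduction is worse than expected
        """
        effective_mbps = wan_circuit_mbps * expected_reduction_ratio * safety_factor
        return int(effective_mbps * 1000)  # the Brocade commit rate is configured in Kbps

    # Example: 100 Mb/s circuit, 5:1 expected reduction, 0.8 safety margin
    # -> commit the tunnel at roughly 400,000 Kbps of pre-optimization traffic.
    print(fcip_commit_rate_kbps(100, 5.0))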
  22. 22. FCIP Configurations Cisco FCIP This section provides configuration information for Cisco FCIP. Configuration settings Configuration settings are as follows: ◆ Max-Bandwidth = Environment dependent (Default = 1000 Kb) ◆ Min-Available-Bandwidth = Recommended setting: 50-80% of Max-Bandwidth ◆ Estimated roundtrip time = Set to measured latency (round-trip time - RTT) between MDS switches ◆ IP Compression = Disabled ◆ FCIP Write Acceleration = Enabled ◆ Tape Accelerator = Disabled ◆ Encryption = Disabled ◆ Min Re-Transmit Timer = 200 ms ◆ Max Re-Transmissions = 8 ◆ Keep-Alive = 60 ◆ SACK = Enabled ◆ Timestamp = Disabled ◆ PMTU = Enabled ◆ CWM = Enabled ◆ CWM Burst Size = 50 KB Notes Consider the following information for Cisco FCIP tunnel settings: ◆ Max-Bandwidth The max-bandwidth-mbps parameter and the measured RTT together determine the maximum window size. This should be configured to match the worst-case bandwidth available on the physical link. ◆ Min-Available-Bandwidth22 WAN Optimization Controller Technologies TechBook
The min-available-bandwidth parameter and the measured RTT together determine the threshold below which TCP aggressively maintains a window size sufficient to transmit at the minimum available bandwidth. It is recommended that you adjust this to 50-80% of the Max-Bandwidth.

◆ Estimated Roundtrip-Time: This is the measured latency between the two MDS GigE interfaces. Ping can be used to determine the round-trip time.

◆ FCIP Write Acceleration: Write Acceleration is used to help alleviate the effects of network latency. It works with Port-Channels only when the Port-Channel is managed by Port-Channel Protocol (PCP). FCIP write acceleration can be enabled for multiple FCIP tunnels if the tunnels are part of a dynamic Port-Channel configured with channel mode active. FCIP write acceleration does not work if multiple non-Port-Channel ISLs exist with equal weight between the initiator and the target port.

◆ Min Re-Transmit Timer: This is the amount of time that TCP waits before retransmitting. In environments where there may be high packet loss or congestion, this value may need to be adjusted to 4x the measured round-trip time. Ping may be used to measure the round-trip latency between the two MDS switches.

◆ Max Re-Transmissions: The maximum number of times that a packet is retransmitted before the TCP connection is closed.

Basic guidelines
Consider the following guidelines when creating/utilizing multiple FCIP interfaces/profiles:

◆ Gigabit Ethernet Interfaces support a single IP address.
◆ Every FCIP profile must be uniquely addressable by an IP address and TCP port pair. Where FCIP profiles share a Gigabit Ethernet interface, the FCIP profiles must use different TCP port numbers.
◆ An FCIP interface is linked to a single FCIP profile. Up to three FCIP interfaces can link to an FCIP profile, but only three FCIP interfaces can be active on any Gigabit Ethernet interface.
◆ A dedicated FCIP profile per FCIP link is recommended.

Rules and restrictions
Consider the following rules and restrictions when enabling FCIP Write Acceleration:

◆ It works with Port-Channels only when the Port-Channel is managed by Port-Channel Protocol (PCP).
◆ FCIP write acceleration can be enabled for multiple FCIP tunnels if the tunnels are part of a dynamic Port-Channel configured with channel mode active.
◆ FCIP write acceleration does not work if multiple non-Port-Channel ISLs exist with equal weight between the initiator and the target port.
◆ Do not enable time stamp control on an FCIP interface with write acceleration configured.
◆ Write acceleration cannot be used across FSPF equal-cost paths in FCIP deployments. FCIP write acceleration can, however, be used in Port-Channels configured with channel mode active or constructed with Port-Channel Protocol (PCP).

References
For further information, refer to the following documentation on Cisco's website at http://www.cisco.com.

◆ Wide Area Application Services Configuration Guide
◆ Replication Acceleration Deployment Guide
◆ Q&A for WAAS Replication Accelerator Mode
◆ MDS 9000 Family CLI Configuration Guide
3  WAN Optimization Controllers

This chapter provides information on the following WAN Optimization Controller (WOC) appliances, along with Riverbed Granite, which is used in conjunction with Steelhead:

◆ Silver Peak appliances
◆ Riverbed appliances
Silver Peak appliances

This section provides information on the Silver Peak WAN optimization appliances. The following topics are discussed:

◆ Overview
◆ Terminology
◆ Features
◆ Deployment topologies
◆ Failure modes supported
◆ FCIP environment
◆ GigE environment
◆ References

Overview

Silver Peak appliances are interconnected by tunnels, which transport optimized traffic flows. Policies control how the appliance filters LAN-side packets into flows and whether an individual flow is:

◆ directed to a tunnel, shaped, and optimized;
◆ processed as shaped, pass-through (unoptimized) traffic;
◆ processed as unshaped, pass-through (unoptimized) traffic;
◆ continued to the next applicable Route Policy entry if a tunnel goes down; or
◆ dropped.

The appliance manager has separate policies for routing, optimization, and QoS functions. These policies prescribe how the appliance handles the LAN packets it receives. The optimization policy uses optimization techniques to improve the performance of applications across the WAN. Optimization policy actions include network memory, payload compression, and TCP acceleration.

Silver Peak ensures network integrity by using QoS management, Forward Error Correction, and Packet Order Correction. When Adaptive Forward Error Correction (FEC) is enabled, the appliance introduces a parity packet, which helps detect and correct single-packet loss within a stream of packets, reducing the need for retransmissions (an illustrative sketch of the parity idea appears later in this section). Silver Peak can dynamically adjust how often this parity packet is introduced in response to changing link conditions, which helps maximize error correction while minimizing overhead. To avoid retransmissions that occur when packets arrive out of order, Silver Peak appliances use Packet Order Correction (POC) to resequence packets on the far end of a WAN link, as needed.

Terminology

Consider the following terminology when using Silver Peak configuration settings:

◆ Coalescing ON — Enables/disables packet coalescing. Packet coalescing transmits smaller packets in groups of larger packets, thereby increasing performance and helping to overcome the effects of latency.

◆ Coalesce Wait — Timer (in milliseconds) used to determine the amount of time to wait before transmitting coalesced packets.

◆ Compression — Reduces the bandwidth consumed by traffic traversing the WAN. Payload compression is used in conjunction with network memory to provide compression on "first pass" data.

◆ Congestion Control — Techniques used by Silver Peak to manage congestion scenarios across a WAN. Configuration options are standard, optimized, and auto. Standard uses standard TCP congestion control. Optimized congestion control is the most aggressive mode of congestion control and should only be used in environments with point-to-point connections dedicated to a single application. Auto congestion control aims to improve throughput over standard congestion control, but may not be suitable for all environments.

◆ FEC / FEC Ratio — Technique used by Silver Peak to recover from packet loss without the need for packet retransmissions. Loss is corrected on the Silver Peak appliance, resulting in higher throughput during the data transmission.

◆ IP Header Compression — Enables/disables compression of the IP header in order to reduce the packet size. Header compression can provide additional bandwidth gains by reducing packet header information using specialized compression algorithms.
  28. 28. WAN Optimization Controllers ◆ Mode — Refers to the Silver Peak tunnel configuration. The default setting is GRE. Alternative option is UDP. ◆ MTU (Maximum Transmission Unit) — The size, in bytes, of the largest PDU that a given layer of a communications protocol can pass onwards. ◆ Network Memory — Silver Peaks implementation of real-time data reduction of network traffic. This de-duplication technology is used to inspect all inbound and outbound WAN traffic, storing a local instance of data on each appliance. The NX Series appliance compares real-time traffic streams with to patterns stored using Network Memory. If a match exists, a short reference pointer is sent to the remote Silver Peak appliance, instructing it to deliver the traffic pattern from its local instance. Repetitive data is never sent across the WAN. If the content is modified, the Silver Peak appliance detects the change at the byte level and updates the networks memory. Only the modifications are sent across the WAN. These are combined with original content by NX Series appliances at the destination location. Currently, it is recommended to enable network memory and set the network memory mode to 1. Mode 1 is referred to as "low latency mode" and enables network memory to better balance data reduction versus high throughput. While network memory can be enabled from the GUI, configuring it for mode 1 must be performed through the CLI. ◆ Payload Compression — Uses algorithms to identify relatively short byte sequences that are repeated frequently over time. These sequences are then replaced with shorter segments of code to reduce the size of transmitted data. Simple algorithms can find repeated bytes within a single packet; more sophisticated algorithms can find duplication across packets and even across flows. ◆ Reorder Wait — Time (in milliseconds) that the Silver Peak appliances will wait to reorder packets. This is a dynamic value that will change based on line conditions. Recommendation is to leave this as the default for SRDF traffic. ◆ RTP Header Compression — Used to compress the size of the RTP protocol packet header used in Voice over IP communications. Header compression can provide additional bandwidth gains by reducing packet header information using specialized compression algorithms.28 WAN Optimization Controller Technologies TechBook
  29. 29. WAN Optimization Controllers ◆ TCP Acceleration — References several techniques used by Silver Peak to accelerate the TCP protocol. TCP acceleration uses techniques such as selective acknowledgement, window scaling, and transaction size adjustment to compensate for poor performance on high latency links. ◆ Tunnel Auto Max BW — Allows the Silver Peak to automatically determine the maximum bandwidth available. Recommendation is to disable this in SRDF environments. ◆ Tunnel Max BW — For manually configuring the maximum bandwidth accessible to the Silver Peak. This is recommended in SRDF environments where bandwidth values are known. This is a static configuration. ◆ Tunnel Min BW — For manually configuring the maximum bandwidth accessible to the Silver Peak. This does not need to be set for proper operation. This is a static configuration. A value of 32kbps is recommended, which is the default. ◆ WAN Bandwidth — Applies to the WAN side of the appliance and should be set to the amount of bandwidth to be made available to the appliance on the WAN side. Inputting a value also configures the tunnel max bandwidth configuration variable. ◆ Windows Scaling — Used to overcome the effects of latency on single-flow throughput in a TCP network. The window-scale factor multiplies the standard TCP window of 64 KB by 2 to the power of the window-scale. Default window-scale is 6.Features Features include: ◆ Compression (payload and header) ◆ Network memory (data-deduplication) ◆ TCP acceleration ◆ QoS (Quality of Service) ◆ FEC (Forward Error Correction) ◆ POC (Packet Order Correction) ◆ Encryption - IPsec Silver Peak appliances 29
  30. 30. WAN Optimization Controllers Deployment topologies Deployment topologies include: ◆ In-line (bridge mode) • In-line ◆ Out-of-path (router) • Out-of-path with Policy-Based-Routing (PBR) redirection • Out-of-path with Web Cache Coordination Protocol (WCCPv2) • Out-of-path with VRRP peering to WAN router • Out-of-path with Policy-Based-Routing (PBR) and VRRP redundant Silver Peak appliances • Out-of-path with Web Cache Coordination Protocol (WCCP) redundant Silver Peak appliances ◆ The Silver Peak appliances can only be deployed in out-of-path (Router) mode when using 10 Gb Ethernet Fibre data ports as optical interfaces to do not fail to wire ◆ The Silver Peak NX-8700, NX-9700, and NX-10000 appliances support 10 Gb Ethernet Fibre data ports ◆ The SilverPeak VX (virtual appliances) and the Silver Peak VRX (virtual appliances) are supported when deployed on the VMWARE ESX or ESXi servers. The virtual appliances can only be deployed in out-of-path configurations. Failure modes supported The following failure modes are supported: • Fail-to-wire • Fail-open FCIP environment The following Silver Peak configuration settings are recommended in an FCIP environment: ◆ WAN Bandwidth = (Environment dependent) ◆ Tunnel Auto Max BW = Disabled (Unchecked)30 WAN Optimization Controller Technologies TechBook
  31. 31. WAN Optimization Controllers ◆ Tunnel Max BW = in Kb/s (Environment dependent) ◆ Tunnel Min BW = 32 Kb/s ◆ Reorder Wait = 100 ms ◆ MTU = 1500 (For 3.1 code and higher, maximum MTU = 2500) ◆ Mode = GRE ◆ Network Memory = Enabled ◆ Compression = Enabled ◆ TCP Acceleration = Enabled ◆ CIFS Acceleration = Disabled ◆ FEC = Enabled ◆ FEC Ratio = 1:5 (Recommended) ◆ Windows Scale Factor = 8 ◆ Congestion Control = Optimized ◆ IP Header Compression = Enabled ◆ RTP Header Compression = Enabled ◆ Coalescing On = Yes ◆ Coalesce Wait = 0 ms ◆ From the CLI run: "system network-memory mode 1"GigE environment The following Silver Peak configuration settings are recommended in a GigE environment: ◆ WAN Bandwidth = (Environment dependent) ◆ Tunnel Auto Max BW = Disabled (Unchecked) ◆ Tunnel Max BW = in Kbps (Environment dependent) ◆ Tunnel Min BW = 32 Kb/s ◆ Reorder Wait = 100 ms ◆ MTU = 1500 ◆ Mode = GRE ◆ Network Memory = Enabled ◆ Compression = Enabled Silver Peak appliances 31
  32. 32. WAN Optimization Controllers ◆ TCP Acceleration = Enabled ◆ CIFS Acceleration = Disabled ◆ FEC = Enabled ◆ FEC Ratio = 1:5 (Recommended) ◆ Windows Scale Factor = 8 ◆ Congestion Control = Optimized ◆ IP Header Compression = Enabled ◆ RTP Header Compression = Enabled ◆ Coalescing On = Yes ◆ Coalesce Wait = 0 ms ◆ From the CLI run: "system network-memory mode 1" References For more information, refer to Silver Peaks website at http://www.silver-peak.com. ◆ NX Series Appliance Operator Guide ◆ NX Series Appliance Network Deployment Guide ◆ Quick Start Guide, VX Virtual Appliance, VMware vSphere / vSphere Hypervisor for configuring the VX virtual appliance ◆ Quick Start Guide, VRX-8 Virtual Appliance, VMware vSphere / vSphere Hypervisor, for configuring the VRX-8 virtual appliance ◆ VX Host System Requirements ◆ VRX-8 Host System Requirements32 WAN Optimization Controller Technologies TechBook
  33. 33. WAN Optimization ControllersRiverbed appliances This section provides information on the Riverbed Steelhead WAN Optimization Controller and the Riverbed Granite system. The following topics are discussed: ◆ “Overview” on page 33 ◆ “Terminology” on page 34 ◆ “Notes” on page 38 ◆ “Features” on page 39 ◆ “Deployment topologies” on page 39 ◆ “Failure modes supported” on page 39 ◆ “FCIP environment” on page 40 ◆ “GigE environment” on page 42 ◆ “References” on page 44Overview RiOS is the software that powers the Riverbeds Steelhead WAN Optimization Controller. The optimization techniques RiOS utilizes are: ◆ Data Streamlining ◆ Transport Streamlining ◆ Application Streamlining, and ◆ Management Streamlining RiOS uses a Riverbed proprietary algorithm called Scalable Data Referencing (SDR) along with data compression when optimizing data across the WAN. SDR breaks up TCP data streams into unique data chunks that are stored in the hard disk (data store) of the device running RiOS. Each data chunk is assigned a unique integer label (reference) before it is sent to a peer RiOS device across the WAN. When the same byte sequence is seen again in future transmissions from clients or servers, the reference is sent across the WAN instead of the raw data chunk. The peer RiOS device uses this reference to find the original data chunk on its data store, and reconstruct the original TCP data stream. After a data pattern is stored on the disk of a Steelhead appliance, it can be leveraged for transfers to any other Steelhead appliance across Riverbed appliances 33
  34. 34. WAN Optimization Controllers all applications being accelerated by Data Streamlining. Data Streamlining also includes optional QoS enforcement. QoS enforcement can be applied to both optimized and unoptimized traffic, both TCP and UDP. Steelhead appliances also use a generic latency optimization technique called Transport Streamlining. Transport Streamlining uses a set of standards and proprietary techniques to optimize TCP traffic between Steelhead appliances. These techniques ensure efficient retransmission methods, such as TCP selective acknowledgements, are used, optimal TCP window sizes are used to minimize the impact of latency on throughput to maximize throughput across WAN links. Transport Streamlining ensures that there is always a one-to-one ratio for active TCP connections between Steelhead appliances, and the TCP connections to clients and servers. That is, Steelhead appliances do not tunnel or perform multiplexing and de-multiplexing of data across connections. This is true regardless of the WAN visibility mode in use. Terminology Consider the following terminology when using Riverbed configuration settings: ◆ Adaptive Compression — Detects LZ data compression performance for a connection dynamically and turns it off (sets the compression level to 0) momentarily if it is not achieving optimal results. Improves end-to-end throughput over the LAN by maximizing the WAN throughput. By default, this setting is disabled. ◆ Adaptive Data Streamlining Mode SDR-M — RiOS uses a Riverbed proprietary algorithm called Scalable Data Referencing (SDR). SDR breaks up TCP data streams into unique data chunks that are stored in the hard disk (data store) of the device running RiOS. Each data chunk is assigned a unique integer label (reference) before it is sent to a peer RiOS device across the WAN. When the same byte sequence is seen again in future transmissions from clients or servers, the reference is sent across the WAN instead of the raw data chunk. The peer RiOS device uses this reference to find the original data chunk on its data store, and reconstruct the original TCP data stream. SDR-M performs data reduction entirely in memory, which prevents the Steelhead appliance from reading and writing to and from the34 WAN Optimization Controller Technologies TechBook
  35. 35. WAN Optimization Controllers disk. Enabling this option can yield high LAN-side throughput because it eliminates all disk latency. SDR-M is most efficient when used between two identical high-end Steelhead appliance models; for example, 6050 - 6050. When used between two different Steelhead appliance models, the smaller model limits the performance.! IMPORTANT You cannot use peer data store synchronization with SDR-M. In code stream 5.0.x, this must be set from the CLI by running: "datastore anchor-select 1033" and then "restart clean." ◆ Compression Level — Specifies the relative trade-off of data compression for LAN throughput speed. Generally, a lower number provides faster throughput and slightly less data reduction. Select a data store compression value of 1 (minimum compression, uses less CPU) through 9 (maximum compression, uses more CPU) from the drop-down list. The default value is 1. Riverbed recommends setting the compression level to 1 in high-throughput environments such as data center to data center replication. ◆ Correct Addressing — Turns WAN visibility off. Correct addressing uses Steelhead appliance IP addresses and port numbers in the TCP/IP packet header fields for optimized traffic in both directions across the WAN. This is the default setting. Also see "WAN Visibility Mode" on page 38. ◆ Data Store Segment Replacement Policy — Specifies a replacement algorithm that replaces the least recently used data in the data store, which improves hit rates when the data in the data store are not equally used. The default and recommended setting is Riverbed LRU. ◆ Guaranteed Bandwidth % — Specify the minimum amount of bandwidth (as a percentage) to guarantee to a traffic class when there is bandwidth contention. All of the classes combined cannot exceed 100%. During contention for bandwidth the class is guaranteed the amount of bandwidth specified. The class receives more bandwidth if there is unused bandwidth remaining. ◆ In-Path Rule Type/Auto-Discover — Uses the auto-discovery process to determine if a remote Steelhead appliance is able to optimize the connection attempting to be created by this SYN Riverbed appliances 35
  36. 36. WAN Optimization Controllers packet. By default, auto-discover is applied to all IP addresses and ports that are not secure, interactive, or default Riverbed ports. Defining in-path rules modifies this default setting. ◆ Multi-Core Balancing — Enables multi-core balancing which ensures better distribution of workload across all CPUs, thereby maximizing throughput by keeping all CPUs busy. Core balancing is useful when handling a small number of high-throughput connections (approximately 25 or less). By default, this setting is disabled. In the 5.0.x code stream, this needs to be performed from the CLI by running: "datastore traffic-load rule scraddr all scrport 0 dstaddr all dstport "1748" encode "med". ◆ Neural Framing Mode — Neural framing enables the system to select the optimal packet framing boundaries for SDR. Neural framing creates a set of heuristics to intelligently determine the optimal moment to flush TCP buffers. The system continuously evaluates these heuristics and uses the optimal heuristic to maximize the amount of buffered data transmitted in each flush, while minimizing the amount of idle time that the data sits in the buffer. For different types of traffic, one algorithm might be better than others. The considerations include: latency added to the connection, compression, and SDR performance. You can specify the following neural framing settings: • Never — Never use the Nagle algorithm. All the data is immediately encoded without waiting for timers to fire or application buffers to fill past a specified threshold. Neural heuristics are computed in this mode but are not used. • Always — Always use the Nagle algorithm. All data is passed to the codec which attempts to coalesce consume calls (if needed) to achieve better fingerprinting. A timer (6 ms) backs up the codec and causes leftover data to be consumed. Neural heuristics are computed in this mode but are not used. • TCP Hints — This is the default setting which is based on the TCP hints. If data is received from a partial frame packet or a packet with the TCP PUSH flag set, the encoder encodes the data instead of immediately coalescing it. Neural heuristics are computed in this mode but are not used.36 WAN Optimization Controller Technologies TechBook
  37. 37. WAN Optimization Controllers • Dynamic — Dynamically adjust the Nagle parameters. In this option, the system discerns the optimum algorithm for a particular type of traffic and switches to the best algorithm based on traffic characteristic changes.◆ Optimization Policy — When configuring In-path Rules you have the option of configuring the optimization policy. There are multiple options that can be selected and it is recommended to set this option to "Normal" for EMC replication protocols, such as SRDF/A. The configurable options are as follows: • Normal — Perform LZ compression and SDR • SDR-Only — Perform SDR; do not perform LZ compression • Compression-Only — Perform LZ compression; do not perform SDR • None — Do not perform SDR or LZ compression◆ Queue - MXTCP — When creating QoS Classes you will need to specify a queuing method. MXTCP has very different use cases than the other queue parameters. MXTCP also has secondary effects that you need to understand before configuring, including: • When optimized traffic is mapped into a QoS class with the MXTCP queuing parameter, the TCP congestion control mechanism for that traffic is altered on the Steelhead appliance. The normal TCP behavior of reducing the outbound sending rate when detecting congestion or packet loss is disabled, and the outbound rate is made to match the minimum guaranteed bandwidth configured on the QoS class. • You can use MXTCP to achieve high-throughput rates even when the physical medium carrying the traffic has high loss rates. For example, MXTCP is commonly used for ensuring high throughput on satellite connections where a lower-layer-loss recovery technique is not in use. • Another usage of MXTCP is to achieve high throughput over high bandwidth, high-latency links, especially when intermediate routers do not have properly tuned interface buffers. Improperly tuned router buffers cause TCP to perceive congestion in the network, resulting in unnecessarily dropped packets, even when the network can support high throughput rates. Riverbed appliances 37
  38. 38. WAN Optimization Controllers ! IMPORTANT Use caution when specifying MXTCP. The outbound rate for the optimized traffic in the configured QoS class immediately increases to the specified bandwidth, and does not decrease in the presence of network congestion. The Steelhead appliance always tries to transmit traffic at the specified rate. If no QoS mechanism (either parent classes on the Steelhead appliance, or another QoS mechanism in the WAN or WAN infrastructure) is in use to protect other traffic, that other traffic might be impacted by MXTCP not backing off to fairly share bandwidth. When MXTCP is configured as the queue parameter for a QoS class, the following parameters for that class are also affected: Link share weight — The link share weight parameter has no effect on a QoS class configured with MXTCP. Upper limit —The upper limit parameter has no effect on a QoS class configured with MXTCP. ◆ Reset Existing Client Connections on Start-Up — Enables kickoff. If you enable kickoff, connections that exist when the Steelhead service is started and restarted are disconnected. When the connections are retried they are optimized. If kickoff is enabled, all connections that existed before the Steelhead appliance started are reset. ◆ WAN Visibility Mode/CA — Enables WAN visibility, which pertains to how packets traversing the WAN are addressed. RiOS v5.0 or later offers three types of WAN visibility modes: correct addressing, port transparency, and full address transparency. You configure WAN visibility on the client-side Steelhead appliance (where the connection is initiated). The server-side Steelhead appliance must also support WAN visibility (RiOS v5.0 or later). ALso see "Correct Addressing" on page 35. Notes Consider the following when using Riverbed configuration settings: ◆ LAN Send and Receive Buffer Size should be configured to 2 MB38 WAN Optimization Controller Technologies TechBook
◆ WAN Send and Receive Buffer Size is environment dependent and should be configured using the following formula: WAN BW * RTT * 2 / 8 = xxxxxxx bytes (a worked example follows the lists below).

Features

Features include:

◆ SDR (Scalable Data Referencing)
◆ Compression
◆ QoS (Quality of Service)
◆ Data / Transport / Application / Management Streamlining
◆ Encryption - IPsec

Deployment topologies

Deployment topologies include:

◆ In-Path
  • Physical In-Path
◆ Virtual In-Path
  • WCCPv2 (Web Cache Coordination Protocol)
  • PBR (Policy-Based-Routing)
◆ Out-of-Path
  • Proxy
◆ Steelheads 7050 and 701 support 10 Gb Fibre data ports
◆ The virtual Steelheads are supported when deployed on VMware ESX or ESXi servers. The virtual appliances can only be deployed in out-of-path configurations.

Failure modes supported

The following failure modes are supported:

◆ Fail-to-wire
◆ Fail-to-block
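The buffer formula above is the standard bandwidth-delay-product (BDP) calculation with the result doubled. The small worked example below assumes the WAN bandwidth is expressed in bits per second and the RTT in seconds; the link values used are purely illustrative.

    def wan_buffer_bytes(wan_bw_bps: float, rtt_seconds: float) -> int:
        """WAN send/receive buffer size in bytes: 2 x BDP = BW * RTT * 2 / 8."""
        return int(wan_bw_bps * rtt_seconds * 2 / 8)

    # Example: a 155 Mb/s WAN link with an 80 ms round-trip time.
    bw = 155_000_000                    # bits per second
    rtt = 0.080                         # seconds
    print(wan_buffer_bytes(bw, rtt))    # -> 3,100,000 bytes, roughly a 3 MB buffer

The same arithmetic underlies the 2*BDP values called out in the FCIP and GigE settings that follow; the LAN-side buffers, by contrast, are fixed at 2 MB (2097152 bytes) per the Notes above.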
  40. 40. WAN Optimization Controllers FCIP environment The following Riverbed configuration settings are recommended in a FCIP environment: ◆ Configure > Networking > QoS Classification: • QoS Classification and Enforcement = Enabled • QoS Mode = Flat • QoS Network Interface with WAN throughput = Enabled for appropriate WAN interface and set available WAN Bandwidth • QoS Class Latency Priority = Real Time • QoS Class Guaranteed Bandwidth % = Environment dependent • QoS Class Link Share Weight = Environment dependent • QoS Class Upper Bandwidth % = Environment dependent • Queue = MXTCP • QoS Rule Protocol = All • QoS Rule Traffic Type = Optimized • DSCP = All • VLAN = All ◆ Configure > Optimization > General Service Settings: • In-Path Support = Enabled • Reset Existing Client Connections on Start-Up = Enabled • Enable In-Path Optimizations on Interface In-Path_X_X for appropriate In-Path interface • In RiOS v5.5.3 CLI or later: “datastore codec multi-codec encoder max-ackqlen 30" • In RiOS v6.0.1a or later: "datastore codec multi-codec encoder global-txn-max 128" • In RiOS v6.0.1a or later: "datastore sdr-policy sdr-m" • In RiOS v6.0.1a or later: " datastore codec multi-core-bal" • In RiOS v6.0.1a or later: "datastore codec compression level 1" ◆ Configure > Optimization > In-Path Rules: • Type = Auto Discovery • Preoptimization Policy = None40 WAN Optimization Controller Technologies TechBook
  41. 41. WAN Optimization Controllers • Optimization Policy = Normal • Latency Optimization Policy = Normal • Neural Framing Mode = Never • WAN Visibility = Correct Addressing • In RiOS v5.5.3 CLI or later for FCIP: “in-path always-probe enable” • In RiOS v5.5.3 CLI or later for FCIP: “in-path always-probe port 3225” • In RiOS v6.0.1a or later: "in-path always-probe port 0" • In RiOS v6.0.1a or later: "tcp adv-win-scale -1" • In RiOS v6.0.1a or later: "in-path kickoff-resume" • In RiOS v6.0.1a or later: "protocol FCIP enable" for FCIP • In RiOS v6.0.1a or later: "protocol srdf enable " for Symmetrix DMX and VMAX Or, in RiOS v 6.1.1.a or later, you can use the GUI as follows: – Configure > Optimization > FCIP - FCIP Settings - Enable FCIP - FCIP Ports: 3225, 3226, 3227, 3228 • In RiOS v6.0.1a or later: "protocol fcip rule scr-ip 0.0.0.0 dst-ip 0.0.0.0 dif enable" for EMC Symmetrix VMAX™ Or, in RiOS v 6.1.1.a or later, you can use the GUI as follows: – Rules > Add a New Rule - Enable DIF if R1 and R2 are VMAX and hosts are Open Systems or IBM iSeries (AS/400) - DIF Data Block Size: 512 bytes (Open Systems) and 520 Bytes (IBM iSeries, AS/400) - No DIF setting is required if mainframe hosts are in use • In RiOS v6.0.1i or later: "sport splice-policy outer-rst-port port 3226" for Brocade FCIP only◆ Configure > Optimization > Performance: • High Speed TCP = Enabled • LAN Send Buffer Size = 2097152 • LAN Receive Buffer Size = 2097152 Riverbed appliances 41
  42. 42. WAN Optimization Controllers • WAN Default Send Buffer Size = 2*BDP (BW * RTT * 2 / 8 = xxxxxxx bytes) Note: BDP = Bandwidth delay product. • WAN Default Rcv Buffer Size = 2*BDP (BW * RTT * 2 / 8 = xxxxxxx bytes) • Data Store Segment Replacement Policy = Riverbed LRU • Adaptive Data Streamlining Modes = SDR-M Note: Adaptive Data Streamlining Modes = SDR-Default for the 7050/701 appliances. • Compression Level = 1 • Adaptive Compression = Disabled • Multi-Core Balancing = Enabled Note: Multi-Core Balancing should be disabled if the number of connections through the steelheads is greater than the number of cores on the Steelhead appliance. GigE environment The following are Riverbed configuration settings recommended in a GigE environment: In RiOS v6.1.1a or later, Steelheads will be able to automatically detect and disable the Symmetrix VMAX and DMX compression by default. Use show log from the Steelhead to verify that compression on the VMAX/DMX has been disabled. The "Native Symmetrix RE port compression detected: auto-disabling" message will display only on the Steellhead present on the Symmetrix local or remote side which initiates the connection. With Riverbed firmware v6.1.3a and above, the SRDF Selective Optimization feature is supported for SRDF group level optimization for end-to-end GigE environments with VMAX which have EMC Enginuity v5875 and later. Refer to the Riverbed Steelhead deployment and CLI guide for further instructions. ◆ Configure > Networking > Outbound QoS (Advanced): • QoS Classification and Enforcement = Enabled42 WAN Optimization Controller Technologies TechBook
  43. 43. WAN Optimization Controllers • QoS Mode = Flat • QoS Network Interface with WAN throughput = Enabled for appropriate WAN interfaces and set to available WAN Bandwidth • QoS Class Latency Priority = Real Time • QoS Class Guaranteed Bandwidth % = Environment dependent • QoS Class Link Share Weight = Environment dependent • QoS Class Upper Bandwidth % = Environment dependent • Queue = MXTCP • QoS Rule Protocol = All • QoS Rule Traffic Type = Optimized • DSCP = Reflect◆ Configure > Optimization > General Service Settings: • In-Path Support = Enabled • Reset Existing Client Connections on Start-Up = Enabled • Enable In-Path Optimizations on Interface In-Path_X_X • In RiOS v5.5.3 CLI and later: “datastore codec multi-codec encoder max-ackqlen 30 • In RiOS v6.0.1a CLI or later: "datastore codec multi-codec encoder global-txn-max 128"◆ Configure > Optimization > In-Path Rules: • Type = Auto Discovery • Preoptimization Policy = None • Optimization Policy = Normal • Latency Optimization Policy = Normal • Cloud Acceleration = Auto • Neural Framing Mode = Never • WAN Visibility =Correct Addressing • In RiOS v5.5.3 CLI or later for GigE: “in-path always-probe enable” • In RiOS v5.5.3 CLI or later for GigE: “in-path always-probe port 1748” • In RiOS v5.0.5-DR CLI or later for GigE: “in-path asyn-srdf always-probe enable” • In RiOS v6.0.1a or later: "in-path always-probe port 0" • In RiOS v6.0.1a or later: "tcp adv-win-scale -1" • In RiOS v6.0.1a or later: "in-path kickoff-resume" • In RiOS v6.0.1a or later: "protocol srdf enable " for Symmetrix DMX and VMAX Or, in RiOS v 6.1.1.a or later, you can use the GUI as follows: – Configure > Optimization > SRDF Riverbed appliances 43
– SRDF Settings
– Enable SRDF
– SRDF Ports: 1748
• In RiOS v6.0.1a or later: "protocol srdf rule src-ip 0.0.0.0 dst-ip 0.0.0.0 dif enable" for Symmetrix VMAX
Or, in RiOS v6.1.1a or later, you can use the GUI as follows:
– Rules > Add a New Rule
– Enable DIF if R1 and R2 are VMAX and hosts are Open Systems or IBM iSeries (AS/400)
– DIF Data Block Size: 512 bytes (Open Systems) and 520 bytes (IBM iSeries, AS/400)

◆ Configure > Optimization > Transport Settings:
• High Speed TCP = Enabled
• LAN Send Buffer Size = 2097152
• LAN Receive Buffer Size = 2097152
• WAN Default Send Buffer Size = 2*BDP (BW * RTT * 2 / 8 = xxxxxxx bytes)

◆ Configure > Optimization > Performance:
• WAN Default Rcv Buffer Size = 2*BDP (BW * RTT * 2 / 8 = xxxxxxx bytes)
• Data Store Segment Replacement Policy = Riverbed LRU
• Adaptive Data Streamlining Modes = SDR-M

Note: Adaptive Data Streamlining Modes = SDR-Default for the 7050/701 appliances.

• Compression Level = 1
• Adaptive Compression = Disabled
• Multi-Core Balancing = Enabled

Note: Multi-Core Balancing should be disabled if the number of connections through the Steelheads is greater than the number of cores on the Steelhead appliance.

References

For more information, refer to Riverbed's website at http://www.riverbed.com, and to the Riverbed Steelhead deployment and command-line interface (CLI) guides for your RiOS release, as referenced in the GigE notes above.
