WAN Optimization Controller Technologies
DESCRIPTION
Provides a high-level overview of wide area network (WAN) optimization controllers (WOC), including network and deployment topologies, storage and replication application, Fibre Channel over IP (FCIP) configurations, and WOC appliances.
TRANSCRIPT
WAN Optimization Controller Technologies
Version 3.1
Network and Deployment Topologies
Storage and Replication
FCIP Configuration
WAN Optimization Controller Appliances
Vinay Jonnakuti, Chuan Liu, Eric Pun, Donald Robertson, Tom Zhao
Copyright 2012-2014 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).
Part number H8076.5
Preface.............................................................................................................................. 5
Chapter 1  Network and Deployment Topologies and Implementations
Overview ............ 12
Network topologies and implementations ............ 13
Deployment topologies ............ 15
Storage and replication application ............ 17
Configuration settings ............ 17
Network topologies and implementations ............ 18
Notes ............ 19
Chapter 2  FCIP Configurations
Brocade FCIP ............ 22
Configuration settings ............ 22
Brocade FCIP Tunnel settings ............ 22
Rules and restrictions ............ 23
References ............ 24
Cisco FCIP ............ 25
Configuration settings ............ 25
Notes ............ 26
Basic guidelines ............ 27
Rules and restrictions ............ 28
References ............ 28
Chapter 3  WAN Optimization Controllers
Riverbed Steelhead appliances ............ 30
Overview ............ 30
Terminology ............ 31
Notes ............ 36
Features ............ 36
Deployment topologies ............ 36
Failure modes supported ............ 37
FCIP environment ............ 37
GigE environment ............ 39
References ............ 42
Riverbed Granite solution ............ 43
Overview ............ 43
Features ............ 45
Configuring Granite Core High Availability ............ 47
Deployment topologies ............ 50
Configuring iSCSI settings on EMC storage ............ 51
Configuring iSCSI initiator on Granite Core ............ 52
Configuring iSCSI portal ............ 53
Configuring LUNs ............ 56
Configuring local LUNs ............ 58
Adding Granite Edge appliances ............ 59
Configuring CHAP users ............ 60
Confirming connection to the Granite Edge appliance ............ 61
References ............ 62
Silver Peak appliances ............ 63
Overview ............ 63
Terminology ............ 64
Features ............ 66
Deployment topologies ............ 67
Failure modes supported ............ 67
FCIP environment ............ 67
GigE environment ............ 68
References ............ 69
Preface
This EMC Engineering TechBook provides a high-level overview of the WAN Optimization Controller (WOC) appliance, including network and deployment topologies, storage and replication application, FCIP configurations, and WAN Optimization Controller appliances.
E-Lab would like to thank all the contributors to this document, including EMC engineers, EMC field personnel, and partners. Your contributions are invaluable.
As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes. If a product does not function properly or does not function as described in this document, please contact your EMC representative.
Audience
This TechBook is intended for EMC field personnel, including technology consultants, and for the storage architect, administrator, and operator involved in acquiring, managing, operating, or designing a networked storage environment that contains EMC and host devices.
EMC Support Matrix and E-Lab Interoperability Navigator
For the most up-to-date information, always consult the EMC Support Matrix (ESM), available through E-Lab Interoperability Navigator (ELN) at http://elabnavigator.EMC.com.
All of the matrices, including the ESM (which does not include most software), are subsets of the E-Lab Interoperability Navigator database. Included under this tab are:
The EMC Support Matrix, a complete guide to interoperable, and supportable, configurations.
Subset matrices for specific storage families, server families, operating systems, or software products.
Host connectivity guides for complete, authoritative information on how to configure hosts effectively for various storage environments.
Consult the Internet Protocol pdf under the "Miscellaneous" heading for EMC's policies and requirements for the EMC Support Matrix.
Related documentation
The following documents, including this one, are available through the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com.
These documents are also available at the following location:
http://www.emc.com/products/interoperability/topology-resource-center.htm
Backup and Recovery in a SAN TechBook
Building Secure SANs TechBook
Extended Distance Technologies TechBook
Fibre Channel over Ethernet (FCoE) Data Center Bridging (DCB) Concepts and Protocols TechBook
Fibre Channel over Ethernet (FCoE) Data Center Bridging (DCB) Case Studies TechBook
Fibre Channel SAN Topologies TechBook
iSCSI SAN Topologies TechBook
Networked Storage Concepts and Protocols TechBook
Networking for Storage Virtualization and RecoverPoint TechBook
EMC Connectrix SAN Products Data Reference Manual
Legacy SAN Technologies Reference Manual
Non-EMC SAN Products Data Reference Manual
EMC Symmetrix Remote Data Facility (SRDF) Connectivity Guide, located on the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com.
EMC Support Matrix, available through E-Lab Interoperability Navigator at http://elabnavigator.EMC.com.
RSA security solutions documentation, which can be found at http://RSA.com > Content Library
EMC documentation and release notes can be found at EMC Online Support (https://support.emc.com).
For vendor documentation, refer to the vendor's website.
Authors of this TechBook
This TechBook was authored by Vinay Jonnakuti and Eric Pun, along with other EMC engineers, EMC field personnel, and partners.
Vinay Jonnakuti is a Sr. Corporate Systems Engineer in the Unified Storage division of EMC focusing on VNX and VNXe products, working on pre-sales deliverables including collateral, customer presentations, customer beta testing, and proofs of concept. Vinay has been with EMC for over 6 years. Prior to his current position, Vinay worked in EMC E-Lab leading the qualification and architecting of solutions with WAN optimization appliances from various partners with various replication technologies, including SRDF (GigE/FCIP), SAN Copy, MirrorView, VPLEX, and RecoverPoint. Vinay also worked on Fibre Channel and iSCSI qualification on the VMAX storage arrays.
Chuan Liu is a Senior Systems Integration Engineer with more than 6 years of experience in the telecommunication industry. After joining EMC, he worked in E-Lab qualifying IBM/HP/Cisco blade switches and WAN optimization products. Currently, Chuan focuses on qualifying SRDF with FCIP/GigE technologies used in the setup of different WAN optimization products.
Eric Pun is a Senior Systems Integration Engineer and has been with EMC for over 13 years. For the past several years, Eric has worked in E-Lab qualifying interoperability between Fibre Channel switch hardware and distance extension products. The distance extension technology includes DWDM, CWDM, OTN, FC-SONET, FC-GbE, FC-SCTP, and WAN optimization products. Eric has been a contributor to various E-Lab documents, including the SRDF Connectivity Guide.
Donald Robertson is a Senior Systems Integration Engineer and has held various engineering positions in the storage industry for over 18 years. As part of the EMC E-Lab team, Don leads the qualification and architecting of solutions with WAN optimization appliances from various partners using various replication technologies, including SRDF (GigE/FCIP), VPLEX, and RecoverPoint.
Tom Zhao is a Systems Engineer Team Lead with over 6 years of experience in the IT industry, including over one year in storage at EMC. Tom works in E-Lab qualifying Symmetrix, RecoverPoint, WAN Optimization, and cache-based products and solutions. Prior to EMC, Tom focused on developing management and maintenance tools for x86 servers and platforms.
Conventions used in this document
EMC uses the following conventions for special notices:
Note: A note presents information that is important, but not hazard-related.
Typographical conventions
EMC uses the following type style conventions in this document:

Bold              Use for names of interface elements, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks)
Italic            Use for full titles of publications referenced in text
Monospace         Use for: system output, such as an error message or script; system code; pathnames, filenames, prompts, and syntax; commands and options
Monospace italic  Use for variables
Monospace bold    Use for user input
[ ]               Square brackets enclose optional values
|                 Vertical bar indicates alternate selections; the bar means "or"
{ }               Braces enclose content that the user must specify, such as x or y or z
...               Ellipses indicate nonessential information omitted from the example

Where to get help
EMC support, product, and licensing information can be obtained as follows:

Note: To open a service request through the EMC Online Support site, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.
Product information
For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Online Support site (registration required) at:
https://support.EMC.com
Technical support
EMC offers a variety of support options.
Support by Product
EMC offers consolidated, product-specific information on the Web at:
https://support.EMC.com/products
The Support by Product web pages offer quick links to Documentation, White Papers, Advisories (such as frequently used Knowledgebase articles), and Downloads, as well as more dynamic content, such as presentations, discussion, relevant Customer Support Forum entries, and a link to EMC Live Chat.
EMC Live Chat
Open a Chat or instant message session with an EMC Support Engineer.
eLicensing support
To activate your entitlements and obtain your Symmetrix license files, visit the Service Center on https://support.EMC.com, as directed on your License Authorization Code (LAC) letter e-mailed to you.
For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your EMC Account Representative or Authorized Reseller.
For help with any errors applying license files through Solutions Enabler, contact the EMC Customer Support Center.
If you are missing a LAC letter, or require further instructions on activating your licenses through the Online Support site, contact EMC's worldwide Licensing team at [email protected] or call:
North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
EMEA: +353 (0) 21 4879862 and follow the voice prompts.
We'd like to hear from you!
Your feedback on our TechBooks is important to us! Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. We want our books to be as helpful and relevant as possible. Send us your comments, opinions, and thoughts on this or any other TechBook to:
[email protected]
1  Network and Deployment Topologies and Implementations

This chapter provides the following information for the WAN Optimization Controller (WOC) appliance:
Overview ............ 12
Network topologies and implementations ............ 13
Deployment topologies ............ 15
Storage and replication application ............ 17
Overview
A WAN Optimization Controller (WOC) is an appliance that can be placed in-line or out-of-path to reduce and optimize the data that is to be transmitted over the LAN/MAN/WAN. These devices are designed to help mitigate the effects of packet loss, network congestion, and latency while reducing the overall amount of data to be transmitted over the network.
In general, the technologies utilized in accomplishing this are Transmission Control Protocol (TCP) acceleration, data deduplication, and compression. Additionally, features such as QoS, Forward Error Correction (FEC), and encryption may also be available.
Network links and WAN circuits can have high latency and/or packet loss as well as limited capacity. WAN Optimization Controllers can be used to maximize the amount of data that can be transmitted over a link. In some cases, these appliances may be a necessity, depending on performance requirements.
WAN and data optimization can occur at varying layers of the OSI stack, whether at the network and transport layers, the session, presentation, and application layers, or just to the data (payload) itself.
Network topologies and implementations
TCP was developed as a local area network (LAN) protocol. However, with the advancement of the Internet, it was expanded to be used over the WAN. Over time TCP has been enhanced, but even with these enhancements TCP is still not well suited for WAN use for many applications.
The primary factors that directly impact TCP's ability to be optimized over the WAN are latency, packet loss, and the amount of bandwidth to be utilized. It is these factors on which the layer 3/4 optimization products focus. Many of these optimization products will re-encapsulate the packets into UDP or their proprietary protocol, while others may still use TCP but optimize the connections between a set of WAN Optimization Controllers at each end of the WAN. While some products create tunnels to perform their peer-to-peer connection between appliances for the optimized data, others may just modify or tag other aspects within the packet to ensure that the far-end WOC captures the optimized traffic.
Optimization of the payload (data) within the packet focuses on the reduction of the actual payload as it passes over the network through the use of data compression and/or data de-duplication engines (DDEs). Compression is performed through the use of data compression algorithms, while DDE uses large data pattern tables and associated pointers (fingerprints). Large amounts of memory and/or hard-drive storage can be used to store these pattern tables and pointers. Identical tables are built in the optimization appliances on both sides of the WAN, and as new traffic passes through the WOC, patterns are matched and only the associated pointers are sent over the network (versus resending the data). While the typical LZ compression ratio is about 2:1, DDE ratios can range greatly, depending on many factors. In general, the combination of both of these technologies, DDE and compression, will achieve around a 5:1 reduction level (and sometimes much higher ratios).
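The table-and-pointer mechanism described above can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's implementation; the fixed 64-byte chunking, the 8-byte SHA-256 fingerprints, and zlib compression are all assumptions made for the example.

```python
import hashlib
import zlib

def woc_encode(stream: bytes, table: dict, chunk_size: int = 64) -> list:
    """Model of a WOC transmit path: for each chunk, send only a short
    fingerprint (pointer) if the shared pattern table already holds the
    pattern; otherwise send the compressed chunk and record it. Both
    appliances build identical tables, so pointers resolve at the far end."""
    out = []
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        fp = hashlib.sha256(chunk).digest()[:8]   # 8-byte fingerprint
        if fp in table:
            out.append(("ref", fp))               # pointer only: 8 bytes vs 64
        else:
            table[fp] = chunk
            out.append(("data", zlib.compress(chunk)))
    return out

# Repeated replication traffic: a second pass over the same data sends
# nothing but pointers, which is where high DDE ratios come from.
table = {}
payload = bytes(range(256)) * 16            # 4 KB with repeating patterns
first = woc_encode(payload, table)
second = woc_encode(payload, table)
assert all(kind == "ref" for kind, _ in second)
```

Real appliances use variable-size chunking and persistent, disk-backed tables, but the pointer-for-pattern exchange is the same idea.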
Layer 4/7 optimization is what is called the "application" layer of optimization. This area of optimization can take many approaches that vary widely, but it is generally done through the use of application-aware optimization engines. The actions taken by these engines can result in benefits, including reductions in the number of transactions that occur over the network or more efficient use of bandwidth. It is also at this layer that TCP optimization occurs.
Overall, WAN optimizers can be aligned with customer networking best practices, and it should be made clear to the customer that applications using these devices can, and should, be prioritized based on their WAN bandwidth/throughput requirements.
Deployment topologies
There are two basic topologies for deployment:
In-path/in-line/bridge
Out-of-path/routed
An in-path/in-line/bridge deployment, as shown in Figure 1, means that the WAN Optimization Controller (WOC) is directly in the path between the source and destination end points, where all inbound and outbound flows will pass through the WAN Optimization Controllers. The WOC devices at each site are typically placed as close as possible to the WAN circuit.
Figure 1 In-path/in-line/bridge topology
An out-of-path/routed deployment, as shown in Figure 2, means that the WOC is not in the direct path between the source and destination end points. The traffic must be routed/redirected to the WOC devices using routing features such as WCCP, PBR, VRRP, etc.
Figure 2 Out-of-path/routed topology
WCCPv2 (Web Cache Communication Protocol) is a content routing protocol that provides a mechanism to redirect traffic in real time. WCCP also has built-in mechanisms to support load balancing, fault tolerance, and scalability.
PBR (Policy Based Routing) is a technique used to make routing decisions based on policies or a combination of policies such as packet size, protocol of the payload, source, destination, or other network characteristics.
VRRP (Virtual Router Redundancy Protocol) is a redundancy protocol designed to increase the availability of a default gateway.
In the event of a power failure or WOC hardware or software failure, it is necessary for the WOC to provide some level of action. The WOC can either continue to allow data to pass through, unoptimized, or it can block all traffic from flowing through it. The failure modes typically offered by WAN optimizers are commonly referred to as:
Fails-to-Wire
The appliance will behave as a crossover cable connecting the Ethernet LAN switch directly to the WAN router, and traffic will continue to flow uninterrupted and unoptimized.
Fails-Open / Fails-to-Block
The appliance will behave as an open port to the WAN router. The WAN router will recognize that the link is down and will begin forwarding traffic according to its routing tables.
Depending upon your deployment topology, you may determine that one method is better suited for your environment than the other.
Storage and replication application
This section provides storage and replication application details for EMC products:
Symmetrix/VMAX SRDF
RecoverPoint
SAN Copy
Celerra Replicator
MirrorView
Configuration settings
Configuration settings are as follows:
Compression on GigE (RE) port = Enabled
Note: For Riverbed Steelhead RiOS v6.1.1a or later and Silver Peak NX-OS 4.4 or later, the compression setting should be Enabled on the Symmetrix storage system. The WAN optimization appliances automatically detect and disable compression on the Symmetrix system. In the event the WAN optimization appliances go down or are removed, the Symmetrix REs will re-enable compression and provide some level of bandwidth reduction, although likely not to the level provided by the WAN optimization appliances.
SRDF Flow Control = Enabled
Note: In a GigE WAN optimization environment, use the following:
For Riverbed, use legacy flow control for 5876.229.145 and older ucode. Use dynamic flow control for 5876.251.161 and later. Dynamic flow control is only supported with Riverbed using RiOS 8.0.2 and later. Refer to the WAN Optimization Controller table in the EMC Support Matrix for supported RiOS revisions. In some instances when there is packet loss, legacy flow control may increase performance if customer requirements are not being met.
For Silver Peak, dynamic flow control is the recommended flow control setting.
Note: In a GigE WAN optimization environment, if legacy flow control is used, set the JFC window buffer size to 2048 (0x800).
In an FCIP WAN optimization environment, if legacy flow control is used, set the JFC window buffer size to 49K (0xC400).
When upgrading from 5876.229.145 and older ucode (where legacy flow control should be set) to 5876.251.161 and later, it is recommended to remain at legacy flow control.
Disable the speed limit for transmit rate on GigE (RE) ports.
GigE connection number
More connections bring more LAN throughput and a higher WAN compression ratio for Riverbed Steelhead deployments. Increasing the number of TCP connections through the number of physical ports is the recommended approach. This approach is beneficial because it is commonly configured in the field and also adds CPU processing power. If additional TCP connections are required, increase the number of TCP connections per DID. However, be aware that too many TCP connections are not desirable for a number of reasons. EMC recommends no more than 32 TCP connections per group of meshed GigE links.
Network topologies and implementations
In general, it has been observed that optimization ratios are higher with SRDF/A than SRDF Adaptive Copy. There are many factors that impact how much optimization will occur; therefore, results will vary.
Notes
Note the following:
Symmetrix configuration settings
Compression
Compression should always be enabled on the Symmetrix GigE ports if the WAN optimization controller performs dedupe and has the capability of dynamically disabling compression for the Symmetrix GigE port. Riverbed Steelhead and Silver Peak WAN optimization controllers support this feature. This ensures that dedupe can always be applied to uncompressed data when a WAN optimization controller is present, yet compression is still applied if WAN optimization is bypassed.
SRDF Flow Control
SRDF Flow Control is enabled by default for increased stability of the SRDF links. In some cases, further tuning of SRDF flow control and related settings can be made to improve performance. For more information, refer to "Storage and replication application" on page 17 or contact your EMC Customer Service representative.
Data reduction considerations
In general, it has been observed that optimization ratios are higher with GigE ports on the GigE director as opposed to FCIP. There are many factors that impact how much optimization will occur (for example, SRDF mode or repeatability of data patterns); therefore, results will vary.
2  FCIP Configurations

This chapter provides FCIP configuration information for:
Brocade FCIP ............ 22
Cisco FCIP ............ 25
Brocade FCIP
This section provides configuration information for Brocade FCIP.
Note: Support for Brocade FCIP with WAN Optimization Controllers is limited. Please check the WAN Optimization Controller table in the EMC Support Matrix for supported configurations. The EMC Support Matrix is available at https://elabnavigator.emc.com.
Configuration settings
Configuration settings are as follows:
FCIP Fastwrite = Enabled
Compression = Disabled
TCP Byte Streaming = Enabled
Commit Rate or Max/Min settings = in Kb/s (environment dependent)
Tape Pipelining = Disabled
SACK = Enabled
Min Retransmit Time = 100
Keep-Alive Timeout = 10
Max Re-Transmissions = 8
Brocade FCIP Tunnel settings
Consider the following:
FCIP Fastwrite
This setting accelerates SCSI write I/Os over the FCIP tunnel. It cannot be combined with FC Fastwrite. FCIP Fastwrite should be enabled and FC Fastwrite should be disabled when using WAN Optimization Controller (WOC) devices.
There are two different Fastwrites: FC Fastwrite and FCIP Fastwrite. FC Fastwrite applies to FC ISLs, while FCIP Fastwrite (same FC protocol) applies to FCIP tunnels.
Compression
This simply compresses the data that flows over the FCIP tunnel. It should be disabled when using WAN Optimization Controller (WOC) devices, thus allowing the WOC device to perform the compression and data de-duplication.
Commit Rate
This setting is environment dependent and should be set in accordance with the WAN Optimization vendor. Considerations such as the data to be optimized, the available WAN circuit size, and the data-reduction ratio need to be taken into account.
Adaptive Rate Limit (ARL)
On newer installations that have the ARL feature, the Commit Rate is replaced by Minimum and Maximum rates. When used with WAN optimization, the maximum is always set to the port link speed. Refer to the Brocade or WAN optimization vendor documentation for more information.
TCP Byte Streaming
This is a Brocade feature that allows a Brocade FCIP switch to communicate with a third-party WAN Optimization Controller. This feature supports an FCIP frame that has been split into a maximum of 8 separate TCP segments. If the frame is split into more than eight segments, it results in prematurely sending a frame to the FCIP layer with an incorrect size, and the FCIP tunnel bounces.
Rules and restrictions
Consider the following rules and restrictions when using TCP byte streaming:
Only one FCIP tunnel is allowed to be configured for a GigE port that has TCP Byte Streaming configured.
FCIP tunnel cannot have compression enabled.
FCIP tunnel cannot have FC Fastwrite enabled.
FCIP tunnel must have a committed rate set.
Both sides of the FCIP tunnel must be identically configured.
TCP byte streaming is not compatible with older FOS revisions, which do not have the option available.
References
For further information, refer to https://support.emc.com and http://www.brocade.com.
EMC Connectrix B Series Fabric OS Administrator's Guide
Brocade Fabric OS Administrator's Guide
Cisco FCIP
This section provides configuration information for Cisco FCIP.
Configuration settings
Configuration settings are as follows:
Max-Bandwidth = Environment dependent (Default = 1000 Kb)
Min-Available-Bandwidth = Normally set to WAN bandwidth / number of GigE links using that bandwidth.
For example, if WAN = 1 Gb and using 2 GigE ports, then the Min = 480 Mb; if using 4 GigE, then Min = 240 Mb.
Estimated roundtrip time = Set to measured latency (round-trip time, RTT) between MDS switches
IP Compression = Disabled
FCIP Write Acceleration = Enabled
Tape Accelerator = Disabled
Encryption = Disabled
Min Re-Transmit Timer = 200 ms
Max Re-Transmissions = 8
Keep-Alive = 60
SACK = Enabled
Timestamp = Disabled
PMTU = Enabled
CWM = Enabled
CWM Burst Size = 50 KB
Notes
Consider the following information for Cisco FCIP tunnel settings:
Max-Bandwidth
The max-bandwidth-mbps parameter and the measured RTT together determine the maximum window size. This should be configured to match the worst-case bandwidth available on the physical link.
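To see how the two parameters "together determine" the window: the number of bytes a connection must keep in flight to fill a link is the bandwidth-delay product. This is the generic TCP sizing calculation, not a formula quoted from Cisco MDS documentation; the helper name and units below are our own.

```python
def max_window_bytes(bandwidth_mbps: float, rtt_ms: float) -> int:
    """Bandwidth-delay product: bytes that must be in flight to keep a
    link of the given rate full at the given round-trip time."""
    bits_in_flight = bandwidth_mbps * 1_000_000 * (rtt_ms / 1000.0)
    return int(bits_in_flight / 8)

# A 1000 Mb/s link at 10 ms RTT needs a window of about 1.25 MB;
# halve the RTT and the required window halves too.
print(max_window_bytes(1000, 10))  # 1250000
print(max_window_bytes(1000, 5))   # 625000
```

This is why overstating either the bandwidth or the RTT inflates the window the switch tries to maintain, and why the text recommends configuring the worst-case physical-link bandwidth.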
Min-Available-Bandwidth
The min-available-bandwidth parameter and the measured RTT together determine the threshold below which TCP aggressively maintains a window size sufficient to transmit at the minimum available bandwidth. It is recommended that you adjust this to 50-80% of the Max-Bandwidth.
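The per-link arithmetic shown in the configuration settings (1 Gb WAN shared by 2 GigE ports gives a 480 Mb minimum each; 4 ports give 240 Mb) can be expressed as a small helper. The 0.96 usable-capacity factor is our inference from those worked numbers, not a documented Cisco parameter; treat it as an assumption of this sketch.

```python
def min_available_bandwidth_mbps(wan_mbps: float, gige_links: int,
                                 usable_fraction: float = 0.96) -> float:
    """Divide the usable WAN capacity evenly across the GigE links that
    share it. The 0.96 usable fraction is inferred from the worked example
    in the text (1 Gb WAN over 2 links -> 480 Mb each); it is an assumption
    of this sketch, not a documented constant."""
    return wan_mbps * usable_fraction / gige_links

print(min_available_bandwidth_mbps(1000, 2))  # 480.0, matching the text's example
print(min_available_bandwidth_mbps(1000, 4))  # 240.0
```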
Estimated Roundtrip-Time
This is the measured latency between the two MDS GigE interfaces. The following MDS command can be used to measure the RTT:
FCIPMDS2(config)# do ips measure-rtt 10.20.5.71 interface GigabitEthernet1/1
Round trip time is 106 micro seconds (0.11 milliseconds)
Only configure the measured latency when there is no WAN optimization appliance. When the MDS switch is connected to a WAN optimization appliance, leave the roundtrip setting at its default (1000 msec in the Management Console, 1 ms in the CLI).
FCIP Write Acceleration
Write Acceleration is used to help alleviate the effects of network latency. It can work with Port-Channels only when the Port-Channel is managed by Port-Channel Protocol (PCP). FCIP write acceleration can be enabled for multiple FCIP tunnels if the tunnels are part of a dynamic Port-Channel configured with channel mode active. FCIP write acceleration does not work if multiple non-Port-Channel ISLs exist with equal weight between the initiator and the target port.
Min Re-Transmit Timer
This is the amount of time that TCP waits before retransmitting. In environments where there may be high packet loss or congestion, this number may need to be adjusted to 4x the measured roundtrip time. Ping may be used to measure the round-trip latency between the two MDS switches.
Max Re-Transmissions
The maximum number of times that a packet is retransmitted before the TCP connection is closed.
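The relationships in the notes above (maximum window size from bandwidth and RTT, min-available-bandwidth at 50-80% of Max-Bandwidth, and a retransmit timer of roughly 4x the measured RTT) can be sketched as a small calculation. This is an illustrative helper, not a Cisco tool; the function name and unit choices are assumptions.

```python
# Illustrative sketch (not a Cisco utility): derive FCIP tunnel TCP values
# from the guidance above. The bits-to-bytes conversion is standard; the
# 50-80% and 4x-RTT rules come directly from the Notes in this section.

def fcip_tcp_parameters(max_bw_mbps, rtt_ms):
    """Suggest Cisco FCIP tunnel values from max bandwidth and measured RTT."""
    # The maximum window size is driven by the bandwidth-delay product:
    # bandwidth (bits/s) * RTT (s), converted to bytes.
    max_window_bytes = int(max_bw_mbps * 1_000_000 * (rtt_ms / 1000) / 8)
    return {
        "max_window_bytes": max_window_bytes,
        # Min-available-bandwidth: 50-80% of max-bandwidth (range shown).
        "min_avail_bw_mbps": (0.5 * max_bw_mbps, 0.8 * max_bw_mbps),
        # Min retransmit timer: ~4x measured RTT in lossy/congested networks.
        "min_retransmit_ms": 4 * rtt_ms,
    }

params = fcip_tcp_parameters(max_bw_mbps=1000, rtt_ms=10)
print(params["max_window_bytes"])   # 1 Gb/s at 10 ms RTT -> 1,250,000 bytes
```

For example, a 1 Gb/s link with 10 ms RTT yields a 1.25 MB maximum window and a 40 ms minimum retransmit timer under these rules.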
Basic guidelines
Consider the following guidelines when creating/utilizing multiple FCIP interfaces/profiles:
Gigabit Ethernet Interfaces support a single IP address.
Every FCIP profile must be uniquely addressable by an IP address and TCP port pair. Where FCIP profiles share a Gigabit Ethernet interface, the FCIP profiles must use different TCP port numbers.
FCIP Interface defines the physical FCIP link (local GigE port). If you add an FCIP Profile for TCP parameters and a local GigE IP address plus peer (remote) IP address to the FCIP Interface, it forms an FCIP Link or Tunnel. There are always two TCP connections (control plus data), and you can add one additional data TCP connection per FCIP link.
EMC recommends three FCIP interfaces per GigE port for best performance. More FCIP interfaces help improve SRDF link stability when there is high latency and/or packet loss (>100 ms/0.5%, regardless of whether latency and packet drop conditions exist together or only one exists). A dedicated FCIP profile per FCIP link is recommended.
Rules and restrictions
Consider the following rules and restrictions when enabling FCIP Write Acceleration:
It can work with Port-Channels only when the Port-Channel is managed by Port-Channel Protocol (PCP).
FCIP write acceleration can be enabled for multiple FCIP tunnels if the tunnels are part of a dynamic Port-Channel configured with channel mode active.
FCIP write acceleration does not work if multiple non-Port-Channel ISLs exist with equal weight between the initiator and the target port.
Do not enable time stamp control on an FCIP interface with write acceleration configured.
Write acceleration cannot be used across FSPF equal-cost paths in FCIP deployments. Also, FCIP write acceleration can be used in Port-Channels configured with channel mode active or constructed with Port-Channel Protocol (PCP).
References
For further information, refer to the following documentation on Cisco's website at http://www.cisco.com.
Wide Area Application Services Configuration Guide
Replication Acceleration Deployment Guide
Q&A for WAAS Replication Accelerator Mode
MDS 9000 Family CLI Configuration Guide
WAN Optimization Controllers

This chapter provides information on the following WAN Optimization Controller (WOC) appliances, along with Riverbed Granite, which is used in conjunction with Steelhead:
Riverbed Steelhead appliances
Riverbed Granite solution
Silver Peak appliances
Riverbed Steelhead appliances
This section provides information on the Riverbed Steelhead WAN Optimization Controller and the Riverbed system. The following topics are discussed:
Overview
Terminology
Notes
Features
Deployment topologies
Failure modes supported
FCIP environment
GigE environment
References
Overview
RiOS is the software that powers Riverbed's Steelhead WAN Optimization Controller. The optimization techniques RiOS utilizes are:
Data Streamlining
Transport Streamlining
Application Streamlining
Management Streamlining
RiOS uses a Riverbed proprietary algorithm called Scalable Data Referencing (SDR) along with data compression when optimizing data across the WAN. SDR breaks up TCP data streams into unique data chunks that are stored in the hard disk (data store) of the device running RiOS. Each data chunk is assigned a unique integer label (reference) before it is sent to a peer RiOS device across the WAN. When the same byte sequence is seen again in future transmissions from clients or servers, the reference is sent across the WAN instead of the raw data chunk. The peer RiOS device uses this reference to find the original data chunk on its data store, and reconstruct the original TCP data stream.
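As a rough illustration of this chunk-and-reference idea, the following toy sketch deduplicates a byte stream using fixed-size chunks and integer labels. It is not Riverbed's proprietary SDR (which uses content-defined chunking and a persistent disk data store); it only shows why repeated byte sequences cross the WAN as small references instead of raw data.

```python
# Toy reference-based deduplication in the spirit of SDR. Fixed-size
# chunks and in-memory dicts are simplifications made for illustration.

CHUNK = 8  # fixed chunk size for the sketch; real SDR chunks are content-defined

def sender_encode(data, store, next_ref):
    """Split data into chunks; send a reference for known chunks, else raw."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        if chunk in store:
            out.append(("ref", store[chunk]))     # seen before: tiny label
        else:
            store[chunk] = next_ref
            out.append(("raw", chunk, next_ref))  # first time: raw data + label
            next_ref += 1
    return out, next_ref

def receiver_decode(stream, store):
    """Rebuild the original byte stream from raw chunks and references."""
    data = b""
    for item in stream:
        if item[0] == "raw":
            _, chunk, ref = item
            store[ref] = chunk   # learn the chunk for future references
            data += chunk
        else:
            data += store[item[1]]
    return data

tx_store, rx_store, ref = {}, {}, 0
msg = b"ABCDEFGH" * 3                      # repeated pattern
stream, ref = sender_encode(msg, tx_store, ref)
assert receiver_decode(stream, rx_store) == msg
```

Here only the first occurrence of the pattern crosses as raw bytes; the two repeats travel as `("ref", 0)` tuples.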
After a data pattern is stored on the disk of a Steelhead appliance, it can be leveraged for transfers to any other Steelhead appliance across all applications being accelerated by Data Streamlining. Data Streamlining also includes optional QoS enforcement. QoS enforcement can be applied to both optimized and unoptimized traffic, both TCP and UDP.
Steelhead appliances also use a generic latency optimization technique called Transport Streamlining. Transport Streamlining uses a set of standard and proprietary techniques to optimize TCP traffic between Steelhead appliances. These techniques ensure that efficient retransmission methods, such as TCP selective acknowledgments, are used and that optimal TCP window sizes are used to minimize the impact of latency and maximize throughput across WAN links.
Transport Streamlining ensures that there is always a one-to-one ratio for active TCP connections between Steelhead appliances, and the TCP connections to clients and servers. That is, Steelhead appliances do not tunnel or perform multiplexing and de-multiplexing of data across connections. This is true regardless of the WAN visibility mode in use.
Terminology
Consider the following terminology when using Riverbed configuration settings:
Adaptive Compression: Detects LZ data compression performance for a connection dynamically and turns it off (sets the compression level to 0) momentarily if it is not achieving optimal results. Improves end-to-end throughput over the LAN by maximizing the WAN throughput. By default, this setting is disabled.
Adaptive Data Streamlining Mode SDR-M: RiOS uses a Riverbed proprietary algorithm called Scalable Data Referencing (SDR). SDR breaks up TCP data streams into unique data chunks that are stored in the hard disk (data store) of the device running RiOS. Each data chunk is assigned a unique integer label (reference) before it is sent to a peer RiOS device across the WAN. When the same byte sequence is seen again in future transmissions from clients or servers, the reference is sent across the WAN instead of the raw data chunk. The peer RiOS device uses this reference to find the original data chunk on its data store, and reconstruct the original TCP data stream. SDR-M performs data reduction entirely in memory, which prevents the Steelhead appliance from reading and writing to and from the disk. Enabling this option can yield high LAN-side throughput because it eliminates all disk latency. SDR-M is most efficient when used between two identical high-end Steelhead appliance models; for example, 6050 - 6050. When used between two different Steelhead appliance models, the smaller model limits the performance.
IMPORTANT
You cannot use peer data store synchronization with SDR-M. In code stream 5.0.x, this must be set from the CLI by running "datastore anchor-select 1033" and then "restart clean."
Compression Level: Specifies the relative trade-off of data compression for LAN throughput speed. Generally, a lower number provides faster throughput and slightly less data reduction. Select a data store compression value of 1 (minimum compression, uses less CPU) through 9 (maximum compression, uses more CPU) from the drop-down list. The default value is 1. Riverbed recommends setting the compression level to 1 in high-throughput environments such as data center to data center replication.
Correct Addressing: Turns WAN visibility off. Correct addressing uses Steelhead appliance IP addresses and port numbers in the TCP/IP packet header fields for optimized traffic in both directions across the WAN. This is the default setting. Also see "WAN Visibility Mode/CA."
Data Store Segment Replacement Policy: Specifies a replacement algorithm that replaces the least recently used data in the data store, which improves hit rates when the data in the data store are not equally used. The default and recommended setting is Riverbed LRU.
Guaranteed Bandwidth %: Specify the minimum amount of bandwidth (as a percentage) to guarantee to a traffic class when there is bandwidth contention. All of the classes combined cannot exceed 100%. During contention for bandwidth, the class is guaranteed the amount of bandwidth specified. The class receives more bandwidth if there is unused bandwidth remaining.
In-Path Rule Type/Auto-Discover: Uses the auto-discovery process to determine if a remote Steelhead appliance is able to optimize the connection attempting to be created by this SYN packet. By default, auto-discover is applied to all IP addresses and ports that are not secure, interactive, or default Riverbed ports. Defining in-path rules modifies this default setting.
Multi-Core Balancing: Enables multi-core balancing, which ensures better distribution of workload across all CPUs, thereby maximizing throughput by keeping all CPUs busy. Core balancing is useful when handling a small number of high-throughput connections (approximately 25 or less). By default, this setting is disabled. In the 5.0.x code stream, this needs to be performed from the CLI by running: "datastore traffic-load rule scraddr all scrport 0 dstaddr all dstport "1748" encode "med".
Neural Framing Mode: Neural framing enables the system to select the optimal packet framing boundaries for SDR. Neural framing creates a set of heuristics to intelligently determine the optimal moment to flush TCP buffers. The system continuously evaluates these heuristics and uses the optimal heuristic to maximize the amount of buffered data transmitted in each flush, while minimizing the amount of idle time that the data sits in the buffer.
For different types of traffic, one algorithm might be better than others. The considerations include: latency added to the connection, compression, and SDR performance.
You can specify the following neural framing settings:
Never: Never use the Nagle algorithm. All the data is immediately encoded without waiting for timers to fire or application buffers to fill past a specified threshold. Neural heuristics are computed in this mode but are not used.
Always: Always use the Nagle algorithm. All data is passed to the codec, which attempts to coalesce consume calls (if needed) to achieve better fingerprinting. A timer (6 ms) backs up the codec and causes leftover data to be consumed. Neural heuristics are computed in this mode but are not used.
TCP Hints: This is the default setting, which is based on the TCP hints. If data is received from a partial frame packet or a packet with the TCP PUSH flag set, the encoder encodes the data instead of immediately coalescing it. Neural heuristics are computed in this mode but are not used.
Dynamic: Dynamically adjust the Nagle parameters. In this option, the system discerns the optimum algorithm for a particular type of traffic and switches to the best algorithm based on traffic characteristic changes.
Optimization Policy: When configuring In-Path Rules, you have the option of configuring the optimization policy. There are multiple options that can be selected, and it is recommended to set this option to "Normal" for EMC replication protocols, such as SRDF/A. The configurable options are as follows:
Normal: Perform LZ compression and SDR
SDR-Only: Perform SDR; do not perform LZ compression
Compression-Only: Perform LZ compression; do not perform SDR
None: Do not perform SDR or LZ compression
Queue - MXTCP: When creating QoS Classes, you will need to specify a queuing method. MXTCP has very different use cases than the other queue parameters.
MXTCP also has secondary effects that you need to understand before configuring, including:
When optimized traffic is mapped into a QoS class with the MXTCP queuing parameter, the TCP congestion control mechanism for that traffic is altered on the Steelhead appliance. The normal TCP behavior of reducing the outbound sending rate when detecting congestion or packet loss is disabled, and the outbound rate is made to match the minimum guaranteed bandwidth configured on the QoS class.
You can use MXTCP to achieve high-throughput rates even when the physical medium carrying the traffic has high loss rates. For example, MXTCP is commonly used for ensuring high throughput on satellite connections where a lower-layer loss recovery technique is not in use.
Another usage of MXTCP is to achieve high throughput over high-bandwidth, high-latency links, especially when intermediate routers do not have properly tuned interface buffers. Improperly tuned router buffers cause TCP to perceive congestion in the network, resulting in unnecessarily dropped packets, even when the network can support high throughput rates.
IMPORTANT
Use caution when specifying MXTCP. The outbound rate for the optimized traffic in the configured QoS class immediately increases to the specified bandwidth, and does not decrease in the presence of network congestion. The Steelhead appliance always tries to transmit traffic at the specified rate.
If no QoS mechanism (either parent classes on the Steelhead appliance, or another QoS mechanism in the WAN or WAN infrastructure) is in use to protect other traffic, that other traffic might be impacted by MXTCP not backing off to fairly share bandwidth. When MXTCP is configured as the queue parameter for a QoS class, the following parameters for that class are also affected:
Link share weight: Prior to RiOS 8.0.x, the link share weight parameter has no effect on a QoS class configured with MXTCP. With RiOS 8.0.x and later, Adaptive MXTCP allows the link share weight settings to function for MXTCP QoS classes.
Upper limit: Prior to RiOS 8.0.x, the upper limit parameter has no effect on a QoS class configured with MXTCP. With RiOS 8.0.x and later, Adaptive MXTCP allows the upper limit settings to function for MXTCP QoS classes.
Reset Existing Client Connections on Start-Up: Enables kickoff. If you enable kickoff, connections that exist when the Steelhead service is started and restarted are disconnected. When the connections are retried, they are optimized. If kickoff is enabled, all connections that existed before the Steelhead appliance started are reset.
WAN Visibility Mode/CA: Enables WAN visibility, which pertains to how packets traversing the WAN are addressed. RiOS v5.0 or later offers three types of WAN visibility modes: correct addressing, port transparency, and full address transparency. You configure WAN visibility on the client-side Steelhead appliance (where the connection is initiated). The server-side Steelhead appliance must also support WAN visibility (RiOS v5.0 or later). Also see "Correct Addressing."
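The least-recently-used behavior behind the Riverbed LRU data store replacement policy described in the terminology above can be sketched with a textbook LRU cache. The actual policy is proprietary; this shows only the generic mechanism, using Python's OrderedDict for O(1) recency updates.

```python
# Minimal LRU replacement sketch, assuming the common textbook scheme.
# "Riverbed LRU" is certainly more elaborate; this is only illustrative.

from collections import OrderedDict

class LRUStore:
    """Fixed-capacity store that evicts the least recently used segment."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.segments = OrderedDict()

    def get(self, key):
        if key not in self.segments:
            return None
        self.segments.move_to_end(key)         # mark as most recently used
        return self.segments[key]

    def put(self, key, value):
        if key in self.segments:
            self.segments.move_to_end(key)
        self.segments[key] = value
        if len(self.segments) > self.capacity:
            self.segments.popitem(last=False)  # evict least recently used

store = LRUStore(2)
store.put("a", b"chunk-a")
store.put("b", b"chunk-b")
store.get("a")                 # touch "a", so "b" becomes least recently used
store.put("c", b"chunk-c")     # over capacity: evicts "b"
print(store.get("b"))          # None
```

Frequently referenced segments stay resident, which is why LRU improves hit rates when data in the store is not equally used.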
Notes
Consider the following when using Riverbed configuration settings:
LAN Send and Receive Buffer Size should be configured to 2 MB
WAN Send and Receive Buffer Size is environment dependent and should be configured with the result of the following formula:
WAN BW * RTT * 2 / 8 = xxxxxxx bytes
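The formula above is twice the bandwidth-delay product expressed in bytes. The sketch below applies it directly; the function name and the unit choices (Mb/s and milliseconds) are assumptions, not Riverbed conventions.

```python
# Hedged sketch of the WAN buffer formula: buffer = BW * RTT * 2 / 8,
# i.e. twice the bandwidth-delay product, converted from bits to bytes.

def wan_buffer_bytes(bw_mbps, rtt_ms):
    """WAN send/receive buffer size: 2x the bandwidth-delay product."""
    bw_bps = bw_mbps * 1_000_000          # megabits/s -> bits/s
    rtt_s = rtt_ms / 1000                 # ms -> s
    return int(bw_bps * rtt_s * 2 / 8)    # bits -> bytes, doubled

# Example: a 100 Mb/s WAN with 80 ms RTT needs 2,000,000-byte buffers.
print(wan_buffer_bytes(100, 80))  # 2000000
```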
Features
Features include:
SDR (Scalable Data Referencing)
Compression
QoS (Quality of Service)
Data / Transport / Application / Management Streamlining
Encryption - IPsec
Deployment topologies
Deployment topologies include:
In-Path
Physical In-Path
Virtual In-Path
WCCPv2 (Web Cache Coordination Protocol)
PBR (Policy-Based-Routing)
Out-of-Path
Proxy
Steelhead DX 8000, Steelhead CX 7055/5055/1555, and Steelhead 7050/6050/5050 appliances also support 10 GbE Fibre ports
The virtual Steelheads are supported when deployed on VMware ESX or ESXi servers. The virtual appliances can only be deployed in out-of-path configurations.
Failure modes supported
The following failure modes are supported:
Fail-to-wire
Fail-to-block
FCIP environment
The following Riverbed configuration settings are recommended in an FCIP environment:
Configure > Networking > QoS Classification:
QoS Classification and Enforcement = Enabled
QoS Mode = Flat
QoS Network Interface with WAN throughput = Enabled for appropriate WAN interface and set available WAN Bandwidth
QoS Class Latency Priority = Real Time
QoS Class Guaranteed Bandwidth % = Environment dependent
QoS Class Link Share Weight = Environment dependent
QoS Class Upper Bandwidth % = Environment dependent
Queue = MXTCP
QoS Rule Protocol = All
QoS Rule Traffic Type = Optimized
DSCP = All
VLAN = All
Configure > Optimization > General Service Settings:
In-Path Support = Enabled
Reset Existing Client Connections on Start-Up = Enabled
Enable In-Path Optimizations on Interface In-Path_X_X for appropriate In-Path interface
In RiOS v5.5.3 CLI or later: "datastore codec multi-codec encoder max-ackqlen 30"
In RiOS v6.0.1a or later: "datastore codec multi-codec encoder global-txn-max 128"
In RiOS v6.0.1a or later: "datastore sdr-policy sdr-m"
In RiOS v6.0.1a or later: "datastore codec multi-core-bal"
In RiOS v6.0.1a or later: "datastore codec compression level 1"
Configure > Optimization > In-Path Rules:
Type = Auto Discovery
Preoptimization Policy = None
Optimization Policy = Normal
Latency Optimization Policy = Normal
Neural Framing Mode = Never
WAN Visibility = Correct Addressing
In RiOS v5.5.3 CLI or later for FCIP: in-path always-probe enable
In RiOS v5.5.3 CLI or later for FCIP: in-path always-probe port 3225
In RiOS v6.0.1a or later: "in-path always-probe port 0"
In RiOS v6.0.1a or later: "tcp adv-win-scale -1"
In RiOS v6.0.1a or later: "in-path kickoff-resume"
In RiOS v6.0.1a or later: "protocol FCIP enable" for FCIP
In RiOS v6.0.1a or later: "protocol srdf enable" for Symmetrix DMX and VMAX
Or, in RiOS v6.1.1a or later, you can use the GUI as follows:
Configure > Optimization > FCIP
- FCIP Settings
- Enable FCIP
- FCIP Ports: 3225, 3226, 3227, 3228
In RiOS v6.0.1a or later: "protocol fcip rule src-ip 0.0.0.0 dst-ip 0.0.0.0 dif enable" for EMC Symmetrix VMAX
Or, in RiOS v6.1.1a or later, you can use the GUI as follows:
Rules > Add a New Rule
- Enable DIF if R1 and R2 are VMAX and hosts are Open Systems or IBM iSeries (AS/400)
- DIF Data Block Size: 512 bytes (Open Systems) and 520 bytes (IBM iSeries, AS/400)
- No DIF setting is required if mainframe hosts are in use
In RiOS v6.0.1i or later: "sport splice-policy outer-rst-port port 3226" for Brocade FCIP only
Configure > Optimization > Performance:
High Speed TCP = Enabled
LAN Send Buffer Size = 2097152
LAN Receive Buffer Size = 2097152
WAN Default Send Buffer Size = 2*BDP (BW * RTT * 2 / 8 = xxxxxxx bytes)
Note: BDP = Bandwidth delay product.
WAN Default Rcv Buffer Size = 2*BDP (BW * RTT * 2 / 8 = xxxxxxx bytes)
Data Store Segment Replacement Policy = Riverbed LRU
Adaptive Data Streamlining Modes = SDR-Default
Note: The latest appliances, which use an SSD-based data store, will achieve high throughput with standard SDR (SDR-Default). For appliances with a legacy disk-based data store, use SDR-M.
Compression Level = 1
Adaptive Compression = Disabled
Multi-Core Balancing = Enabled
Note: Multi-Core Balancing should be disabled if there are 16 or more data-bearing connections (i.e., exclusive of control connections, such as those commonly established by FCIP gateways).
Note: The maximum latency (round-trip time) and packet drop supported on Cisco FCIP links are 100 ms round trip and 0.5% packet drop. The limit is the same regardless of whether latency and packet drop conditions exist together or only one of them exists. This limitation only applies to the baseline (without WAN OPT appliances). With WAN OPT appliances and proper configurations, RTT and packet loss can be extended beyond that limitation. Up to 200 ms round trip and 1% packet drop were qualified by EMC E-Lab.
GigE environment
The following Riverbed configuration settings are recommended in a GigE environment:
In RiOS v6.1.1a or later, Steelheads will be able to automatically detect and disable the Symmetrix VMAX and DMX compression by default. Use show log from the Steelhead to verify that compression on the VMAX/DMX has been disabled. The "Native Symmetrix RE port compression detected: auto-disabling" message will display only on the Steelhead adjacent to the Symmetrix (either the local or remote side) that initiates the connection.
With Riverbed firmware v6.1.3a and above, the SRDF Selective Optimization feature is supported for SRDF group-level optimization for end-to-end GigE environments with VMAX which have EMC Enginuity v5875 and later. Refer to the Riverbed Steelhead Deployment and CLI Guide for further instructions.
Configure > Networking > Outbound QoS (Advanced):
QoS Classification and Enforcement = Enabled
QoS Mode = Flat or Hierarchical
QoS classes are configured in one of two different modes: flat or hierarchical. The difference between the two modes primarily consists of how QoS classes are created. In RiOS v8.0 or later, the Hierarchical mode is recommended.
QoS Network Interface with WAN throughput = Enabled for appropriate WAN interfaces and set to available WAN Bandwidth
QoS Class Latency Priority = Real Time
QoS Class Guaranteed Bandwidth % = Environment dependent
QoS Class Link Share Weight = Environment dependent
QoS Class Upper Bandwidth % = Environment dependent
Queue = MXTCP
QoS Rule Protocol = All
QoS Rule Traffic Type = Optimized
DSCP = Reflect
Configure > Optimization > General Service Settings:
In-Path Support = Enabled
Reset Existing Client Connections on Start-Up = Enabled
Enable In-Path Optimizations on Interface In-Path_X_X
In RiOS v5.5.3 CLI and later: "datastore codec multi-codec encoder max-ackqlen 30"
In RiOS v6.0.1a CLI or later: "datastore codec multi-codec encoder global-txn-max 128"
Configure > Optimization > In-Path Rules:
Type = Auto Discovery
Preoptimization Policy = None
Optimization Policy = Normal
Latency Optimization Policy = Normal
Cloud Acceleration = Auto
Neural Framing Mode = Never
WAN Visibility = Correct Addressing
Auto Kickoff = Enabled
In RiOS v5.5.3 CLI or later for GigE: in-path always-probe enable
In RiOS v5.5.3 CLI or later for GigE: in-path always-probe port 1748
In RiOS v5.0.5-DR CLI or later for GigE: in-path asyn-srdf always-probe enable
In RiOS v6.0.1a or later: "in-path always-probe port 0"
In RiOS v6.0.1a or later: "tcp adv-win-scale -1"
In RiOS v6.0.1a or later: "protocol srdf enable" for Symmetrix DMX and VMAX
Or, in RiOS v6.1.1a or later, you can use the GUI as follows:
Configure > Optimization > SRDF
- SRDF Settings
- Enable SRDF
- SRDF Ports: 1748
In RiOS v6.0.1a or later: "protocol srdf rule src-ip 0.0.0.0 dst-ip 0.0.0.0 dif enable" for Symmetrix VMAX
Or, in RiOS v6.1.1a or later, you can use the GUI as follows:
Rules > Add a New Rule
- Enable DIF if R1 and R2 are VMAX and hosts are Open Systems or IBM iSeries (AS/400)
- DIF Data Block Size: 512 bytes (Open Systems) and 520 bytes (IBM iSeries, AS/400)
Configure > Optimization > Transport Settings:
Transport Optimization = Standard TCP
LAN Send Buffer Size = 2097152
WAN Default Send Buffer Size = 2*BDP (BW * RTT * 2 / 8 = xxxxxxx bytes)
Configure > Optimization > Performance:
WAN Default Rcv Buffer Size = 2*BDP (BW * RTT * 2 / 8 = xxxxxxx bytes)
Data Store Segment Replacement Policy = Riverbed LRU
Adaptive Data Streamlining Modes = SDR-Default
Note: The latest appliances, which use an SSD-based data store, will achieve high throughput with standard SDR (SDR-Default). For appliances with a legacy disk-based data store, use SDR-M.
Compression Level = 1
Adaptive Compression = Disabled
Multi-Core Balancing = Enabled
Note: Multi-Core Balancing should be disabled if there are 16 or more data-bearing connections (i.e., exclusive of control connections, such as those commonly established by FCIP gateways).
References
For more information about the Riverbed Steelhead WAN Optimization Controller and the Riverbed system, refer to Riverbed's website at http://www.riverbed.com.
Steelhead Appliance Deployment Guide
Steelhead Appliance Installation and Configuration Guide
Riverbed Command-Line Interface Reference Manual
Riverbed Granite solution
This section provides the following information on the Riverbed Granite solution:
Overview
Features
Configuring Granite Core High Availability
Deployment topologies
Configuring iSCSI settings on EMC storage
Configuring iSCSI initiator on Granite Core
Configuring iSCSI portal
Configuring LUNs
Configuring local LUNs
Adding Granite Edge appliances
Configuring CHAP users
Confirming connection to the Granite Edge appliance
References
Overview
Riverbed Granite is a block storage optimization and consolidation system. It consolidates all storage at the data center and creates diskless branches. Granite is designed to enable edge server systems to efficiently access storage arrays over the WAN as if they were locally attached.
The Granite solution is deployed in conjunction with Steelhead appliances and consists of two components:
Granite Core: A physical or virtual appliance in the data center; it mounts from the back-end storage array all the LUNs that need to be made available to applications and servers at a remote location. Granite Core appliances make those LUNs available across the WAN in the branch via the Granite Edge module on a Steelhead EX or a standalone Granite Edge appliance.
Granite Edge: A module that runs on a Steelhead EX appliance in the branch office; it presents storage LUNs projected from the data center as local LUNs to applications and servers on the local branch network, and operates as a block cache to ensure local performance.
The Granite Edge appliance also connects to the blockstore, a persistent local cache of storage blocks. When the edge server requests blocks, those blocks are served locally from the blockstore (unless they are not present, in which case they are requested from the data center LUN). Similarly, newly written blocks are spooled to the local cache, acknowledged to the edge server, and then asynchronously propagated to the data center. Because each Granite Edge appliance implementation is linked to a dedicated LUN at the data center, the blockstore is authoritative for both reads and writes, and can tolerate WAN outages without worrying about cache coherency.
Blocks are communicated between the Granite Edge appliance, Granite Core, and the data center LUN via an internal protocol. (Optionally, this traffic can be further optimized by Steelheads for improved performance.)
For SCSI writes: Granite Edge acknowledges all writes locally to ensure high-speed ("local") write performance. Granite maintains a write block journal and preserves block-write order to keep data consistent in case of a power failure or WAN outage.
For SCSI reads: The Granite Edge cache is warmed with active data blocks delivered by Granite Core in the data center, which performs predictive prefetch to ensure required data is quickly delivered. Alternatively, a LUN can be "pinned" to the edge cache and prepopulated with all data from a data center LUN.
Granite initially populates the blockstore in several possible ways:
Reactive prefetch: The system observes block requests, applies heuristics based on these observations to intelligently predict the blocks most likely to be requested in the near future, and then requests those blocks from the data center LUN in advance.
Policy-based prefetch: Configured policies identify in advance the set of blocks that are likely to be requested at a given edge site, and those blocks are then requested from the data center LUN in advance.
First request: Blocks are added to the blockstore when first requested. Because the first request is cold, it is subject to standard WAN latency.
Figure 3 shows an example of a multi-site Granite deployment.
Figure 3 Riverbed Granite example
Features
This section briefly describes Riverbed Granite features.
Granite Prediction and Prefetch
To deliver high performance when accessing block storage over the WAN, Granite brings file system awareness to the block layer. File system awareness enables intelligent block prefetch that addresses both high latency and the inherently random nature of I/O at the block layer, accelerating block storage access across distance. Granite Core performs block-level prefetch from the back-end storage array and actively pushes blocks to the Granite Edge to keep its cache warm with a working set of data. To accomplish this prefetch, Granite first establishes file system context at the block layer. For Windows servers (physical or virtual), for instance, Granite Core traverses the NTFS Master File Table (MFT) to build a two-way map of the file system - blocks to files, and files to blocks. This map is used to determine what to prefetch. By intelligently inspecting block access requests from the application/host file system iSCSI initiator, Granite algorithms predict the next logical file system block or a cluster of blocks to prefetch. This process provides seamless access to data center LUNs and ensures that operations like file access and large directory browsing are accelerated across a WAN.
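The block-to-file mapping and prefetch idea described above can be modeled in a few lines. The layout, names, and the "prefetch the rest of the file" heuristic below are invented for illustration; Granite's actual MFT traversal and prediction algorithms are proprietary.

```python
# Toy model of file-system-aware prefetch: a two-way map between files
# and blocks, plus a simple "fetch the rest of the same file" heuristic.

def build_maps(file_to_blocks):
    """Derive the reverse map (blocks -> files) from files -> blocks."""
    block_to_file = {}
    for fname, blocks in file_to_blocks.items():
        for b in blocks:
            block_to_file[b] = fname
    return block_to_file

def prefetch_candidates(requested_block, file_to_blocks, block_to_file):
    """On a block request, prefetch the remaining blocks of the same file."""
    fname = block_to_file.get(requested_block)
    if fname is None:
        return []  # unmapped block: nothing to predict
    return [b for b in file_to_blocks[fname] if b != requested_block]

# Hypothetical layout: "report.doc" occupies blocks 7, 12, and 40.
layout = {"report.doc": [7, 12, 40], "log.txt": [3, 4]}
b2f = build_maps(layout)
print(prefetch_candidates(12, layout, b2f))  # [7, 40]
```

A request for one block of a file triggers prefetch of that file's other blocks, which is why whole-file operations like directory browsing feel local.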
Granite Edge Blockstore cache
To eliminate latency introduced by the WAN, the Granite appliance in the branch presents a write-back block cache, called the blockstore. Block writes by applications and hosts at the edge are acknowledged locally by the blockstore and then asynchronously flushed back to the data center. This enables application and file system initiator hosts in the branch to make forward progress without being impacted by WAN latency. As blocks are received, written to disk, and acknowledged, the written blocks are also journaled in write order to a log. This log file is used to maintain the block-write order to ensure data consistency in case of a crash or WAN outage. When the connection is restored, Granite Edge plays the blocks in logged write order to Granite Core, which commits the blocks to the physical LUN on the back-end storage array. The combination of block journaling and write-order preservation enables Granite Edge to continue serving write functions in the branch during a WAN disconnection.
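A toy model of the write-back blockstore with a write-order journal might look like the following. All class and method names are invented; the point is the pattern: acknowledge writes locally, journal them in order, and replay the journal to the data center LUN when the WAN returns.

```python
# Toy write-back blockstore with a write-order journal, mirroring the
# behavior described above. Not Granite code; purely illustrative.

class EdgeBlockstore:
    def __init__(self):
        self.cache = {}        # local block cache (the blockstore)
        self.journal = []      # write-order journal of dirty blocks
        self.datacenter = {}   # stand-in for the data center LUN

    def write(self, block_id, data):
        """Acknowledge locally and journal; do not wait on the WAN."""
        self.cache[block_id] = data
        self.journal.append((block_id, data))
        return "ack"           # local, LAN-speed acknowledgement

    def read(self, block_id, wan_up=True):
        """Serve from cache; fall back to the data center LUN if reachable."""
        if block_id in self.cache:
            return self.cache[block_id]
        if wan_up:
            data = self.datacenter.get(block_id)
            if data is not None:
                self.cache[block_id] = data   # warm the blockstore
            return data
        return None            # cache miss during a WAN outage

    def flush(self):
        """Replay the journal in write order to the data center LUN."""
        for block_id, data in self.journal:
            self.datacenter[block_id] = data
        self.journal.clear()

bs = EdgeBlockstore()
bs.write(1, b"edge-data")   # acknowledged locally even if the WAN is down
bs.flush()                  # WAN restored: propagate in logged write order
print(bs.datacenter[1])
```

Replaying the journal in order is what preserves block-write ordering, and hence data consistency, across a crash or WAN outage.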
LUN pinning
A storage LUN provisioned via Granite can be deployed in two different modes, pinned and unpinned. Pinned mode caches 100% of the data blocks on the Steelhead EX appliance at the branch. This ensures that the contents of specified storage LUNs are maintained at the edge to support business operations in the event of a WAN outage. Unpinned mode maintains only a working set of the most frequently accessed blocks at the branch.
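The pinned/unpinned distinction amounts to a cache eviction policy, which can be sketched as below. The class is hypothetical and uses a simple LRU rule for the unpinned working set; the actual product's eviction policy is not documented here.

```python
# Illustrative sketch of pinned vs. unpinned LUN caching (hypothetical
# model). An unpinned LUN keeps only a bounded working set and evicts
# least-recently-used blocks; a pinned LUN is never subject to eviction.

from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity, pinned=False):
        self.capacity = capacity
        self.pinned = pinned
        self.blocks = OrderedDict()  # block address -> data, in LRU order

    def put(self, addr, data):
        self.blocks[addr] = data
        self.blocks.move_to_end(addr)  # mark as most recently used
        # Only an unpinned cache is subject to eviction.
        if not self.pinned and len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used


unpinned = EdgeCache(capacity=2)
pinned = EdgeCache(capacity=2, pinned=True)
for addr in (1, 2, 3):
    unpinned.put(addr, b"x")
    pinned.put(addr, b"x")
print(sorted(unpinned.blocks))  # [2, 3] -- block 1 was evicted
print(sorted(pinned.blocks))    # [1, 2, 3] -- pinned blocks all remain
```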
Disconnected operations
In some cases, the WAN connection may suffer an outage or may be unpredictable. That means that branch-resident applications might behave unpredictably in the case of a block cache miss which cannot be serviced from the data center LUN due to the WAN outage. For such cases, Granite LUN pinning is recommended. This will ensure that the contents of an entire LUN are prefetched from the data center and prepopulated at the edge to ensure a 100% hit rate on Granite Edge. In LUN-pinning mode, no reads are sent across the wire to the data center, although dirty blocks (changed/newly-written blocks) are preserved in a persistent log and flushed to the data center when WAN connectivity is restored, ensuring consolidation and protection of newly created data. The result is high performance for applications during a WAN outage, while extending continuous data protection for the edge.
Boot over the WAN
VMware vSphere virtual server technology combined with Granite now makes it possible to boot over the WAN to provide instant provisioning and fast recovery capabilities for edge locations. A bootable LUN in the data center is mapped to a host in a branch office. The host can either be a separate ESXi server or the Steelhead EX embedded VSP. Granite Core detects the LUN as a VMFS file system with an embedded NTFS file system virtual machine workload and upon further inspection learns the block sequence of both.
Once the boot process on the branch host starts, blocks for the Windows file server virtual machine are requested from across the WAN. Granite Core recognizes these requests and prefetches all of the required block clusters from the data center provisioned LUN and pushes them to the Granite Edge appliance at the branch, ensuring local performance for the boot operation.
Configuring Granite Core High Availability
You can configure high availability between two Granite Core appliances using the Management Console of either appliance.
When in a failover relationship, both appliances operate independently, rather than either one being in standby mode. When either appliance fails, the failover peer manages the traffic of both appliances.
If you configure the current Granite Core appliance for failover with another appliance, all storage configuration and storage report pages include an additional feature that enables you to access and modify settings for both the current appliance and the failover peer.
This feature appears below the page title and includes the text "Device failover is enabled. You are currently viewing configuration and reports for...." You can then select either Self (the current appliance) or Peer from the drop-down list.
Figure 4 shows a sample storage configuration page with the feature enabled.
Figure 4 Sample storage configuration pages
You can configure two appliances for failover in the Failover Configuration page.
Note: For failover, Riverbed recommends connecting both failover peers directly with cables via two interfaces. If direct connection is not an option, Riverbed recommends that each failover connection use a different local interface and reach its peer IP address via a completely separate route.
To configure a peer appliance, complete the following steps:
1. Log in to the Management Console of one of the appliances to be configured for high availability.
2. Choose Configure > Failover Configuration to display the Failover Configuration page.
3. Enter the peer IP address in the Peer IP field.
4. Specify the failover peer using the controls described in the following table.
Component Description
Peer IP Address Specify the IP address of the peer appliance.
Local Interface Optionally, specify a local interface over which the heartbeat is monitored.
Second Peer IP Address Specify a second IP address of the peer appliance.
Second Local Interface Optionally, specify a second local interface over which the heartbeat is monitored.
Enable Failover Enables the new failover configuration.

After failover has been configured, you can access configuration settings for both appliances from either appliance.
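The active-active takeover behavior described above can be modeled simply. This is a hypothetical sketch of the failover semantics - both peers serve traffic independently, and a surviving peer absorbs its failed peer's workload when heartbeats stop - not Riverbed's actual mechanism.

```python
# Illustrative model of active-active failover (hypothetical names).
# Each appliance manages its own workload; when heartbeats from the
# failover peer stop arriving, the survivor manages both.

class CorePeer:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.serving = {name}  # workloads this appliance currently manages

    def on_heartbeat_timeout(self, peer):
        """Called when heartbeats from the failover peer stop arriving."""
        if not peer.alive:
            self.serving |= peer.serving  # take over the peer's traffic


a = CorePeer("core-a")
b = CorePeer("core-b")
b.alive = False            # peer fails
a.on_heartbeat_timeout(b)  # surviving peer absorbs its workload
print(sorted(a.serving))   # ['core-a', 'core-b']
```

Monitoring the heartbeat over two separate local interfaces, as the note above recommends, reduces the chance that a single failed link is mistaken for a failed peer.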
Deployment topologies
Figure 5 illustrates a generic Granite deployment.
Figure 5 Granite deployment example
The basic system components are:
Microsoft Windows Branch Server - The branch-side server that accesses data from the Granite system instead of a local storage device.

Blockstore - The blockstore is a persistent local cache of storage blocks. Because each Granite Edge appliance is linked to a dedicated LUN at the data center, the blockstore is generally authoritative for both reads and writes. In the above diagram, the blockstore on the branch side is synchronized with LUN1 at the data center.

iSCSI Initiator - The iSCSI initiator is the branch-side client that sends SCSI commands to the iSCSI target at the data center.

Granite-enabled Steelhead EX appliance - Also referred to as a Granite Edge appliance, the branch-side component of the Granite system links the edge server to the blockstore and links the blockstore to the iSCSI target and LUN at the data center. The Steelhead provides general optimization services.

Data Center Steelhead appliance - The data center-side Steelhead peer for general optimization.

Granite Core - The data center component of the Granite system. Granite Core manages block transfers between the LUN and the Granite Edge appliance.

iSCSI Target - The data center-side server that communicates with the branch-side iSCSI initiator.

LUNs - Each Granite Edge appliance requires a dedicated LUN in the data center storage configuration.
Configuring iSCSI settings on EMC storage
For instructions on how to configure iSCSI, refer to the iSCSI SAN Topologies TechBook, available on the E-Lab Interoperability Navigator, Topology Resource Center tab, at http://elabnavigator.EMC.com. The Use Case Scenarios chapter describes the steps to configure iSCSI storage on the EMC VNX and VMAX arrays. Follow the steps for a Linux iSCSI host.
After the arrays are configured, the Granite Cores are configured using the steps described next in Configuring iSCSI initiator on Granite Core on page 52.
Configuring iSCSI initiator on Granite Core
To configure the iSCSI initiator, complete the following steps:
1. Choose Configure > Storage > iSCSI Configuration to display the iSCSI Configuration page.
2. Under iSCSI Initiator Configuration, configure authentication using the controls described in the following table.
Control Description
Initiator Name Specify the name of the initiator to be configured.
Enable Header Digest Includes the header digest data in the iSCSI PDU.
Enable Data Digest Includes the data digest data in the iSCSI PDU.
Enable Mutual CHAP Authentication Enables CHAP (Challenge-Handshake Authentication Protocol) authentication. If you select this option, an additional setting appears for specifying the mutual CHAP user. You can either select an existing user from the drop-down list or create a new CHAP user definition dynamically. Note: CHAP authenticates a user or network host to an authenticating entity. CHAP provides protection against playback attack by the peer through the use of an incrementally changing identifier and a variable challenge value.
Apply Applies the changes to the running configuration.
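The replay protection noted in the table can be made concrete with the standard CHAP computation. Per RFC 1994, the responder hashes a one-octet identifier, the shared secret, and the challenge; because the identifier and challenge change on every exchange, a captured response is useless later. The secret value below is, of course, illustrative.

```python
# Standard CHAP response computation (RFC 1994): the responder proves
# knowledge of the shared secret without sending it, by hashing
# MD5(identifier || secret || challenge).

import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()


secret = b"shared-chap-secret"        # hypothetical shared secret
ident, challenge = 1, os.urandom(16)  # fresh per exchange

resp = chap_response(ident, secret, challenge)
# The authenticator recomputes the hash and compares:
print(resp == chap_response(ident, secret, challenge))  # True
# A replayed response fails once the identifier and challenge change:
print(resp == chap_response(2, secret, os.urandom(16)))  # False
```

With mutual CHAP, the same exchange also runs in the reverse direction, so the target proves itself to the initiator as well.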
Configuring iSCSI portal
To configure an iSCSI portal, complete the following steps:
1. Choose Configure > Storage > iSCSI Configuration to display the iSCSI Configuration page.
2. Under iSCSI Portal Configuration, add or modify iSCSI portal configurations using the controls described in the following table.
Control Description
Add an iSCSI Portal Displays controls for configuring and adding a new iSCSI portal.
IP Address Specify the IP address of the iSCSI portal.
Port Specify the port number of the iSCSI portal. The default is 3260.
Authentication Select an authentication method (None or CHAP) from the drop-down list. Note: If you select CHAP, an additional field displays in which you can specify (or create) the CHAP username.
Add iSCSI Portal Adds the defined iSCSI portal to the running configuration.
3. To view or modify portal settings, click the portal IP address in the list to access the following set of controls.
Control Description
Portal Settings Specify the following: Port - The port setting for the selected iSCSI portal. Authentication - Specify either None or CHAP from the drop-down list. Update iSCSI Portals - Updates the portal settings configuration.
Offline LUNs Click Offline LUNs to take offline all LUNs serviced by this selected iSCSI portal.

4. To add a target to the newly configured portal:

a. Click the portal IP address in the list to expand the set of controls.
b. Under Targets, add a target for the portal using the controls described in the following table.
Control Description
Add a Target Displays controls for adding a target.
Target Name Enter the target name or choose from available targets. Note: This field also enables you to rescan for available targets.
Port Specify the port number of the target.
Snapshot Configuration From the drop-down list, select an existing snapshot configuration. If the desired snapshot configuration does not appear on the list, you can add a new one by clicking Add New Snapshot Configuration. You will be prompted to specify the following: Host Name or IP Address - Specify the IP address or hostname of the storage array. Type - Select the type from the drop-down list. Username - Specify the username. Password/Confirm Password - Specify a password. Retype the password in the Password Confirm text field. Protocol - Specify either HTTP or HTTPS. Port - Specify the HTTP or HTTPS port number. Note: The Protocol and Port fields are only activated if NetApp is selected as Type.
Add Target Adds the newly defined target to the current iSCSI portal configuration.

5. To modify an existing target configuration:

a. Click the portal IP address in the list to expand the set of controls.
b. Under Targets, click the target name in the Targets table to expand the target settings using the controls described in the following table.

Control Description
Target Settings Open this tab to modify the port and snapshot configuration settings. Optionally, you can add a new snapshot configuration dynamically by clicking the Add New Snapshot Configuration link adjacent to the setting field.
Offline LUNs Open this tab to access the Offline LUNs button. Clicking this button takes offline all configured LUNs serviced by the current target.

Configuring LUNs
To configure an iSCSI LUN, complete the following steps.

1. Choose Configure > Storage > LUNs to display the LUNs page.
2. Configure the LUN using the controls described in the following table.

Control Description
Add an iSCSI LUN Displays controls for adding an iSCSI LUN to the current configuration.
LUN Serial Number Select from the drop-down list of discovered LUNs. The LUNs listed are shown using the following format: serial number (portal/target). Note: If the desired LUN does not appear, scroll to the bottom of the list and select Rescan background storage for new LUNs.
LUN Alias Specify an alias for the LUN.
Add iSCSI LUN Adds the new LUN to the running configuration.

3. To modify an existing iSCSI LUN configuration, click the name in the LUN list to display additional controls.

Control Description
Details Displays online or offline status. Click Offline to take the LUN offline. Click Online to bring the LUN online. Additionally, displays the following information about the LUN: Connection status, Locally Assigned LUN Serial, Origin LUN Serial, Origin Portal, Origin Target, Size (in MB).
Alias Displays the LUN alias. Optionally, you can modify the value and click Update Alias.
Edge Mapping Displays the Granite Edge appliance to which the LUN is mapped. To unmap, click Unmap.
Failover Displays whether the LUN is configured for failover. To enable or disable, click Disable or Enable.
MPIO Displays multipath information for the LUN. Additionally, the MPIO policy can be changed from round-robin (default) to fixed-path.
Snapshots Displays the snapshot configurations for the selected iSCSI LUN. Additionally, controls link to the settings for modifying and updating snapshots.
Pin/Prepop Displays the pin status (Pinned or Unpinned) and provides controls for changing the status. When a LUN is pinned, the data is reserved and not subject to the normal blockstore eviction policies. This tab also contains controls for enabling or disabling the prepopulation service and for configuring a prepopulation schedule. Note: You can create a prepopulation schedule only when the pin status is set and updated to pinned.

Configuring local LUNs
To configure a local LUN, complete the following steps.

1. Choose Configure > Storage > LUNs to display the LUNs page.

2. Configure the LUN using the controls described in the following table.

Control Description
Add a Local LUN Displays controls for adding a local LUN to the current configuration. Local LUNs consist of storage on the Granite Edge only; there is no corresponding LUN on the Granite Core.
Granite Edge Select a Granite Edge appliance from the drop-down list. This list displays configured Granite Edge appliances.
Size Specify the LUN size, in MB.
Alias Specify the alias for the LUN.
Add a Local LUN Adds the new LUN to the running configuration.
3. To modify an existing local LUN configuration, click the name in the LUN list to display additional controls.

Control Description
LUN Status Displays online or offline status. Click Offline to take the LUN offline. Click Online to bring the LUN online.
LUN Details Displays the following information about the LUN: VE assigned serial number, Granite Edge appliance, Target.
LUN Alias Displays the LUN alias, if applicable. Optionally, modify the value and click Update Alias.

Adding Granite Edge appliances
To add or modify Granite Edge appliances:

1. Choose Configure > Storage > Granite Edges to display the Granite Edges page.
2. Configure the Granite Edge using the controls described in the following table.

Control Description
Add a Granite Edge Displays controls for adding a Granite Edge appliance to the current configuration.
Granite Edge Identifier Specify the identifier for the Granite Edge appliance. This value must match the same value configured on the Granite Edge appliance. Note: Granite Edge identifiers are case-sensitive.
Blockstore encryption Changes the encryption used when writing data to the blockstore.
Add Granite Edge Adds the new Granite Edge appliance to the running configuration. The newly added appliance appears in the list.

3. To remove an existing Granite Edge configuration, click the trash icon in the Remove column.

Configuring CHAP users
You can configure CHAP users in the CHAP Users page.

Note: You can also configure CHAP users dynamically in the iSCSI Configuration page.

To configure CHAP users, complete the following steps.

1. Choose Configure > Storage > CHAP Users to display the CHAP Users page.
2. Add new CHAP users using the controls described in the following table.

Control Description
Add a CHAP User Displays controls for adding a new CHAP user to the running configuration.
Username Specify a CHAP username.
Password/Confirm Password Specify and confirm a password for the new CHAP user.
Add CHAP User Adds the new CHAP user to the running configuration.

3. To modify an existing CHAP user configuration, click the username in the User table to expand a set of additional controls.

New CHAP users are enabled by default.

4. To disable a CHAP user:

a. Click the username to expand the set of additional controls.

b. Clear the Enable check box.

c. Click Update CHAP User.

5. To change the user password, enter and confirm the new password and click Update CHAP User.

6. To remove an existing CHAP user configuration, click the trash icon in the Delete column.

7. Click Save to save your settings permanently.

Confirming connection to the Granite Edge appliance
This section describes how to confirm that the Granite Edge appliance is communicating with the newly configured Granite Core appliance.

To confirm connection, complete the following steps:

1. Log in to the Management Console of the Granite Edge appliance.

2. Choose Configure > Granite > Granite Storage to go to the Granite Storage page.
If the connection was successful, the page displays connection details including the iSCSI target configuration and LUN information.
References
For more information, refer to Riverbed's website at http://www.riverbed.com.
Granite Core Deployment Guide
Granite Core Installation and Configuration Guide
Riverbed Branch Office Infrastructure for EMC Storage Systems (Reference Architecture)
Silver Peak appliances
This section provides information on the Silver Peak optimization controller appliances. The following topics are discussed:
Overview on page 63
Terminology on page 64
Features on page 66
Deployment topologies on page 67
Failure modes supported on page 67
FCIP environment on page 67
GigE environment on page 68
References on page 69
Overview
Silver Peak appliances are interconnected by tunnels, which transport optimized traffic flows. Policies control how the appliance filters LAN-side packets into flows and whether:
an individual flow is directed to a tunnel, shaped, and optimized;
processed as shaped, pass-through (unoptimized) traffic;
processed as unshaped, pass-through (unoptimized) traffic;
continued to the next applicable Route Policy entry if a tunnel goes down; or
dropped.
The appliance manager has separate policies for routing, optimization, and QoS functions. These policies prescribe how the appliance handles the LAN packets it receives.
The optimization policy uses optimization techniques to improve the performance of applications across the WAN. Optimization policy actions include network memory, payload compression, and TCP acceleration.
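The per-flow policy decision described above can be sketched as a first-match lookup. The structure, match functions, and port numbers below are illustrative assumptions, not Silver Peak's actual policy engine.

```python
# Illustrative sketch of first-match flow classification (hypothetical
# model): each flow is matched against an ordered policy list, and the
# first matching entry decides whether the flow is tunneled and
# optimized, passed through shaped or unshaped, or dropped.

def classify(flow, policy):
    """Return the action of the first matching policy entry."""
    for match, action in policy:
        if match(flow):
            return action
    return "pass-through-unshaped"  # default when nothing matches


policy = [
    (lambda f: f["dst_port"] == 3260, "tunnel-optimized"),   # iSCSI replication traffic
    (lambda f: f["dst_port"] == 53,   "pass-through-shaped"),
    (lambda f: f["dst_port"] == 23,   "drop"),
]

print(classify({"dst_port": 3260}, policy))  # tunnel-optimized
print(classify({"dst_port": 8080}, policy))  # pass-through-unshaped
```

Because the list is ordered, entry order matters: placing a broad match ahead of a narrow one would shadow the narrow entry, which is also why the Route Policy can fall through to the next applicable entry when a tunnel is down.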
Silver Peak ensures network integrity by using QoS management, Forward Error Correction, and Packet Order Correction. When Adaptive Forward Error Correction (FEC) is enabled, the appliance introduces a parity packet, which helps detect and correct single-packet loss within a stream of packets, reducing the need for retransmissions. Silver Peak can dynamically adjust how often this parity packet is introduced in response to changing link conditions. This can help maximize error correction while minimizing overhead.
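The single-packet recovery property of a parity packet can be shown with plain XOR. This is a simplified sketch of the general technique, assuming equal-length packets; production FEC schemes are more elaborate than this.

```python
# Simplified XOR parity FEC: XOR-ing a block of packets yields one
# parity packet. If exactly one packet in the block is lost, XOR-ing
# the survivors with the parity reconstructs it, with no retransmission.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def parity(packets):
    """XOR all packets in the block into a single parity packet."""
    p = bytes(len(packets[0]))
    for pkt in packets:
        p = xor(p, pkt)
    return p


packets = [b"pkt1", b"pkt2", b"pkt3"]
p = parity(packets)

# Lose packets[1] in transit; recover it from the survivors plus parity.
recovered = xor(xor(packets[0], packets[2]), p)
print(recovered)  # b'pkt2'
```

Adapting the block size, as the text describes, trades overhead against protection: one parity packet per few data packets corrects loss on a poor link, while one per many packets keeps overhead low on a clean link.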
To avoid retransmissions that occur when packets arrive out of order, Silver Peak appliances use Packet Order Correction (POC) to resequence packets on the far end of a WAN link, as needed.
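Resequencing at the far end can be modeled as a hold-and-release buffer keyed by sequence number. The class below is a hypothetical simplification of the general technique, not Silver Peak's implementation.

```python
# Simplified resequencing buffer: packets that arrive ahead of the next
# expected sequence number are held, then released in order once the
# gap fills, so the receiver never observes the reordering.

class Resequencer:
    def __init__(self):
        self.expected = 0
        self.held = {}  # seq -> packet, waiting for the gap to fill

    def receive(self, seq, pkt):
        """Return the packets that can now be delivered in order."""
        self.held[seq] = pkt
        out = []
        while self.expected in self.held:
            out.append(self.held.pop(self.expected))
            self.expected += 1
        return out


r = Resequencer()
print(r.receive(1, "B"))  # [] -- held; packet 0 has not arrived yet
print(r.receive(0, "A"))  # ['A', 'B'] -- gap filled, delivered in order
```

Hiding the reordering this way matters because TCP receivers treat out-of-order arrival as a loss signal: three duplicate ACKs trigger a retransmission the sender did not need to make.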
Terminology
Consider the following terminology when using Silver Peak configuration settings:
Coalescing ON Enables/disables packet coalescing. Packet coalescing transmits smaller packets in groups of larger packets, thereby increasing performance and helping to overcome the effects of latency.
Coalesce Wait Timer (in milliseconds) used to determine the amount of time to wait before transmitting coalesced packets.
Compression Reduces t