
WAN Optimization Controllers

Definition

WAN optimization, also known as WAN acceleration, aims to streamline application delivery over the WAN through a set of techniques that accelerate, deduplicate and compress the data sent across the WAN. These techniques operate at several OSI layers – the data, transport and application layers:

  • Acceleration – optimize the TCP protocol
  • Compression – shrink the data sent across the WAN
  • Caching – store data locally, thus avoiding WAN re-transmission
  • Shaping – enforce bandwidth allocation for high-priority applications.

The hardware and software devices used to perform WAN optimization are called WAN Optimization Controllers (WOCs).
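To make the deduplication and caching techniques above concrete, here is a minimal Python sketch (my own illustration, not an actual WOC implementation): each fixed-size chunk of the payload is hashed, chunks the peer is already known to hold are replaced by short references, and new chunks are sent compressed. Real WOCs typically deduplicate at the byte-sequence level rather than on fixed chunks.

```python
import hashlib
import zlib

CHUNK = 4096  # fixed chunk size for this sketch; real WOCs use variable byte sequences

def optimize(data: bytes, peer_cache: set) -> list:
    """Encode data as ('ref', digest) for chunks the peer holds,
    or ('raw', compressed_bytes) for chunks sent for the first time."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in peer_cache:
            out.append(("ref", digest))            # cache hit: send a short reference
        else:
            peer_cache.add(digest)                 # first transfer: compress and send
            out.append(("raw", zlib.compress(chunk)))
    return out

cache = set()
payload = b"A" * 8192                              # highly redundant data
first = optimize(payload, cache)                   # second chunk repeats the first
second = optimize(payload, cache)                  # full resend: all cache hits
print([kind for kind, _ in first])                 # ['raw', 'ref']
print([kind for kind, _ in second])                # ['ref', 'ref']
```

On a resend, only tiny references cross the WAN – which is exactly why repeated transfers of similar data see the largest gains from a WOC.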

User Benefits

WAN optimization is a symmetrical technology – it has to be deployed at both ends of a WAN link. The endpoint of a WAN link can be either a hardware or a software (virtualized) device, but also a software client installed on end-user endpoints.

There are two main benefits of a WOC:

1. Application Acceleration

Traditionally, WAN optimization had a single purpose: to speed up application delivery over the WAN. Hence the main goals of the WOC:

  • Improved application response times over the WAN
  • Improved transfer speed for files over the WAN

Hence, remote users (branch office and mobile) can access WAN applications at LAN speeds. This technology proves especially helpful over high-latency links, such as international VPN connections, satellite connections or mobile 3G connectivity.

WOCs also enable real-time synchronization between the main data center (DC) and the disaster recovery center (DRC), as well as shorter backup times over the WAN.

2. Consolidation

Once virtualization technologies matured, new functionality was added to the WOC to augment its application delivery role. Organizations now have the option to consolidate branch connectivity and services onto the WOC – a capability called Branch Office in a Box (BOB). BOB functionality can consist of:

  • Connectivity – VPN connectivity to the HQ
  • Security – firewall, UTM and SWG technologies
  • Applications – consolidation of all local branch services onto the WOC via virtualization, e.g. local AD services, print servers, DHCP, online sharing and video-conferencing servers, video surveillance, etc.

Since WOCs optimize applications and understand application behavior, close integration with Application Performance Monitoring solutions is a natural next step for measuring, quantifying and continuously optimizing the application delivery framework.

 

Business Impact

As WOCs enable application delivery over the WAN as well as branch office consolidation, several considerations should be taken into account when deploying WOC technologies:

  • Since WOC hardware appliances are deployed inline, the WOC should support transparent operation – no IP addressing on the traffic interfaces – so that traffic is not impacted if the device malfunctions
  • The WOC appliances should be fault tolerant
  • If the WOC is also used as a BOB, the hardware appliance has to be adequately sized in terms of hardware specifications
  • Rapid deployment functionality should also be available, to speed up replacement at remote locations
  • A centralized management platform should be employed for large deployments
  • Acceleration for SSL applications might require a more intrusive deployment

Products supporting this technology

Riverbed

The myth of bandwidth

Often, especially where WAN capacity is not a constraint, the natural and obvious answer to slow application response times over the WAN seems to be buying more bandwidth.

However, this is not entirely true – increasing the bandwidth does not automatically improve application response time over the WAN. Why? Because the theoretical TCP throughput of a WAN link is determined largely by the TCP/IP protocol itself, together with the RTT (round-trip time) and the packet loss rate, rather than by the bandwidth purchased from the service provider.

Tech Abstracts

1. TCP Maximum Throughput

The adopted formula for calculating the theoretical TCP rate of a WAN link is the Mathis et al. formula, with C = 1:

TCP rate < (MSS/RTT) * (1/sqrt(p))

Where:

  • MSS (maximum segment size) = MTU (maximum transmission unit) – TCP/IP headers
  • RTT: round-trip time (ms)
  • p: packet loss rate, as a fraction (e.g. 0.01 for 1%)

The MSS (maximum segment size) is the actual data payload transferred over the WAN link in a single TCP/IP packet. After subtracting the TCP and IP headers from the MTU, we reach a theoretical limit of 1460 bytes of actual data per packet. That is, provided the link is unencrypted and all network devices between the two locations accept this MSS. If the link is encrypted (IPsec VPN), or some networking devices along the WAN path do not accept this MSS, then the MSS can drop to 1300 bytes or even lower.
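As a quick check, the Mathis bound can be computed in a few lines of Python (a sketch; the function name is mine):

```python
from math import sqrt

def mathis_tput_mbps(mss_bytes: int, rtt_ms: float, loss: float) -> float:
    """Upper bound on TCP throughput per Mathis et al. (C = 1):
    rate < MSS / (RTT * sqrt(p)), returned in Mbps."""
    mss_bits = mss_bytes * 8
    rtt_s = rtt_ms / 1000.0
    return mss_bits / (rtt_s * sqrt(loss)) / 1e6

# Encrypted VPN link: MSS 1300 bytes, 70 ms RTT, 1% packet loss
print(round(mathis_tput_mbps(1300, 70, 0.01), 1))   # 1.5 (Mbps)
```

Plugging in the other rows of the tables below reproduces their throughput values (to rounding).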

The tables below present the theoretical TCP throughput of an encrypted IPSec VPN WAN link with respect to RTT and packet loss.

T'put (Mbps)  MSS (bytes)  RTT (ms)  p
5             1300         70        0.10%
3             1300         70        0.30%
2             1300         70        0.50%
1.5           1300         70        1.00%
0.8           1300         70        3.00%
0.6           1300         70        5.00%

T'put (Mbps)  MSS (bytes)  RTT (ms)  p
2.9           1300         50        0.5%
2.4           1300         60        0.5%
2.1           1300         70        0.5%
1.8           1300         80        0.5%
1.6           1300         90        0.5%
1.4           1300         100       0.5%

Conclusion: the maximum TCP throughput of an IPsec-encrypted WAN VPN link with 1% packet loss and 70 ms RTT is at most 1.5 Mbps, regardless of how large the WAN link is.

This is the theoretical maximum; in practice, throughput is usually much lower, because we did not account for the latency introduced by the applications and servers themselves, which depends on the hardware and the applications used.

2. TCP Maximum Throughput for a given link

This example applies very well between Data Centers, where large amounts of data are to be carried over large WAN links, but also over long distances.

In such scenarios, if we assume a dedicated 1 Gbps fiber link (dark fiber) with no packet loss, we would like to know the actual achievable throughput between, for example, two servers replicating over this link.

Hence, a more appropriate formula here would be:

Throughput (bps) = TCP-Window-Size (bits) / Latency (s), where:

  • Throughput is the actual measured throughput in bps
  • TCP-Window-Size is the TCP window size – for a Windows machine the classic default is 64 KB = 65,536 bytes = 524,288 bits
  • Latency (RTT) – let’s assume 50 ms = 0.05 s

Now, we can calculate the maximum throughput between the 2 servers:

T (bps) = 524,288 bits / 0.05 seconds = 10,485,760 bps ≈ 10.5 Mbps

Conclusion: the maximum amount of data that can be transferred between two Windows servers over a 1 Gbps dark fiber link, with 0% packet loss and 50 ms RTT, is about 10.5 Mbps.
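The window/latency formula above is easy to verify in code (a sketch; the function name is mine):

```python
def window_limited_mbps(window_bytes: int, rtt_s: float) -> float:
    """Throughput ceiling when limited by the TCP window:
    throughput = window / RTT, returned in Mbps."""
    return window_bytes * 8 / rtt_s / 1e6

# Classic 64 KB window, 50 ms RTT – the link speed never enters the formula
print(round(window_limited_mbps(64 * 1024, 0.05), 1))   # 10.5 (Mbps)
```

Note that the 1 Gbps link capacity appears nowhere in the calculation: with an unscaled 64 KB window, a single TCP flow simply cannot fill the pipe.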

The Overall Conclusion

It is clear that the TCP throughput depends on:

  • RTT – Round Trip Time
  • Packet Loss
  • TCP Window Size
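Inverting the window/latency formula shows why the window size matters: it gives the amount of data that must be in flight to keep a link full – the bandwidth-delay product. A brief sketch (function name is mine):

```python
def window_needed_bytes(link_mbps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: window needed to keep the link full."""
    return link_mbps * 1e6 * rtt_s / 8

# Filling a 1 Gbps link at 50 ms RTT requires ~6.25 MB in flight,
# roughly 100x the classic 64 KB default window.
print(window_needed_bytes(1000, 0.05) / 1e6)   # 6.25 (MB)
```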

On top of this come the application layers (OSI Layers 5–7), which introduce additional latency and actually determine the number of packets sent over the WAN link.

Thus, the need for a WAN optimization solution is obvious, in order to overcome these TCP and application shortcomings.

But is it feasible? Is it worth deploying such a solution? How and what can I measure in order to quantify the benefits?

 

The Solution

As I see it, the answer is simple and straightforward: measure and quantify.

Step 1. Measurement

Riverbed Cascade allows one to measure WAN performance for both the network and the applications – throughput and application response time – as a baseline, before deploying a WAN optimization solution.

Step 2. Enhancement

Based on the findings from Step 1, Riverbed Steelhead can be deployed to address the latency and throughput shortcomings by employing:

Data Streamlining

  • Reduce WAN bandwidth utilization by 60–95%
  • Eliminate redundant data transfers at the byte-sequence level
  • Perform cross-application optimization
  • Provide Quality-of-Service marking and enforcement for all TCP and UDP applications

Transport Streamlining

  • Applications run up to 100 times faster
  • Reduce transport protocol chattiness by 65% to 98%
  • Automatically adjust transfer parameters based on network conditions
  • Enable up to 95% utilization on high-bandwidth, high-latency connections
  • Optimize and accelerate secure business applications via SSL support

Application Streamlining

  • Applications run up to 100 times faster
  • Reduce application protocol chattiness by 65% to 98%
  • Address the most important application protocols: CIFS, NFS, MAPI (2000–2007), HTTP(S), MS-SQL, Oracle 11i, etc.
  • Identify and improve the handling of large-scale data transfers

Step 3. Measurement for Steelhead optimized environment

At this step, we run again the very same tests performed at Step 1. Based on the delta shown between the 2 measurements, we could quantify now the advantages of Steelhead WAN optimization solution.

Remote offices are often small, with fewer than 5 connected devices, for example. In such deployments it might be difficult to build a business case for WAN optimization – the ratio of solution cost to connected devices can be quite high.

Here, another advantage of Riverbed Steelhead comes into play.

Step 4. BOB – Branch Office in a Box

Riverbed Services Platform (RSP) uses VMware to provide dedicated resource instances for certified software modules to run on. The RSP offers software vendors a unique development platform and easy interoperability with data and applications at the network level. For customers, the RSP is a protected partition on the Steelhead appliance to run best-of-breed services and applications while minimizing the branch office hardware infrastructure.

Thus, hardware appliances such as firewall/VPN devices, web security devices, Microsoft servers and multimedia servers – virtually any service that could be deployed in a VMware environment – can be consolidated onto the Steelhead.

With consolidation, the business case for WAN optimization is substantially augmented.
