Describe the purpose of TCP optimization
TCP tuning techniques adjust the network congestion avoidance parameters of TCP connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases. For enterprises delivering Internet and extranet applications, TCP/IP inefficiencies, coupled with the effects of WAN latency and packet loss, all conspire to adversely affect application performance. These inefficiencies inflate response times for applications and significantly reduce bandwidth utilization efficiency (the ability to “fill the pipe”).
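One concrete tuning step on such links is sizing the TCP send/receive buffers to the bandwidth-delay product (BDP), so the window is never limited by buffer space. A minimal Python sketch follows; the 100 Mbit/s bandwidth and 80 ms RTT are illustrative assumptions, and the OS may clamp the requested buffer sizes:

```python
# Sketch: sizing TCP buffers to the bandwidth-delay product (BDP),
# a common TCP tuning step on high-bandwidth, high-latency links.
import socket

bandwidth_bps = 100_000_000      # assumed 100 Mbit/s WAN link
rtt_seconds = 0.080              # assumed 80 ms round-trip time

# BDP = bandwidth * delay: the bytes "in flight" needed to fill the pipe.
bdp_bytes = int(bandwidth_bps / 8 * rtt_seconds)   # 1,000,000 bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request buffers at least as large as the BDP so the window is never
# limited by buffer space (the kernel may adjust the actual value).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
sock.close()
print(bdp_bytes)
```

If the buffer is smaller than the BDP, the sender stalls waiting for acknowledgments before the pipe is full, which is exactly the underutilization the text describes.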
F5’s BIG-IP® Local Traffic Manager provides a state-of-the-art TCP/IP stack that delivers dramatic WAN and LAN application performance improvements for real-world networks. These advantages cannot be seen in typical packet-blasting test harnesses; rather, the stack is designed to deal with real-world client and Internet conditions.
This highly optimized TCP/IP stack, called TCP Express, combines cutting-edge TCP/IP techniques and improvements in the latest RFCs with numerous improvements and extensions developed by F5 to minimize the effect of congestion and packet loss and recovery. Independent testing tools and customer experiences have shown TCP Express delivers up to a 2x performance gain for end users and a 4x improvement in bandwidth efficiency with no change to servers, applications, or the client desktops.
TCP Express White Paper
A keepalive (KA) is a message sent by one device to another to check that the link between the two is operating, or to prevent this link from being broken. The Hypertext Transfer Protocol supports explicit means for maintaining an active connection between client and server. HTTP persistent connection, also called HTTP keep-alive or HTTP connection reuse, is the idea of using a single TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new connection for every single request/response pair.
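Connection reuse can be demonstrated with Python's standard library alone. In this sketch a local HTTP/1.1 server (persistent connections are the HTTP/1.1 default) answers two requests sent over the same `http.client` connection object, i.e. the same underlying TCP connection:

```python
# Sketch: HTTP keep-alive against a local stdlib server.
# protocol_version = "HTTP/1.1" makes connections persistent by default,
# so http.client reuses one TCP connection for both requests.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # enables persistent connections
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):          # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
statuses = []
for _ in range(2):
    conn.request("GET", "/")               # same TCP connection both times
    resp = conn.getresponse()
    resp.read()                            # drain the body so reuse is possible
    statuses.append(resp.status)
conn.close()
server.shutdown()
print(statuses)                            # [200, 200]
```

Without keep-alive, each request would pay the TCP handshake (and any TLS handshake) again, which is the overhead persistent connections avoid.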
Dynamic caching completely changes the caching model, making it possible to cache a much broader variety of content including highly dynamic Web pages, query responses, and XML objects. Dynamic caching is a patented technology unique to F5.
The F5 BIG-IP® WebAccelerator makes dynamic caching possible by implementing two key capabilities: a sophisticated matching algorithm that links fully qualified user queries to cached content, and a cache invalidation mechanism triggered by application and user events.
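The second capability, event-triggered invalidation, can be sketched in miniature. This is a hypothetical illustration only; the function and key names are invented for the example and are not WebAccelerator APIs:

```python
# Hypothetical sketch: a query cache whose entries are evicted when an
# application event (e.g., a catalog update) fires. Names are illustrative.
cache = {}

def get(query, compute):
    """Return the cached response for a query, computing it on a miss."""
    if query not in cache:
        cache[query] = compute(query)
    return cache[query]

def invalidate_on_event(prefix):
    """An application/user event evicts all entries matching the prefix."""
    for key in [k for k in cache if k.startswith(prefix)]:
        del cache[key]

get("catalog?item=1", lambda q: "v1")   # first lookup populates the cache
invalidate_on_event("catalog")          # update event: drop now-stale entries
print("catalog?item=1" in cache)        # False — next lookup recomputes
```

The point of the pattern is that dynamic content stays cacheable as long as something reliably signals when it becomes stale.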
Describe the purpose of compression
In computer science and information theory, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation. Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by identifying unnecessary information and removing it.
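The lossless case is easy to see with the standard `zlib` module: redundant input shrinks dramatically, and decompression recovers the original bytes exactly:

```python
# Sketch: lossless compression with zlib — statistical redundancy is
# removed, and the original data is recovered bit-for-bit.
import zlib

original = b"abc" * 1000                    # highly redundant input
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), len(compressed))       # compressed is far smaller
print(restored == original)                 # True: no information is lost
```

Lossy schemes (JPEG, MP3) would instead discard detail that the consumer is unlikely to notice, trading fidelity for even fewer bits.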
Advanced compression increases application performance across a network. In contrast to packet-based compression, it operates at the session layer (layer 5 of the seven-layer OSI model), compressing homogeneous data sets while addressing all application types. This approach generates higher system throughput and minimizes latency.
F5 BIG-IP® WAN Optimization Module™ combines advanced compression with a system architecture built for high performance. BIG-IP is specifically designed to address the needs of bandwidth-intensive networks.
Intelligent compression removes redundant patterns from a data stream to improve application performance. This technique is commonly used for Web applications to help reduce bandwidth needs and improve end-user response times.
The F5 BIG-IP® product family can target specific applications for compression to give the greatest possible benefit to end users. The BIG-IP system monitors TCP round-trip times to calculate user latency, allowing BIG-IP to devote more power to compressing traffic for those who need it most.
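The idea of spending compression effort where latency is highest can be sketched as follows. This is a hypothetical illustration of the policy, not BIG-IP's implementation; the threshold and function names are invented for the example:

```python
# Hypothetical sketch: devote more compression effort to high-latency
# clients, as measured by TCP round-trip time. Names/values are illustrative.
import zlib

RTT_THRESHOLD_MS = 100          # assumed cutoff for a "high latency" client

def choose_compression_level(rtt_ms: float) -> int:
    """Spend more CPU compressing for clients with long round trips."""
    return 9 if rtt_ms > RTT_THRESHOLD_MS else 1

def respond(body: bytes, rtt_ms: float) -> bytes:
    return zlib.compress(body, choose_compression_level(rtt_ms))

payload = b"<html>" + b"repeated content " * 500 + b"</html>"
fast_client = respond(payload, rtt_ms=20)    # nearby LAN client: light effort
slow_client = respond(payload, rtt_ms=250)   # WAN client: maximum effort
print(len(slow_client) <= len(fast_client))  # higher level shrinks it at least as much
```

The trade-off is CPU for bytes: for a low-latency client the transfer is fast anyway, so light compression suffices, while a high-latency client benefits most from every byte saved.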
Pipelining is a natural concept in everyday life, e.g. on an assembly line. Consider the assembly of a car: assume that certain steps in the assembly line are to install the engine, install the hood, and install the wheels (in that order, with arbitrary interstitial steps). A car on the assembly line can have only one of the three steps done at once. After the car has its engine installed, it moves on to having its hood installed, leaving the engine installation facilities available for the next car. The first car then moves on to wheel installation, the second car to hood installation, and a third car begins to have its engine installed. If engine installation takes 20 minutes, hood installation takes 5 minutes, and wheel installation takes 10 minutes, then finishing all three cars when only one car can be assembled at once would take 105 minutes. On the other hand, using the assembly line, the total time to complete all three is 75 minutes. At this point, additional cars will come off the assembly line at 20 minute increments.
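The assembly-line arithmetic above can be checked directly, using the stage times given in the text:

```python
# The assembly-line numbers from the text, verified in code (minutes).
engine, hood, wheels = 20, 5, 10
cars = 3

# One car at a time: each car needs a full 35-minute pass, in sequence.
sequential = cars * (engine + hood + wheels)                # 3 * 35 = 105

# Pipelined: the slowest stage (engine, 20 min) sets the cadence. The
# first car takes one full pass; each later car finishes 20 minutes
# after the one before it.
pipelined = (engine + hood + wheels) + (cars - 1) * engine  # 35 + 40 = 75

print(sequential, pipelined)   # 105 75
```

Note that the slowest stage, not the sum of the stages, determines the steady-state output rate: one finished car every 20 minutes.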
HTTP pipelining is initiated by the browser by opening a connection to the server and then sending multiple requests without waiting for a response. Once the requests are all sent, the browser starts listening for responses. This is considered an acceleration technique because sending all the requests to the server at once saves the round-trip time (RTT) that would otherwise be spent waiting for a response after each request.
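Pipelining can be reproduced with a raw socket against a local stdlib server: both GET requests are written before any response is read, so the client pays roughly one round trip instead of two. This is a minimal sketch, not a browser-grade implementation (real clients must also handle ordering and partial responses):

```python
# Sketch: HTTP pipelining over a raw socket against a local HTTP/1.1
# server. Two requests are sent back to back; both responses come
# back in order on the same connection.
import socket
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # persistent; requests handled in order
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

request = b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"
with socket.create_connection(server.server_address) as sock:
    sock.sendall(request + request)        # both requests, no wait in between
    sock.shutdown(socket.SHUT_WR)          # done sending; now read everything
    data = b""
    while chunk := sock.recv(4096):
        data += chunk

server.shutdown()
print(data.count(b"200 OK"))               # 2 — both responses received
```

In practice HTTP/1.1 pipelining saw limited browser adoption because of head-of-line blocking and buggy intermediaries; HTTP/2 multiplexing later addressed the same RTT problem more robustly.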