Load balancer features



Hardware and software load balancers may have a variety of special features. The fundamental feature of a load balancer is to be able to distribute incoming requests over a number of backend servers in the cluster according to a scheduling algorithm. Most of the following features are vendor-specific:
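As an illustration of that basic idea only (not any particular vendor's implementation), a simple round-robin scheduler can be sketched in a few lines of Python; the backend addresses are placeholders:

from itertools import cycle

# Hypothetical backend pool; a real balancer would discover these dynamically.
backends = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

# Round robin: hand out backends in a fixed rotation, one per request.
rotation = cycle(backends)

def pick_backend():
    """Return the next backend in the rotation."""
    return next(rotation)

# Ten simulated requests cycle evenly through the three servers.
for request_id in range(10):
    print(request_id, pick_backend())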

Asymmetric load: A ratio can be manually assigned to cause some backend servers to get a greater share of the workload than others. This is sometimes used as a crude way to account for some servers having more capacity than others and may not always work as desired.

Priority activation: When the number of available servers drops below a certain number, or load gets too high, standby servers can be brought online.
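A minimal sketch of the priority-activation idea; the server names and threshold are illustrative, not from any particular product:

# Illustrative pools; a real balancer would maintain these via health checks.
active = ["web1", "web2", "web3"]
standby = ["spare1", "spare2"]
MIN_ACTIVE = 3  # bring standbys online when fewer than this are available

def ensure_capacity(active, standby, min_active=MIN_ACTIVE):
    """Promote standby servers until the active pool meets the threshold."""
    while len(active) < min_active and standby:
        active.append(standby.pop(0))
    return active, standby

# Simulate a failure dropping the pool below the threshold.
active.remove("web2")
active, standby = ensure_capacity(active, standby)
print(active, standby)   # spare1 is promoted to keep three servers active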

SSL Offload and Acceleration: Depending on the workload, processing the encryption and authentication requirements of an SSL request can become a major part of the demand on the Web server's CPU; as the demand increases, users will see slower response times, as the SSL overhead is distributed among Web servers. To remove this demand from Web servers, a balancer can terminate SSL connections, passing HTTPS requests as HTTP requests to the Web servers. If the balancer itself is not overloaded, this does not noticeably degrade the performance perceived by end users. The downside of this approach is that all of the SSL processing is concentrated on a single device (the balancer), which can become a new bottleneck. Some load balancer appliances include specialized hardware to process SSL. Instead of upgrading the load balancer, which is quite expensive dedicated hardware, it may be cheaper to forgo SSL offload and add a few Web servers. Also, some server vendors such as Oracle/Sun now incorporate cryptographic acceleration hardware into their CPUs, such as the T2000. F5 Networks incorporates a dedicated SSL acceleration hardware card in their Local Traffic Manager (LTM), which is used for encrypting and decrypting SSL traffic. One clear benefit of SSL offloading in the balancer is that it enables the balancer to do load balancing or content switching based on data in the HTTPS request.

Distributed Denial of Service (DDoS) attack protection: load balancers can provide features such as SYN cookies and delayed binding (the back-end servers don't see the client until it finishes its TCP handshake) to mitigate SYN flood attacks and generally offload work from the servers to a more efficient platform.
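Returning to SSL offload: purely as an illustration of the pattern (terminate TLS at the balancer, forward plain HTTP to a backend), a sketch with Python's standard library follows. The certificate files, listening port, and backend address are assumptions, not part of any product described above.

import http.client
import http.server
import ssl

BACKEND = ("127.0.0.1", 8080)        # assumed plain-HTTP backend

class TerminatingProxy(http.server.BaseHTTPRequestHandler):
    """Accept HTTPS from the client, forward the request as plain HTTP."""

    def do_GET(self):
        upstream = http.client.HTTPConnection(*BACKEND)
        upstream.request("GET", self.path, headers=dict(self.headers))
        resp = upstream.getresponse()
        body = resp.read()
        self.send_response(resp.status)
        for name, value in resp.getheaders():
            if name.lower() not in ("transfer-encoding", "connection"):
                self.send_header(name, value)
        self.end_headers()
        self.wfile.write(body)
        upstream.close()

if __name__ == "__main__":
    server = http.server.ThreadingHTTPServer(("0.0.0.0", 8443), TerminatingProxy)
    # TLS is terminated here; the backend only ever sees plain HTTP.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("cert.pem", "key.pem")   # assumed certificate files
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()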

HTTP compression: reduces the amount of data to be transferred for HTTP objects by utilizing gzip compression, available in all modern web browsers. The larger the response and the further away the client is, the more this feature can improve response times. The trade-off is that this feature puts additional CPU demand on the load balancer and could be done by Web servers instead.
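A small illustration of the gzip trade-off; the sample payload is made up, and real savings depend on how compressible the response is:

import gzip

# A repetitive HTML-like payload; real responses vary, this is illustrative.
body = b"<html><body>" + b"<p>hello world</p>" * 500 + b"</body></html>"

compressed = gzip.compress(body)
print(len(body), "bytes uncompressed")
print(len(compressed), "bytes after gzip")
# Whichever device compresses (balancer or web server) would also send:
#   Content-Encoding: gzip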

TCP offload: different vendors use different terms for this, but the idea is that normally each HTTP request from each client is a different TCP connection. This feature utilizes HTTP/1.1 to consolidate multiple HTTP requests from multiple clients into a single TCP socket to the back-end servers.
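The underlying mechanism can be illustrated, very loosely, with HTTP/1.1 keep-alive: many requests reuse one TCP connection to the backend instead of opening a new one each time. The backend address and paths below are placeholders:

import http.client

# One persistent HTTP/1.1 connection to an assumed backend.
backend = http.client.HTTPConnection("127.0.0.1", 8080)

# Requests that arrived from different clients could each be written to this
# one socket in turn, rather than opening a new TCP connection per request.
for path in ("/a", "/b", "/c"):
    backend.request("GET", path)
    resp = backend.getresponse()
    resp.read()          # drain the body so the connection can be reused
    print(path, resp.status)

backend.close()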

TCP buffering: the load balancer can buffer responses from the server and spoon-feed the data out to slow clients, allowing the web server to free a thread for other tasks faster than it would if it had to send the entire response to the client directly.

Direct Server Return: an option for asymmetrical load distribution, where request and reply have different network paths.


Health checking: the balancer polls servers for application layer health and removes failed servers from the pool.
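A minimal health-check poll might look like the following sketch; the health-check URLs and timeout are invented for illustration:

import urllib.request

# Assumed health endpoints exposed by each backend.
backends = {
    "web1": "http://10.0.0.1:8080/healthz",
    "web2": "http://10.0.0.2:8080/healthz",
}

def healthy_backends(backends, timeout=2.0):
    """Return the names of backends whose health endpoint answers 200."""
    alive = []
    for name, url in backends.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    alive.append(name)
        except OSError:
            pass  # connection refused, timeout, etc.: leave out of the pool
    return alive

print(healthy_backends(backends))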

HTTP caching: the balancer stores static content so that some requests can be handled without contacting the servers.

Content filtering: some balancers can arbitrarily modify traffic on the way through.

HTTP security: some balancers can hide HTTP error pages, remove server identification headers from HTTP responses, and encrypt cookies so that end users cannot manipulate them.

Priority queuing: also known as rate shaping, the ability to give different priority to different traffic.

Content-aware switching: most load balancers can send requests to different servers based on the URL being requested, assuming the request is not encrypted (HTTP) or, if it is encrypted (via HTTPS), that the HTTPS request is terminated (decrypted) at the load balancer.
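A sketch of URL-based switching; the path prefixes and server pools are invented for illustration:

# Route by URL path prefix once the request is readable (HTTP, or HTTPS
# already terminated at the balancer).
routes = {
    "/images/": ["img1", "img2"],         # static content pool
    "/api/":    ["app1", "app2", "app3"], # application pool
}
default_pool = ["web1", "web2"]

def pool_for(path):
    """Pick the server pool whose prefix matches the requested path."""
    for prefix, pool in routes.items():
        if path.startswith(prefix):
            return pool
    return default_pool

print(pool_for("/images/logo.png"))  # -> image pool
print(pool_for("/api/orders/42"))    # -> application pool
print(pool_for("/index.html"))       # -> default pool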

Client authentication: authenticate users against a variety of authentication sources before allowing them access to a website.

Programmatic traffic manipulation: at least one balancer allows the use of a scripting language to allow custom balancing methods, arbitrary traffic manipulations, and more.

Firewall: direct connections to backend servers are prevented, for network security reasons. A firewall is a set of rules that decide whether traffic may pass through an interface or not.

Intrusion prevention system: offers application layer security in addition to the network/transport layer security offered by a firewall.

Application Development & Application Server Load Balancing

Existing middleware-based load balancing services do not adequately address several key requirements such as server-side transparency, centralized load balancing, support for stateless replication, and load/health monitoring. This forces continuous re-development of application-specific load balancing services. Not only does re-development increase deployment costs of distributed applications, it also increases the potential of producing non-optimal load balancing implementations, since proven load balancing service optimizations cannot be reused directly.

Application delivery controllers and server load balancers have emerged as one of the most important technologies in solving the problem of performance and accessibility for distributed application systems. In its most basic form, a load balancer provides the ability to direct application users to the best performing, accessible server. Should one of the servers (or applications on that server) become inaccessible, the load balancer will take that server off-line, while automatically re-routing users to other functioning servers. In addition, using various adaptive load balancing algorithms, an intelligent load balancer can distribute users to servers that offer the best possible performance by dynamically interrogating key server elements such as the number of concurrent connections and CPU/memory utilization. To further enhance the user experience, advanced load balancers can provide SSL acceleration by offloading encryption/decryption processes from the application servers, thereby dramatically increasing their performance, while decreasing the time and costs associated with certificate management.
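One adaptive policy along these lines, sketched below, picks the server with the fewest concurrent connections while penalizing servers that are nearly CPU-bound; the metrics and weighting are invented for illustration and do not describe any specific product's algorithm:

# Metrics a balancer might gather from each server (illustrative numbers).
servers = {
    "app1": {"connections": 120, "cpu": 0.55},
    "app2": {"connections": 80,  "cpu": 0.90},
    "app3": {"connections": 95,  "cpu": 0.40},
}

def best_server(servers):
    """Prefer few connections, penalizing servers that are nearly CPU-bound."""
    def score(item):
        name, metrics = item
        return metrics["connections"] * (1.0 + metrics["cpu"])
    return min(servers.items(), key=score)[0]

print(best_server(servers))  # app3: moderate connections, low CPU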


In general terms, use of external application delivery controllers and intelligent load balancers can provide the following benefits:

Adaptive load balancing appliances can be used for a larger range of distributed systems, since they need not be designed for any specific type of application.

Since a single load balancing appliance can be used for many types of applications, the cost of developing a load balancing service for specific types of applications can be avoided, thereby reducing deployment costs.

Server load balancing

Availability and scalability

Server load balancing distributes service requests across a group of real servers and makes those servers look like a single big server to the clients. Often dozens of real servers are behind a URL that implements a single virtual service.

How does this work? In a widely used server load balancing architecture, the incoming request is directed to a dedicated server load balancer that is transparent to the client. Based on parameters such as availability or current server load, the load balancer decides which server should handle the request and forwards it to the selected server. To provide the load balancing algorithm with the required input data, the load balancer also retrieves information about the servers' health and load to verify that they can respond to traffic. Figure 1 illustrates this classic load balancer architecture.

    Figure 1. Classic load balancer architecture (load dispatcher)

The load-dispatcher architecture illustrated in Figure 1 is just one of several approaches. To decide which load balancing solution is the best for your infrastructure, you need to consider availability and scalability.

Availability is defined by uptime, the time between failures. (Downtime is the time to detect the failure, repair it, perform required recovery, and restart tasks.) During uptime the system must respond to each request within a predetermined, well-defined time. If this time is exceeded, the client sees this as a server malfunction. High availability, basically, is redundancy in the system: if one server fails, the others take over the failed server's load transparently. The failure of an individual server is invisible to the client.


Scalability means that the system can serve a single client, as well as thousands of simultaneous clients, by meeting quality-of-service requirements such as response time. Under an increased load, a highly scalable system can increase the throughput almost linearly in proportion to the power of added hardware resources.


    Weighted Round Robin predictor

Like the Round Robin predictor, the Weighted Round Robin predictor treats all servers equally regardless of the number of connections or response time. It does, however, use a configured weight value that determines the number of times within a sequence that each server is selected in relation to the weighted values of the other servers. For example, in a simple configuration with two servers where the first server has a weight of 4 and the second server has a weight of 2, the sequence of selection would occur as follows:

1. The first request is sent to Server1.
2. The second request is sent to Server2.
3. The third request is sent to Server1.
4. The fourth request is sent to Server2.
5. The fifth request is sent to Server1.
6. The sixth request is sent to Server1.
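One way to reproduce that selection order is the simple generator sketched below; it is an illustration only, and real products may interleave the weighted picks differently. The weights match the example above (Server1: 4, Server2: 2).

from itertools import islice

def weighted_round_robin(weights):
    """Yield server names so each gets picks proportional to its weight."""
    remaining = dict(weights)
    while True:
        progressed = False
        for server in weights:
            if remaining[server] > 0:
                remaining[server] -= 1
                progressed = True
                yield server
        if not progressed:              # every weight spent: start a new cycle
            remaining = dict(weights)

picks = weighted_round_robin({"Server1": 4, "Server2": 2})
print(list(islice(picks, 6)))
# ['Server1', 'Server2', 'Server1', 'Server2', 'Server1', 'Server1']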