The Equalizer Application Delivery Controller (ADC) is a high-performance switch that optimizes
the availability, user experience, and performance of mobile, cloud-based, and enterprise
applications while increasing server efficiency and reducing cost and complexity in the data
center. It features:
• Intelligent load balancing based on multiple, user-configurable criteria
• Non-stop availability with no single point of failure, through the use of redundant servers in a cluster and the optional addition of a failover (or backup) Equalizer
• Layer 7 content-sensitive routing
• Connection persistence using cookies or IP addresses
• Real-time server and cluster performance monitoring
• Server and cluster administration from a single interface
• SSL acceleration (on Equalizer models with Xcel SSL Hardware Acceleration)
• Data compression (on Equalizer models with Express Hardware GZIP Compression)
The following typographical conventions appear throughout this guide:
• Text in “double quotes” indicates the introduction of a new term.
• Italic text is used primarily to indicate variables in command lines, and is also used to emphasize concepts while discussing Equalizer operation.
• Boldface text highlights GUI screen elements: labels, buttons, tabs, icons, etc., as well as data the user must type into a GUI element.
• Courier text denotes computer output: messages, commands, file names, directory names, keywords, and syntax exactly as displayed by the system.
• Bold courier text is text the user must type at the CLI prompt. Bold courier text in brackets indicates a keyboard key or key sequence that must be typed.
• Bold text sequences such as “Cluster > Configuration > Settings” indicate the GUI controls a user needs to click to display the GUI form relevant to the task at hand. In the above example, the user would click on the Equalizer host name displayed at the top of the left navigational tree, click on the Configuration tab in the right pane, and then click on the Settings tab.
1. Numbered lists show steps that you must complete in the numbered order.
• Bulleted lists identify items that you can address in any order.
Note - A note box in the margin cites the source of information or provides a brief explanation that supports a specific
statement but is not integral to the logical flow of the text.
The symbol on the left emphasizes a critical note or caution.
These instructions are part of the product documentation delivered with Equalizer’s browser-based GUI. You can display the appropriate manual section for any interface screen by selecting Help > Context help from the menu at the top of the interface. The Help menu also contains links to the Release Notes for the currently running software version, and other documentation.

Hard copy documentation provided with every Equalizer includes the Quick Start Guide and the Basic Configuration Guide. These two documents are designed to help you get Equalizer out of the box and working with your first virtual clusters. The Basic Configuration Guide also includes a Resource CD with copies of all product documentation, including support documents that help you configure Equalizer for a variety of environments.
Register today to get access to the Fortinet Support Portal:
https://support.fortinet.com
Registration provides you with a login so you can access these benefits:
• Support FAQs: answers to our customers’ most common questions.
• Moderated Customer Support Forum: ask questions and get answers from our support staff and other Equalizer users.
• Software upgrades and security patches: access to the latest software updates to keep your Equalizer current and secure.
• Online device manuals, supplements, and release notes: the latest Equalizer documentation and updates.
• Links to additional resources, and more.
Registration details can be found in "Registering Your Product" on page 74.
Intelligent Load Balancing .......... 24
Real-Time Server Status Information .......... 26
Network Address Translation and Spoofing .......... 27
Load Balancing .......... 29
How a Server is Selected .......... 31
Layer 7 Load Balancing and Server Selection .......... 34
Persistence .......... 35
Why a Server May Not Be Selected .......... 38
The Equalizer appliance functions as a gateway to one or more sets of servers organized into
virtual clusters. When a client submits a request to a site that the appliance manages, it identifies
the virtual cluster for which the request is intended, determines the server in the cluster that will
be best able to handle the request, and forwards the request to that server for processing.
To route the request, the appliance modifies the header of the request packet with the appropriate
server information and forwards the modified packet to the selected server. Depending on the
cluster options chosen, it may also modify the headers in server responses on the way back to the
client.
Equalizer supports clusters that route requests based on either Layer 4 (TCP or UDP) or Layer 7
(HTTP or HTTPS) protocols. Layer 4 is also referred to as the Transport Layer, while Layer 7 is
referred to as the Application Layer. These terms come from the OSI and TCP/IP Reference
Models, abstract models for network protocol design.
In general, Layer 4 clusters are intended for configurations where routing by the destination IP
address of the request is sufficient and no examination of the request headers is required. Layer 7
clusters are intended for configurations where routing decisions need to be made based on the
content of the request headers. The appliance evaluates and can modify the content of request
headers as it routes packets to servers; in some cases, it can also modify headers in server
responses on their way back to the client.
Basic Capabilities of Cluster Types Supported by Equalizer

• Load balancing policies - All load balancing policies are available to every cluster type (L4 UDP, L4 TCP, L7 HTTP, and L7 HTTPS).
• Server failure detection (probes) - ICMP, TCP, and Health Check probes for L4 UDP clusters; ICMP, TCP, ACV, and Health Check probes for L4 TCP, L7 HTTP, and L7 HTTPS clusters.
• Persistence - Based on the client IP address for L4 clusters; using cookies for L7 HTTP and HTTPS clusters.
• Server selection by request content (i.e., Match Rules) - Not available for L4 clusters; load is balanced according to the current load balancing policy. Available for L7 HTTP and HTTPS clusters.
• Load balanced protocols - UDP for L4 UDP clusters; TCP for L4 TCP clusters; HTTP and HTTPS for L7 clusters.
• NAT and spoofing - Supported by all cluster types.
Regardless of cluster type, the appliance uses intelligent load balancing algorithms to determine
the best server to receive a request. These algorithms take into account the configuration options
set for the cluster and servers, real-time server status information, and information from the
request itself. For Layer 7 clusters, user-defined match rules can also be used to determine the
route a packet should take.
Equalizer gathers real-time information about a server’s status using ICMP Probes, TCP Probes,
Active Content Verification (ACV), and Server Agents. ICMP and TCP Probes are the default
probing methods.
ICMP Probes use Internet Control Message Protocol to send an "Echo request" to the server, and
then wait for the server to respond with an ICMP "Echo reply" message (i.e., the Unix ping
command). ICMP is a Layer 3 protocol. ICMP probes can be disabled via a global flag.
TCP Probes establish (and tear down) a TCP connection between the appliance and the server in a
typical Layer 4 exchange of TCP SYN, ACK, and FIN packets. If the connection cannot be
completed, the appliance considers the server down and stops routing requests to it. TCP probes
cannot be disabled.
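For readers who want a concrete picture of what a TCP probe does, the following minimal sketch (in Python, with a hypothetical server address, port, and timeout) performs the same kind of connect-and-teardown check; it illustrates the technique only and is not Equalizer’s implementation.

    import socket

    def tcp_probe(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port can be completed."""
        try:
            # Complete the SYN/SYN-ACK/ACK handshake, then close immediately.
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            # Connection refused, timed out, or otherwise failed.
            return False

    # Example: probe a hypothetical server on port 80.
    print("up" if tcp_probe("10.0.0.11", 80) else "down")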
Active Content Verification (ACV) provides an optional method for checking the validity of a
server’s response using Layer 7 network services that support a text-based request/response
protocol, such as HTTP. When you enable ACV for a cluster, the appliance requests data from each
server in the cluster (using an ACV Probe string) and verifies the returned data (against an ACV
Response string). If it receives no response, or the response string is not found in the response, the
verification fails and the appliance stops routing new requests to that server. See Active Content
Verification (ACV) Probes for more information.
Note - ACV is not supported for Layer 4 UDP clusters.
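To make the probe/response mechanism concrete, the sketch below (in Python) sends a hypothetical ACV Probe string (“GET /index.html HTTP/1.0”) and checks the reply for a hypothetical ACV Response string (“200 OK”); the strings you actually configure depend on your application, and this is an illustration of the technique rather than Equalizer’s implementation.

    import socket

    def acv_check(host: str, port: int, probe: str, expected: str,
                  timeout: float = 5.0) -> bool:
        """Send a text probe and verify that the expected string appears in the reply."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(probe.encode() + b"\r\n\r\n")
                # Read the first chunk of the response and look for the expected string.
                reply = sock.recv(4096).decode(errors="replace")
                return expected in reply
        except OSError:
            return False

    # Hypothetical ACV Probe and Response strings for an HTTP service.
    print(acv_check("10.0.0.11", 80, "GET /index.html HTTP/1.0", "200 OK"))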
Server Agent Probes enable the appliance to communicate with a user-written program (the
agent) running on the server. A server agent is written to open a server port and, when the
appliance connects to the port, the server agent responds with an indication of the current server
load and performance. This enables adjustment of the dynamic weights of the server according to
detailed performance measurements performed by the agent, based on any metrics available on
the server. If the server is overloaded and you have enabled server agent load balancing, the
appliance reduces the server’s dynamic weight so that the server receives fewer requests. The
interface between the appliance and server agents is simple and well-defined. Agents can be
written in any language supported on the server (e.g., Perl, C, shell script, or JavaScript). See
Simple Health Checks and Load Balancing Policies for more information.
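The sketch below (in Python) shows the general shape of such an agent: it listens on a port and, when the appliance connects, writes back a load figure and closes the connection. The port number, the way the load figure is derived, and the response format are assumptions made for illustration; consult the server agent documentation for the exact protocol your Equalizer software version expects.

    import os
    import socketserver

    AGENT_PORT = 1510  # hypothetical port; must match the agent port configured on Equalizer

    class AgentHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Derive a simple load figure from the 1-minute load average;
            # any metric available on the server could be used instead.
            load = min(int(os.getloadavg()[0] * 25), 100)
            # Report the load and close; Equalizer reads this value when it connects.
            self.wfile.write(f"{load}\n".encode())

    if __name__ == "__main__":
        with socketserver.TCPServer(("", AGENT_PORT), AgentHandler) as server:
            server.serve_forever()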
For those who have one or more VMware ESX Servers, VLB can be configured to use VMware’s
status reporting to determine server status, and can also be configured to automatically manage
VMware servers based on status information obtained from VMware.
The servers load balanced by Equalizer provide applications or services on specific IP addresses
and ports, and are organized into virtual clusters, each with its own IP address. Clients send
requests to the cluster IP addresses on the appliance instead of sending them to the IP addresses
of the servers.
Central to the operation of any load balancer is the Network Address Translation (NAT)
subsystem. On Equalizer, NAT is used as follows:
1. When Equalizer receives a client packet, it always translates the destination IP (the cluster
IP) to the IP address of one of the server instances in a server pool. The server IP used is
determined by the cluster’s load balancing settings.
2. Depending on the setting of the cluster spoof option, Equalizer may also perform Source
NAT, or SNAT.
When the spoof option is enabled on a cluster, then SNAT is disabled: the NAT subsystem
leaves the client IP address as the source IP address in the packet it forwards to the server.
For this reason, the servers in a cluster with spoof enabled are usually configured to use
Equalizer’s IP address as their default gateway to ensure that all responses go through the
appliance (otherwise, the server would attempt to respond directly to the client IP).
When the spoof option is disabled on a cluster, then SNAT is enabled. Equalizer translates
the source IP (the client IP) to one of the appliance’s IP addresses before forwarding packets to a server. The servers will send responses back to the appliance’s IP (so it is usually
not necessary to set the appliance as the default gateway on the servers when spoof is disabled).
Match rules can be used to selectively apply the spoof option to client requests; this is sometimes called selective SNAT. See "Creating a New Match Rule" on page 404. An example showing how the spoof setting affects address translation appears after this list.
3. When a server sends a response to a client request through Equalizer, the NAT subsystem
always translates the source IP in the response packets (that is, the server IP) to the cluster
IP to which the client originally sent the request. This is necessary since the client sent its
original request to the cluster IP and will not recognize the server’s IP address as a
response to its request -- instead, it will drop the packet.
4. NAT can also be enabled for packets that originate on the servers behind Equalizer and are
destined for subnets other than the subnet on which the servers reside -- on the appliance,
this is called outbound NAT. This is usually required in dual network mode when reserved IP
addresses (e.g., 10.x.x.x, 192.168.x.x) are being used on the internal interface, so that the
recipients do not see reserved IP addresses in packets originating from the servers. When
the global outbound NAT option is enabled, the appliance translates the source IP in packets
from the servers that are not part of a client connection to the appliance’s Default VLAN
IP address (the external interface IP address on the E250GX and legacy ‘si’ systems), or to
the address specified in the server’s Outbound NAT tab. Enabling outbound NAT therefore
has a performance cost, since the appliance examines every outbound packet.
Note - When Equalizer is in single network mode, outbound NAT should be disabled. Since Equalizer resides on a
single subnet, outbound NAT is not needed, and may cause unexpected behavior.
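To illustrate the effect of the spoof setting, consider a hypothetical setup in which a client at 203.0.113.10 sends a request to a cluster IP of 192.0.2.80 and Equalizer selects the server 10.0.0.11 (all addresses here are examples only):

    spoof enabled (no SNAT):   the client packet 203.0.113.10 -> 192.0.2.80 is forwarded
                               to the server as 203.0.113.10 -> 10.0.0.11, so the server
                               must route its reply back through Equalizer (typically by
                               using Equalizer as its default gateway).

    spoof disabled (SNAT):     the same packet is forwarded as <Equalizer IP> -> 10.0.0.11,
                               so the server naturally replies to Equalizer.

In both cases the response finally returned to the client carries the cluster IP (192.0.2.80) as its source address.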
When Equalizer receives a packet that is not destined for a virtual cluster IP address, a failover IP
address, a client IP address on an open connection, or one of its own IP addresses, the appliance
passes the packet through to the destination network unaltered.
Load balancing is based on the policy selected. The policies can be split up into two categories:
1. round robin
2. everything else
Round robin simply selects the next server in the list with no regard for how busy that server may
be.
Other load balancing policies use proprietary algorithms to compute the load of each server and
then select the server with the least load.
Although the load balancing policies are proprietary, they use the following factors in their
calculation:
• Active connections - The number of connections a server currently has active and the number of connections that it tends to have open.
• Connection latency - The amount of time that it takes a server to respond to a client request.
• Health check performance values - Depending on the health checks configured, these may not be used at all, or they can completely define how the load is calculated.
Once a load is calculated, Equalizer distributes incoming requests using the relative loads as
weights.
For example, if the calculated loads are equal (sv00 = 50, sv01 = 50, sv02 = 50), the request
distribution will be approximately equal. If the calculated loads are uneven (sv00 = 100,
sv01 = 50, sv02 = 25), requests are distributed accordingly: sv01 is twice as loaded as sv02, so it
will receive about half as many requests.
The load calculations happen approximately every 10 seconds and server weights are adjusted
accordingly. During that 10 second interval, the relative server loads remain the same, but probe
and health check information is collected about the servers so that it can be used for the next
calculation.
The load calculation works the same way for Layer 4 and Layer 7 clusters; it is performed at the
server-pool level, and server pools can be shared between all cluster types.
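The algorithms themselves are proprietary, but the distribution step can be pictured with a short sketch: given calculated per-server loads, requests are assigned with probability inversely proportional to load, so a server with twice the load of another receives about half as many requests. The sketch below (in Python, with hypothetical load values) only illustrates this weighting idea; it is not Equalizer’s actual algorithm.

    import random

    def pick_server(loads: dict[str, float]) -> str:
        """Pick a server with probability inversely proportional to its load."""
        servers = list(loads)
        weights = [1.0 / loads[s] for s in servers]  # lower load -> higher weight
        return random.choices(servers, weights=weights, k=1)[0]

    # Hypothetical calculated loads: sv01 is twice as loaded as sv02,
    # so over many requests it receives roughly half as many.
    loads = {"sv00": 100, "sv01": 50, "sv02": 25}
    counts = {s: 0 for s in loads}
    for _ in range(10_000):
        counts[pick_server(loads)] += 1
    print(counts)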
There are two additional variables for load balancing:
• Hot Spare - If a server instance (in a server pool) is marked as a Hot Spare, it is not included in the pool of servers to select from unless every other non-hot-spare server is down. Clients with a persistent connection to this server are still placed back on it.
• Quiesce - If a server instance (in a server pool) is marked as Quiesce, it is not included in the pool of servers to select from. Only previously existing (persistent) connections continue to be placed on this server.