The products, specifications, and other technical information regarding the products
contained in this document are subject to change without notice. All information in this
document is believed to be accurate and reliable, but is presented without warranty of any
kind, express or implied, and users must take full responsibility for their application of any
products specified in this document. Avaya disclaims responsibility for errors which may
appear in this document, and it reserves the right, in its sole discretion and without notice, to
make substitutions and modifications in the products and practices described in this
document.
Avaya™, Cajun™, CajunRules!™, CajunDocs™, OpenTrunk™, P550™, LANstack™, and
Avaya MultiService Network Manager are registered trademarks and trademarks of Avaya
Inc.
ALL OTHER TRADEMARKS MENTIONED IN THIS DOCUMENT ARE PROPERTY OF THEIR
RESPECTIVE OWNERS.
Preface
Welcome to Avaya P330 Load Balancing Manager. This chapter provides
an introduction to this guide. It includes the following sections:
•The Purpose of This Guide - A description of the goals of the guide.
•Who Should Use This Guide - The intended audience of this guide.
•Organization of This Guide - A brief description of the subjects
contained in the various sections of this guide.
The Purpose of This Guide
This guide contains the information needed to use Avaya P330 Load
Balancing Manager efficiently and effectively.
Who Should Use This Guide
This guide is intended for use by network managers familiar with
network management and its fundamental concepts.
Organization of This Guide
This guide is structured to reflect the following conceptual divisions:
•Preface - This section describes the guide’s purpose, intended
audience, and organization.
•Overview of Load Balancing - This section provides an
overview of the terms and concepts used in load balancing.
•Getting Started with Avaya Load Balancing Manager - This
section provides an overview of the user interface and
instructions on how to start and use Avaya P330 Load Balancing
Manager.
•Configuring Firewall Load Balancing - This section describes
how to configure Avaya P330 Load Balancing Manager to
perform Firewall Load Balancing.
•Configuring Server Load Balancing - This section describes
how to configure Avaya P330 Load Balancing Manager to
perform Server Load Balancing.
•Configuring Application Redirection - This section describes
how to configure Avaya P330 Load Balancing Manager to
perform Application Redirection.
•Real Server Groups and Real Servers - This section describes
how to configure Real Server Groups and Real Servers for the
various load balancing applications.
•Application Editor Tool - This section provides instructions on
how to use the Application Editor Tool and how to customize
application protocols.
•Menus - The full structure of the menus in Avaya P330 Load
Balancing Manager.
•Error Messages - A full explanation of the error messages that
appear in Avaya P330 Load Balancing Manager.
Chapter 1: Overview of Load Balancing
This section describes load balancing and includes the following topics:
•What is Load Balancing - A general overview of load balancing.
•Load Balancing Elements - A description of the conceptual
load balancing elements.
•Firewall Load Balancing (FWLB) - An overview of Firewall
Load Balancing, including descriptions and configuration
examples for routing and bridging firewalls.
•Server Load Balancing (SLB) - An overview of Server Load
Balancing, including descriptions and examples of SLB with Full
and Half Network Address Translation (NAT).
•Application Redirection (AR) - An overview and description
of Application Redirection, including a description of Cache
Redirection.
•Combination of Applications - A description of how to
combine more than one load balancing application.
•Load Balancing Metrics - A description of the various metrics
used to direct traffic to different Real Servers.
•Health Check - A description of how health checks are
performed by the load balancer.
•Persistency - A description of session and client persistency and
how they are sustained.
•Additional Persistency Schemes - A description of backup
Real Servers and backup Real Server Groups.
What is Load Balancing
Load balancing technology allows system administrators to replace single
firewalls and servers with multiple firewall and server farms, achieving
the following goals:
•Improving resilience by removing single points of failure.
•Improving performance by utilizing multiple units instead of a
single one.
This improves the scalability and maintainability of the firewalls and
servers in the network.
The load balancer also serves as a ‘smart redirector’, allowing traffic
redirection, commonly known as Application Redirection. This allows
for:
•Invisibly intercepting web traffic and forwarding it to deployed
web caches.
•Redirecting specific application traffic to content inspection
engines.
•Policy based routing, providing routing based on application or
data source.
There are several different load balancing applications:
•Firewall Load Balancing (refer to “Firewall Load Balancing
(FWLB)” on page 4).
•Server Load Balancing (refer to “Server Load Balancing (SLB)” on
page 8).
•Application Redirection (refer to “Application Redirection (AR)”
on page 10).
Load Balancing Elements
There are several abstract load balancing elements:
•Real Server (RS) - An RS is a physical server that is associated
with a Real IP address. One or more RSs may belong to an RSG.
•Real Server Group (RSG) - An RSG is a logical grouping of Real
Servers used for load balancing. For example, for SLB, the load
balancer distributes packets to Real Servers belonging to a specific
RSG.
•Virtual Service - Virtual Services are abstract links to the RSGs
provided by a Virtual Server. For example, load-balanced
forwarding of HTTP or FTP packets is a Virtual Service.
•Virtual Server - A Virtual Server represents the server to the
outside world. It is associated with a Virtual IP address and
provides Virtual Services. For example, a load balancer that
intercepts traffic from the WAN acts as a Virtual Server.
Traffic from the WAN is directed to the Virtual Server. The Virtual Server
provides Virtual Services when transferring packets to the RSG, which is
comprised of RSs. The following figure illustrates the conceptual load
balancing model.
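These relationships can be pictured with a small data-model sketch (Python; the class and field names are illustrative assumptions, not product terminology):

    # Illustrative model of the load balancing elements described above.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class RealServer:                 # RS: a physical server with a Real IP address
        real_ip: str

    @dataclass
    class RealServerGroup:            # RSG: a logical grouping of Real Servers
        servers: List[RealServer] = field(default_factory=list)

    @dataclass
    class VirtualService:             # abstract link from a Virtual Server to an RSG
        name: str                     # e.g. "HTTP" or "FTP" forwarding
        group: RealServerGroup

    @dataclass
    class VirtualServer:              # represents the server farm to the outside world
        virtual_ip: str
        services: List[VirtualService] = field(default_factory=list)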
Figure 1-1. The Conceptual Load Balancing Model
Firewall Load Balancing (FWLB)
This section provides information about Firewall Load Balancing,
including a general overview and detailed information about routing
and bridging firewalls.
FWLB Overview
Firewall Load Balancing intercepts all traffic between the LAN and the
WAN, and dynamically distributes the load among the available
firewalls, based on FWLB configuration. Using FWLB, all of the firewalls
are utilized concurrently, providing overall improved firewall
performance, scalability and availability.
The firewalls are the Real Servers, and the group of firewalls is the Real
Server Group. The firewall group is associated with a Virtual Service,
which is a routing or bridging firewall.
The load balancer:
•Balances traffic across two or more firewalls (up to 1024) in your
network, allowing the firewalls to work in parallel.
•Maintains state information about the traffic flowing through it
and ensures that all traffic between specific IP address source and
destination pairs flows through the same firewall.
•Performs health checks on all paths through the firewalls. If any
path is not operational, the load balancer diverts traffic away
from that path, maintaining connectivity across the firewalls.
Often, two load balancers are needed to support FWLB. One device is
deployed on the LAN side (internal) of the firewalls and another on the
WAN side (external). If a Demilitarized Zone (DMZ) is implemented to
allow remote access, a third load balancer must be deployed on the DMZ
side of the network. Additional devices can be added to provide
redundancy, eliminating any device or path as a single point of failure.
Avaya P330 Load Balancing Manager supports both routing and bridging
firewalls. Routing firewalls may be transparent or non-transparent.
Benefits of FWLB
FWLB allows you to:
•Maximize firewall productivity.
•Scale firewall performance.
•Eliminate the firewall as a single point of failure.
Transparent Routing Firewalls
For transparent FWLB, the load balancer receives a packet, makes a load
balancing decision, and forwards the packet to a firewall. The firewall
does not perform NAT on the packets; the source and destination IP
addresses are not changed.
Two load balancers are required for transparent FWLB, one on each side
of the firewalls. One device intercepts traffic between the WAN and the
firewall, and the second device intercepts traffic between the LAN and
the firewall.
Transparent routing firewalls act as a “next hop” device from the
perspective of the load balancer. After a firewall is selected in a load
balancing decision, normal routing to that firewall takes place.
The load balancers ensure that all packets belonging to a session pass
through the same firewall in both directions. The devices select a firewall
based on a symmetric hash function of the source and destination IP
addresses. This ensures that packets traveling between the same source
and destination IP addresses traverse the same firewall.
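The idea of a symmetric hash can be sketched as follows; because the source and destination addresses are combined with an order-independent operation, both directions of a session select the same firewall. This is only an illustration, not the device's actual hash function:

    import ipaddress

    def select_firewall(src_ip, dst_ip, firewalls):
        # XOR of the two addresses is symmetric: f(src, dst) == f(dst, src).
        key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
        return firewalls[key % len(firewalls)]

    fws = ["fw-1", "fw-2", "fw-3"]
    # Packets of the same session traverse the same firewall in both directions.
    assert select_firewall("10.0.0.5", "192.0.2.9", fws) == select_firewall("192.0.2.9", "10.0.0.5", fws)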
The following figure illustrates transparent FWLB.
Figure 1-2. Transparent Firewall Load Balancing
The load balancer enables you to route packets to a DMZ. A DMZ is a
portion of the client’s network, apart from the client’s LAN, where
remote access is allowed. After creating a DMZ, a third load balancer is
installed to route packets to the DMZ. The following figure illustrates
transparent FWLB with a DMZ.
Figure 1-3. Transparent FWLB With DMZ
Non-Transparent Routing Firewalls
Non-transparent routing firewalls are firewalls that support dynamic
NAT.
For non-transparent FWLB, the load balancer receives an outgoing
packet, makes a load balancing decision, and forwards the packet to a
firewall. The firewall keeps a bank of IP addresses and replaces the
source IP address of the outgoing packet with a unique, arbitrary IP
address from the bank. The firewall then forwards the packet to an edge
router which routes it to the correct destination on the WAN.
For incoming packets, the unique NAT address is used as a destination IP
address to access the same firewall. The firewall performs reverse NAT by
replacing the NAT destination address with the actual destination
address (the client IP address), and then forwards the packet to the load
balancer, which routes the packet to its destination. No load balancing is
performed on incoming packets.
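The firewall-side address handling can be pictured roughly as below (a simplified sketch under assumed names; real dynamic NAT also tracks ports, timeouts, and address reuse):

    # Hypothetical sketch of the dynamic NAT behaviour described above.
    nat_pool = ["198.51.100.1", "198.51.100.2"]    # the firewall's bank of NAT addresses
    nat_table = {}                                 # NAT address -> original source (client) IP

    def outbound(src_ip, dst_ip):
        nat_ip = nat_pool.pop()                    # take a unique address from the bank
        nat_table[nat_ip] = src_ip                 # remember the mapping for the reply
        return nat_ip, dst_ip                      # outgoing packet leaves with the NAT source IP

    def inbound(src_ip, dst_ip):
        client_ip = nat_table[dst_ip]              # reverse NAT on the returning packet
        return src_ip, client_ip                   # forwarded on toward the original client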
For non-transparent FWLB, only one load balancer is required. The
device is positioned on the LAN (internal) side of the firewalls. Since the
firewalls perform NAT, a load balancer is not needed between the WAN
and the firewalls.
In transparent FWLB, persistency is ensured by the load balancer. In
non-transparent FWLB, the firewalls ensure persistency through NAT,
and there is no need for the load balancer to intervene.
The following figure illustrates non-transparent FWLB.
Figure 1-4. Non-Transparent Firewall Load Balancing
Bridging Firewalls
Bridging firewalls are firewalls that do not perform forwarding at the IP
address layer, but rather appear as transparent bridges. Bridging firewalls
are transparent to devices inside and outside of the secured network.
The bridging firewalls do not have IP or MAC addresses to which traffic
is directed. Therefore, the firewalls must physically appear on the traffic
path.
For bridging FWLB, the load balancers must be positioned on both sides
of the firewalls. Each device load balances between IP address interfaces
of the peer device behind the firewall. For this to work, each firewall
must reside in a different VLAN and subnet, and the physical ports
connected to the firewalls must be on different VLANs as well. In
addition, for each VLAN, both load balancers must be in the same
subnet.
Each load balancer interface and the firewall connected to it reside in a
separate VLAN. This ensures persistency since all the traffic through a
particular firewall is contained in the firewall’s VLAN.
The following figure illustrates bridging FWLB.
Figure 1-5. Bridging Firewall Load Balancing
Server Load Balancing (SLB)
This section provides information about Server Load Balancing,
including a general overview and detailed information about SLB.
SLB Overview
Server Load Balancing intercepts all traffic between clients and servers,
and dynamically distributes the load among the available servers, based
on the SLB configuration.
In a non-balanced network, each server provides access to specific
applications or data. Some of these applications may be in higher
demand than others. Servers that provide applications with higher
demand are over-utilized while other servers are under-utilized. This
causes the network to perform below its optimal level.
Load balancing provides a solution by balancing the traffic among
several servers which all have access to identical applications and data.
This involves intercepting all traffic between clients and load-balanced
servers and dynamically distributing the load according to configured
schemes (metrics).
The load balancer acts as a Virtual Server to the outside world (the
WAN) and has a Virtual IP address.
Benefits of SLB
SLB improves network performance by:
•Minimizing server response time.
•Maximizing server availability.
•Increasing server utilization and network bandwidth. This is
accomplished by balancing session traffic between the available
servers, according to rules established during configuration.
•Increasing reliability. If any server fails, the remaining servers
continue to provide services seamlessly.
•Increasing scalability. Server configuration can be performed
without disrupting the network.
Server Load Balancing
The server load balancer changes either the source or the destination IP
address, depending on the packet's direction. When a packet arrives from a client to a server, the load
balancer changes the destination IP from the Virtual IP address to the
Real IP address. When a packet is sent from a server to a client, the load
balancer changes the source IP address from the Real IP address to the
Virtual IP address.
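In other words, the load balancer performs a half-NAT rewrite in each direction. A minimal sketch (the Virtual IP, Real IP, and function names are assumed for illustration):

    VIRTUAL_IP = "203.0.113.10"                    # example Virtual IP address

    def client_to_server(packet, real_ip):
        packet["dst"] = real_ip                    # Virtual IP destination -> chosen Real IP
        return packet

    def server_to_client(packet):
        packet["src"] = VIRTUAL_IP                 # Real IP source -> Virtual IP
        return packet

    pkt = {"src": "192.0.2.7", "dst": VIRTUAL_IP}
    pkt = client_to_server(pkt, "10.1.1.21")       # forwarded to the selected Real Server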
The following figure illustrates Server Load Balancing:
Figure 1-6. Server Load Balancing
Direct Server Return (Triangulation)
Direct server return, or triangulation, is an additional implementation of
SLB. In standard SLB, the load balancer intercepts traffic between the
servers and clients in both directions. In triangulation, load balancing is
performed only on traffic from the clients to the server. Traffic from the
servers is returned to the client directly through a router without any
need for load balancing intervention.
For triangulation, the Real Servers must be specially configured. The
Real Servers must also be capable of receiving packets with the Virtual IP
address as the destination IP address, and of sending packets with the
Virtual IP address as the source IP address. The Virtual IP address should
be configured in the Real Servers as a “loopback” IP address, and the
router (not the load balancer) should be configured as the servers’
default gateway.
When the load balancer detects that a Real Server supports triangulation
and is configured properly, it does not change the destination IP address
of the packet. The Virtual IP address is left as the destination IP address,
and the packet does not undergo NAT.
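The difference from standard SLB can be reduced to a short sketch (illustrative only; the parameter names are assumptions):

    def forward_to_real_server(packet, real_ip, supports_triangulation):
        if supports_triangulation:
            # Triangulation: the Virtual IP stays as the destination and no NAT is done;
            # the Real Server (Virtual IP on its loopback) replies to the client directly
            # through the router, bypassing the load balancer.
            return packet
        packet["dst"] = real_ip                    # standard SLB: rewrite to the Real IP
        return packet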
Application Redirection (AR)
This section provides information about Application Redirection,
including a general overview, and detailed information about Cache
Redirection.
AR Overview
With the growing importance of the Internet as a source of information,
an organization's LAN may suffer from a degradation of performance
due to congestion of the router connecting the network to the Internet.
Since much information retrieved from the Web is either repeatedly
requested by a user or requested by multiple users, many organizations
implement a local caching mechanism to prevent unnecessary Internet
traffic. The local caches must be on the traffic path between the client
and the Internet router. As a result, all traffic, even traffic not intended
for the cache, passes through the cache.
Load balancing solves this problem by redirecting packets from their
original destination to an alternative server based on the Application
Redirection configuration. Cache Redirection is the most common
implementation of Application Redirection.
Benefits of AR
By redirecting client requests to a local cache or application server, you
can increase the speed at which clients access information and free up
valuable network bandwidth. Additional benefits include:
•Directing only suitable traffic to the local cache.
•Connecting and load balancing multiple caches.
•Performing the redirection process in a way that is transparent to
the client.
•Allowing redundant caches to be configured.
Cache Redirection
For Cache Redirection, the load balancer is positioned on the traffic
route and redirects traffic from the original destination to an alternative
cache server. The redirection process involves the following steps:
1. The load balancer checks whether the packet characteristics comply with one of the defined filter rules. The user configures rules to define which clients or destinations are to be redirected to the cache.
2. The load balancer checks whether the application port is suitable for redirection (i.e., HTTP).
3. The load balancer routes the packet to the cache server instead of to the original destination on the Internet.
4. The cache checks if it has the relevant information. If it does, it forwards the cached information to the client. If it does not have the information, it retrieves the information from the Internet, saves it to the cache, and then forwards the information to the client.
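The decision part of this sequence (steps 1 to 3) can be sketched as follows; the filter-rule representation and names are assumptions for illustration:

    HTTP_PORT = 80

    def next_hop(packet, filter_rules, cache_ip):
        # Step 1: does the packet match one of the configured filter rules?
        matches = any(rule(packet) for rule in filter_rules)
        # Step 2: is the application port suitable for redirection (e.g. HTTP)?
        if matches and packet["dst_port"] == HTTP_PORT:
            return cache_ip                        # Step 3: route to the cache server
        return packet["dst_ip"]                    # otherwise keep the original destination

    rules = [lambda p: p["src_ip"].startswith("10.1.")]   # example rule: redirect clients in 10.1.0.0/16
    hop = next_hop({"src_ip": "10.1.2.3", "dst_ip": "198.51.100.80", "dst_port": 80},
                   rules, cache_ip="10.0.0.50")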
The load balancer supports transparent caches. A transparent cache is a
cache that is capable of accepting packets not addressed to its IP address.
The cache usually uses NAT in its IP address stack, so the higher layers
can process packets not addressed to the cache’s IP address.
The following figure illustrates Cache Redirection.
Figure 1-7. Cache Redirection
In this figure, the sequence of events is as follows:
1. The user issues an HTTP request. The source IP address is the
user’s IP address and the destination IP address is the Web server’s
IP address.
2. The load balancer routes the packet to the local cache. The packet
still has the Web server’s IP address as its destination IP address.
3. If the cache has the required page, the cache returns the page to
the load balancer with the destination IP address of the client and
the source IP address of the Web server. If the cache does not
have the required page, the cache returns the packet to the load
balancer, and the load balancer routes the packet to the Web
server.
4. On the way back from the Web server, the load balancer routes
the packet to the cache.
5. The cache saves the packet and routes it back to the load balancer.
6. The load balancer sends the page to the user.
A client's request for a Web page and the cache's request for a Web page
have the same source and destination IP addresses. To distinguish
between them, the load balancer uses separate VLANs for clients and the
cache. If the request is on the clients' VLAN, the load balancer forwards
the request to the cache. If the request is on the cache's VLAN, the load
balancer forwards the request to the WAN.
Similarly, the WAN’s return of a Web page and the cache's forwarding of
a Web page to a client have the same source and destination IP
addresses. To distinguish between them, the load balancer uses separate
VLANs for clients and the cache. If the response is on the cache’s VLAN,
the load balancer forwards the response to the cache. If the response is
on the clients' VLAN, the load balancer forwards the response to the
client.
Combination of Applications
You can enable the P333R-LB to use various applications concurrently.
For example, it is possible to configure the same P333R-LB to perform
Server Load Balancing for an Intranet web-server, Application
Redirection for web traffic that is Internet-bound, and Firewall Load
Balancing for traffic that is Internet-bound.
In some cases, the same “type” of traffic can be given two different
actions by the load balancer. In these situations, it is necessary to tell the
load balancer which action to choose. In the example described above,
web traffic to the intranet server can be configured to either be directed
to the web cache, or bypass the web cache and directly access the
Intranet server. The latter configuration will save the web cache
resources to deal with Internet-bound traffic.
You can specify the preferred action as one of the following:
•Configure SLB to take precedence over AR.
•Configure AR to take precedence over SLB.
•Configure AR filters to redirect traffic from client/server
addresses, using wildcards.
•Configure AR filters to specify which traffic not to redirect
(“no-ar” as service) from specific client/server addresses, using
wildcards.
Load Balancing Metrics
There are several methods, or metrics, that a load balancer can use to
distribute traffic among multiple servers, firewalls or caches. These
metrics tell the load balancer which Real Server should receive each
session.
Some commonly used metrics are:
•Round Robin
•Hash
•MinMiss Hash
•Weighted Real Servers
Round Robin
Using Round Robin, the load balancer issues sessions to each RS in turn.
The first RS in the group receives the first session, the second RS receives
the next session, and so on. When all the RSs have received a session, the
issuing process starts over with the first RS. Round Robin ensures that
each RS receives an equal number of sessions.
Hash
Using the Hash metric, sessions are distributed to RSs using a predefined
mathematical hash function. The hash function is performed on a
specified parameter. The source IP address, destination IP address, or
both are used as the hash function input.
The load balancer creates a list of all the currently available RSs. The
result of the hash function is used to select an RS from the list. Any
given parameter always gives the same hash result, providing natural
persistency.
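For example, with a toy hash on the source IP address (not the device's actual function), the same client always maps to the same RS:

    available = ["RS1", "RS2", "RS3"]

    def pick_rs(src_ip):
        h = sum(int(octet) for octet in src_ip.split("."))   # toy hash on the source IP
        return available[h % len(available)]

    # Any given parameter always gives the same hash result - natural persistency.
    assert pick_rs("10.0.0.7") == pick_rs("10.0.0.7")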
If an RS is removed or added to the group, persistency is broken. This
occurs since the order of the RSs in the list changes, but the hash still
points to the same list entries. The following figure illustrates how a loss
of persistency occurs when an RS becomes non-operational:
Figure 1-8. Hash Metric - Loss of Persistency
In the above figure, when Server 2 becomes non-operational, the list of
available servers is readjusted, causing a lack of persistency. However, if
Server 2 becomes operational again, the list of available servers is
restored to its original order, and persistency is recovered.
MinMiss Hash
MinMiss hash distributes sessions to RSs in the same way as the Hash
metric. However, MinMiss hash retains persistency even when an RS is
removed from the group. When an RS fails or is removed, the load
balancer does not change the position of all the RSs in the list. Instead, it
redistributes the remaining RSs to the list entries freed by the failing RS.
The following figure illustrates how persistency is retained when an RS
becomes non-operational.
Figure 1-9. MinMiss Metric - Persistency Retained
In the above figure, when Server 2 becomes non-operational, the list of
available servers is not readjusted. Only the list entries that are now
empty are replaced with other available servers. Therefore, persistency is
retained for all available servers. However, if Server 2 becomes
operational again, the list of available servers is recalculated so that the
smallest number of servers is affected. The list is not restored to its
original configuration. As a result, persistency is only partially recovered.
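The behaviour in the two figures can be contrasted with a small sketch (assumed table size and server names; a toy illustration, not the actual algorithm):

    # Bucket table built while RS1, RS2 and RS3 were all available.
    table = ["RS1", "RS2", "RS3", "RS1", "RS2", "RS3"]

    def minmiss_adjust(table, available):
        # Reassign only the entries that pointed at a failed RS; every other entry
        # keeps its RS, so persistency is retained for the surviving servers.
        return [rs if rs in available else available[i % len(available)]
                for i, rs in enumerate(table)]

    print(minmiss_adjust(table, ["RS1", "RS3"]))
    # ['RS1', 'RS3', 'RS3', 'RS1', 'RS1', 'RS3'] - only the slots of RS2 were reassigned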
Weighted Real Servers
You can assign weights to RSs to enable faster RSs to receive a larger
share of sessions. This minimizes overloading and maximizes
functionality.
If you assign a weight to an RS, the sessions are distributed to the RSs in
the metric chosen (Round Robin, Hash or MinMiss). However, the
weighted RS is assigned a larger share of sessions. For example, if you
assign a weight of 20 to one RS and leave the default weight (10) on the
second RS, the weighted RS receives 2 sessions for each session directed
to the second RS. This is useful for RSs with different bandwidths or
processor speeds.
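With Round Robin, for instance, a weight of 20 against the default 10 can be pictured like this (a sketch; normalizing by the smallest weight is an assumption made only for illustration):

    import itertools

    weights = {"RS-A": 20, "RS-B": 10}             # RS-A should get twice the share of RS-B
    base = min(weights.values())
    cycle = [rs for rs, w in weights.items() for _ in range(w // base)]
    # cycle == ['RS-A', 'RS-A', 'RS-B']: two sessions to RS-A for each session to RS-B
    rr = itertools.cycle(cycle)
    targets = [next(rr) for _ in range(6)]         # ['RS-A', 'RS-A', 'RS-B', 'RS-A', 'RS-A', 'RS-B']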
Health Check
The load balancer constantly checks the RSs to ensure that each RS is
accessible and operational. An RS that fails the health check is
automatically removed from the load balancer’s internal list of currently
available RSs, and traffic is redirected to other available RSs.
There are several types of health check methods that the load balancer
can use, including:
•ICMP Ping - Each RS is periodically pinged. If no answer is received, the RS is not operational.
•TCP Port Checking - A TCP connection is periodically opened to each RS, checking for successful completion of the connection (see the sketch below).
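The TCP port check, for example, amounts to attempting a connection and treating failure as "not operational". A minimal sketch (not the device's implementation; the timeout value is an assumption):

    import socket

    def tcp_port_check(ip, port, timeout=2.0):
        """Return True if a TCP connection to the RS completes within the timeout."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True                        # connection completed: RS is operational
        except OSError:
            return False                           # refused or timed out: RS fails the check

    if not tcp_port_check("10.1.1.21", 80):
        print("RS 10.1.1.21 failed the health check")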
For FWLB, checking the firewalls is insufficient. The health checks must
be performed on the entities beyond the firewalls as well. In order to
ensure that the health check packets traverse the same firewall in both
directions, the packet’s source and destination IP addresses should be the
IP addresses of the load balancer interfaces on each side of the firewall.
For each load balancer, both the local and remote addresses must be
configured. In addition, the load balancers on both sides of the firewall
must be configured symmetrically.
For non-transparent FWLB (with NAT), there is only one load balancer.
In this case, you must configure an IP address beyond the firewall as the
health check address. Like other non-transparent FWLB sessions, the
health check session returns through the same firewall according to the
NAT address it was given.
Persistency
Persistency is the maintenance of the connection between the server and
the client over multiple sessions. Persistency ensures that all traffic from
the client is directed to the same RS.
Persistency is achieved by using naturally persistent load balancing
metrics (such as Hash or MinMiss hash) or by forcing persistent load
balancing decisions on non-persistent load balancing metrics (such as
Round Robin). Persistency is forced by storing the history of the latest
decisions in a cache for a limited time, and then sending the packets to
the appropriate RS according to the previous load balancing decisions.
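Forcing persistency on a non-persistent metric can be sketched as a small decision cache (illustrative; the timeout value, entry ageing, and size limits are assumptions or omitted):

    import time

    PERSIST_TIMEOUT = 300.0                        # seconds an entry is kept (assumed value)
    decisions = {}                                 # client source IP -> (chosen RS, timestamp)

    def choose_rs(src_ip, fresh_pick):
        entry = decisions.get(src_ip)
        if entry and time.time() - entry[1] < PERSIST_TIMEOUT:
            return entry[0]                        # reuse the previous decision for this client
        rs = fresh_pick()                          # e.g. the next RS from a Round Robin cycle
        decisions[src_ip] = (rs, time.time())
        return rs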
Persistency is achieved by opening a new entry for a server group based
on the following:
•New entry on source IP address - All sessions from a specific
source are directed to the same RS. This is useful for applications
where client information must be retained on the RS between
sessions.
•New entry on destination IP address - All sessions to a
specific destination are directed to the same RS. This is useful for
caching applications to maximize successful cache hits when the
information is not duplicated between RSs.
•New entry on source IP and destination IP addresses - All
sessions from a given source to a given destination are directed to
the same RS. This is useful for Firewall Load Balancing, since it
ensures that the two unidirectional flows of a given session are
directed through the same firewall.
Additional Persistency Schemes
Using the P333R-LB, you can configure a Real Server to back up one or
more primary Real Servers. A backup Real Server is not used unless the
primary Real Server is down.
You can also configure a Real Server Group (RSG) to back up one or
more primary RSGs. A backup RSG can run a different service than the
primary RSG while providing backup to all of the primary RSG’s services.
Similar to the Real Server, the backup RSG is not used unless all Real
Servers in the RSG are down.
Chapter 2: Getting Started with Avaya P330 Load Balancing Manager
This chapter provides instructions on how to start Avaya Load Balancing
Manager and an overview of the user interface. It includes the following
topics:
•Starting Avaya Load Balancing Manager - Instructions on
how to start Avaya P330 Load Balancing Manager.
•The User Interface - An introduction to Avaya P330 Load
Balancing Manager’s user interface.
•Saving Configuration Changes - Instructions for applying and
committing changes to the load balancing configuration.
•Searching for Load Balancing Components - Instructions on
how to search for RSs and RSGs in Avaya P330 Load Balancing
Manager.
Starting Avaya Load Balancing Manager
To start Avaya P330 Load Balancing Manager for the Avaya P330:
1. Click the Load Balancing Manager tab in the Avaya P330 Manager.
A list of P333R-LB module IP addresses appears in the Tree View
of the Avaya P330 Manager.
* Note: In order for the Load Balancing Manager tab to appear, at least
one of the interfaces should be configured on the load
balancer. For more information, refer to the P333R-LB User Guide
or P333R-LB Quick Start.
The User Interface
The user interface consists of the following elements:
•Menu Bar - Menus for accessing Avaya Load Balancing Manager
functions (refer to Appendix A, Menus).
•Logical or Physical View - Depending on the tab selected, the
application displays one of the two views.
— Logical View - A logical representation of the network
showing Virtual Servers and Services and their associated
RSGs and RSs. The Logical View includes a hierarchical Tree
Area, Table Area, RSG Area, RS Area, and Form Area. The
various areas display information related to the element
selected in the Tree Area.
The following figure shows the Logical View of the user
interface, with its various parts labeled.
Figure 2-1. The User Interface - Logical View
— Physical View - A physical representation of the P333R-LB
devices in the network showing RSs and RSGs. The Physical
View includes a Tree Area and a Form Area. The Form Area
displays information related to the element selected in the
Tree Area.
The following figure shows the Physical View of the user
interface, with its various parts labeled.
Figure 2-2. The User Interface - Physical View
•Status Bar - An area at the bottom of the screen that displays the
communication status between Avaya Load Balancing Manager
and the network.
The toolbar provides shortcuts to Avaya Load Balancing Manager’s main
functions. The following table describes the buttons on the toolbar and
gives the equivalent menu options.
Table 2-1. Toolbar Buttons
Buttons | Description | Menu Item
(icon) | Saves configuration changes to the device. | File > Commit
(icon) | Cuts a rule from a table to the application clipboard. | Edit > Cut
(icon) | Copies a rule from a table to the application clipboard. | Edit > Copy