
vRealize Operations Manager Load Balancing
Configuration Guide Version 6.1 and 6.2
TECHNICAL WHITE PAPER / DECEMBER 2015 / VERSION 1.0
Table of Contents
Introduction ...................................................................................................................................... 4
Load Balancing Concepts ............................................................................................................ 4
Selecting a Load Balancer ....................................................................................................... 4
How to Handle SSL UI Certificates with a Load Balancer ..................................................... 5
vRealize Operations Manager Overview ..................................................................................... 5
vRealize Operations Manager Architecture............................................................................. 6
Configuring End Point Operations Agents .............................................................................. 7
HAProxy Installation and Configuration ......................................................................................... 8
Installation and Configuration of Single-Node HAProxy on CentOS 6.5 or RHEL ................... 8
Install Single-Node HAProxy on CentOS 7.0 ........................................................................... 10
Configure Logging for HAProxy ............................................................................................... 10
Configure HAProxy ................................................................................................................... 11
Configure HAProxy for vRealize Operations Manager Analytics ........................................ 11
Configure EPOps HAProxy .................................................................................................. 13
Verify HAProxy Configuration ............................................................................................. 14
Advanced Configuration: HAProxy with Keepalived ................................................................. 15
Configure HAProxy with Keepalived ................................................................................... 16
F5 Big IP Installation & Configuration .......................................................................................... 20
Configure Custom Persistence Profile ....................................................................................... 20
Configure Health Monitors ........................................................................................................ 22
Configure Server Pools .............................................................................................................. 23
Configure Virtual Servers .......................................................................................................... 24
Verify Component and Pool Status ............................................................................................ 26
NSX 6.2.0 Installation & Configuration ........................................................................................ 27
Install and Configure Edge for Load Balancing ........................................................................ 27
Configure Application Profiles .................................................................................................. 28
Add Service Monitoring ............................................................................................................ 29
Add Pools .................................................................................................................................. 31
Add Virtual Servers ................................................................................................................... 32
Configure Auto Redirect from HTTP to HTTPS ....................................................................... 33
Configure Application Profile for HTTPS Redirect .............................................................. 33
Configure the Virtual Server for HTTPS Redirect ................................................................ 34
Verify Component and Pool Status ............................................................................................ 35
Revision History

DATE            VERSION   DESCRIPTION
December 2015   1.0       Initial version.
February 2016   1.1       Minor updates to include vRealize Operations Manager version 6.2
PRODUCT                       VERSION         DOCUMENTATION
vRealize Operations Manager   6.1 and 6.2     http://pubs.vmware.com/vrealizeoperationsmanager-6/index.jsp
F5 BIG-IP                     11.5            https://support.f5.com/kb/en-us.html
NSX                           6.1.3           https://pubs.vmware.com/NSX-6/index.jsp#Welcome/welcome.html
HAProxy                       1.5.x           http://www.haproxy.org/
CentOS                        v6.x, v7.x      http://wiki.centos.org/Documentation
RHEL                          v6.x            https://access.redhat.com/documentation/en-US/index.html
Keepalived                    v1.2.13-4.el6   http://www.keepalived.org/
Introduction
This document describes the configuration of the load balancing modules of F5 Networks BIG-IP software (F5) and NSX load balancers for vRealize Operations Manager 6.1 and 6.2. This document is not an installation guide, but a load-balancing configuration guide that supplements the vRealize Operations Manager installation and configuration documentation available in the vRealize Operations Manager Documentation Center.
This information applies to the products and versions listed in the table above.
Load Balancing Concepts
Load balancers distribute work among servers in high availability (HA) deployments. The system administrator should back up the load balancers on a regular basis, at the same time as other components.
Follow your site policy for backing up load balancers, keeping in mind the preservation of network topology and vRealize Operations Manager backup planning.
Following are the advantages of using a load balancer in front of the vRealize Operations Manager cluster:
- Ensures that the deployed cluster is properly balanced for UI traffic performance.
- Allows all nodes in the cluster to participate equally in handling UI sessions and traffic.
- Provides high availability if any admin or data node fails, by directing UI traffic only to serving nodes in the cluster.
- Provides simpler access for users. Instead of accessing each node individually, the user needs only one URL to access the entire cluster and need not be concerned with which node is available.
- Provides load balancing, high availability, and ease of configuration for the End Point Operations (EPOps) agents.
Selecting a Load Balancer
There are no specific requirements for selecting a load balancer platform for vRealize Operations Manager. The majority of load balancers available today support complex web servers and SSL. You can use a load balancer in front of a vRealize Operations Manager cluster as long as certain parameters and configuration variables are followed. HAProxy was chosen for this example due to its ease of deployment, open-source availability, stability, capability of handling SSL sessions, and performance. Following are some of the parameters that should be considered when configuring other brands of load balancers:
- You must use TCP mode; HTTP mode is not supported.
- Round-robin balancing mode is not recommended.
- Cookie persistence does not work.
- SSL pass-through is used; SSL termination is not supported.
- Hash-type balancing is recommended to ensure that the same client IP address always reaches the same node, if the node is available.
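Expressed as an HAProxy-style fragment, these parameters look as follows. This is an illustrative sketch only; the interface placeholder and section names are not prescriptive, and the complete tested configuration appears later in this document.

```
frontend vrops_frontend_secure
    bind <web dedicated ip>:443
    mode tcp                      # TCP mode; HTTP mode is not supported
    default_backend vrops_backend_secure

backend vrops_backend_secure
    mode tcp                      # SSL pass-through; no termination on the balancer
    balance source                # hash on client source IP instead of round robin
    hash-type consistent          # same client IP always reaches the same node
```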
- Health checks should be performed for at least 3 pages presented in the UI.

NODE ROLE      FUNCTIONS
Master Node    The initial, required node in the cluster. All other nodes are managed by the master node. It contains the product UI. In a single-node installation, the master node performs data collection and analysis, as it is the only node where vRealize Operations Manager adapters are installed.
How to Handle SSL UI Certificates with a Load Balancer
In all default installations of vRealize Operations Manager nodes, a default self-signed VMware certificate is included. You can implement your own SSL certificate from an internal or external Certificate Authority. For more information on the certificate installation procedures, see Requirements for Custom vRealize Operations Manager SSL Certificates.
In addition to these configuration variables, it is important to understand how SSL certificates are distributed in a cluster. If you upload a certificate to a node in the cluster, for example the master node, the certificate is pushed to all nodes in the cluster. To handle UI sessions on all the nodes in the cluster, you must upload an SSL certificate that contains all of the DNS names (optionally, IP addresses and DNS names) in the Subject Alternative Name field of the uploaded certificate. The common name should be the load balancer DNS name. The subject alternative names are used to support access to the admin UI page. Currently, when you use a load balancer with vRealize Operations Manager, the only supported method is SSL pass-through, which means the SSL certificate cannot be terminated on the load balancer.
To change the SSL certificate on a cluster deployment:
Log in to the master node by using the following link: https://<ipaddress>/admin.
On the top right side, click the certificate button to change the certificate.
Upload your PEM file; it is stored on the local node at /data/vcops/user/conf/ssl/uploaded_cert.pem.
Copy the PEM file to all the nodes.
Unpack the PEM file contents on each node.
Activate the new certificates by changing some symbolic links, and restart the web server (Apache httpd) on each node in the cluster.
When you view the certificate on the node that you are accessing, you will see all nodes in the cluster listed in the certificate SAN.
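As an illustration of the SAN requirement, the following sketch generates a throwaway certificate whose common name is a load balancer FQDN and whose subject alternative names list the cluster nodes, then inspects the SAN field. All host names are hypothetical placeholders, and the `-addext` option requires OpenSSL 1.1.1 or later.

```shell
# Generate a throwaway self-signed certificate: CN is the (hypothetical)
# load balancer FQDN, SANs list the load balancer and every cluster node.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout vrops.key -out vrops.pem \
  -subj "/CN=vrops-lb.example.local" \
  -addext "subjectAltName=DNS:vrops-lb.example.local,DNS:vrops-node1.example.local,DNS:vrops-node2.example.local"

# Confirm that all node names appear in the certificate SAN field.
openssl x509 -in vrops.pem -noout -text | grep -A1 "Subject Alternative Name"
```

A certificate built this way, when uploaded to the master node, would carry every node name in its SAN, matching the distribution behavior described above.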
vRealize Operations Manager Overview
A vRealize Operations Manager cluster consists of a master node, an optional replica node for high availability, optional data nodes, and optional remote collector nodes. You can access and interact with the product by using the product UI, which is available on the master and data nodes. The remote collector nodes do not contain a product UI and are used for data collection only. The product UI is powered by a Tomcat instance that runs on each node, but it is not load balanced out of the box. You can scale up the vRealize Operations Manager environment by adding nodes as the environment grows larger.
vRealize Operations Manager supports high availability by enabling a replica node for the vRealize Operations Manager master node. A high availability replica node can take over the functions that a master node provides. When a problem occurs with the master node, fail-over to the replica node is automatic and requires only 2 to 3 minutes of vRealize Operations Manager downtime. Data stored on the master node is always backed up on the replica node. In addition, with high availability enabled, the cluster can survive the loss of a data node without losing any data.
Data Node      In larger deployments, only data nodes have adapters installed to perform collection and analysis. It contains the product UI.
Replica Node   To enable high availability, the cluster requires that you convert a data node into a replica of the master node. It does not contain the product UI.
vRealize Operations Manager Architecture
Currently, the vRealize Operations Manager 6.0 release supports a maximum of 8 nodes in the analytics cluster. Remote collectors are not considered part of the analytics cluster, as they do not participate in any type of data calculation or processing. EPOps traffic is load balanced to the same cluster.
NOTE: The load balancer cannot decrypt the traffic, and hence cannot differentiate between EPOps and analytics traffic.
Following is a basic architecture overview of a vRealize Operations Manager 8-node cluster with high availability enabled.
FIGURE 1. VREALIZE OPERATIONS MANAGER 8-NODE CLUSTER WITH HIGH AVAILABILITY
Configuring End Point Operations Agents
End Point Operations agents are used to gather operating system metrics and to monitor the availability of remote platforms and applications. These metrics are sent to the vRealize Operations Manager server. You can configure additional load balancers to separate analytics traffic from EPOps traffic.
The steps to configure the EPOps load balancer are described as required throughout this document. You must shut down the load balancer while upgrading or shutting down the vRealize Operations Manager cluster, and restart the load balancer after the cluster is upgraded.
In the case of EPOps balancing, the overall latency between the agent, the load balancer, and the cluster should be lower than 20 milliseconds. If the latency is higher, you must install a remote collector and point the agents directly to it.
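The 20 millisecond requirement can be spot-checked from an agent host before pointing agents at the load balancer. The helper below is a sketch; the function names and the load balancer FQDN in the usage line are hypothetical.

```shell
# Return success when an average round-trip time (in ms) is below the
# 20 ms threshold recommended for EPOps balancing.
under_threshold() {
  awk -v a="$1" 'BEGIN { exit (a + 0 < 20) ? 0 : 1 }'
}

# Measure the average RTT to a host with ping and apply the threshold.
check_latency() {
  local avg
  avg=$(ping -c 5 -q "$1" | awk -F'/' '/^(rtt|round-trip)/ {print $5}')
  under_threshold "$avg"
}

# Usage (replace with your load balancer FQDN):
# check_latency vrops-lb.example.local && echo "latency OK for EPOps balancing"
```

If the check fails, deploy a remote collector close to the agents instead, as described above.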
HAProxy Installation and Configuration
HAProxy offers high availability, load balancing, and proxying for TCP and HTTP-based applications.
Prerequisites
Following are the prerequisites to ensure a functional load balancer configuration and deployment:
- Fully patched CentOS or Red Hat Linux VM
  - CPU: 2 or 4 vCPU
  - Memory: 4 GB
  - Disk space: 50 GB
- HAProxy 1.5.x
  NOTE: HAProxy 1.6 is supported; however, it may require some changes that are out of scope for this document.
- Fully functioning DNS with both forward and reverse lookups
- All nodes in the vRealize Operations Manager cluster operating correctly
- HAProxy deployed in the same datacenter, and preferably on the same cluster, as vRealize Operations Manager
- HAProxy deployed on the same subnet as the vRealize Operations Manager cluster, also known as a one-arm configuration
  NOTE: Multiple-subnet deployment has not been tested.
- HAProxy not deployed on the same ESX hosts as the vRealize Operations Manager cluster, to ensure availability
- Minimum 2-node deployment of the vRealize Operations Manager cluster
- Deployment does not require high availability to be enabled, but it is recommended that you enable high availability
- One master node and at least one data node are required for using a load balancer beneficially
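The DNS prerequisite can be spot-checked with a short helper like the sketch below; check_dns is a hypothetical function, and the node name in the usage line is a placeholder.

```shell
# Verify that a name resolves (forward lookup) and that its address
# resolves back (reverse lookup) before configuring the load balancer.
check_dns() {
  local name="$1" ip
  ip=$(getent hosts "$name" | awk '{print $1; exit}')
  [ -n "$ip" ] || return 1          # forward lookup failed
  getent hosts "$ip" >/dev/null     # reverse lookup must also succeed
}

# Usage:
# check_dns vrops-node1.example.local && echo "DNS OK"
```

Run it once per cluster node and once for the load balancer name itself.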
Installation and Configuration of Single-Node HAProxy on CentOS 6.5 or RHEL
A single-node HAProxy deployment is the basic model for the majority of environments that require the use of a proxy server in front of a vRealize Operations Manager cluster. To install a single-node HAProxy deployment on CentOS, complete the following steps:
Perform a package update on the system to ensure all the packages are up-to-date:
yum update (update all packages)
Verify that the system Hostname is valid:
view /etc/sysconfig/network
Verify the network settings for the primary network interface:
view /etc/sysconfig/network-scripts/ifcfg-eth0
If the VM is cloned, ensure to clean the old persistent rules:
/etc/udev/rules.d/70-persistent-net.rules
Restart network service to make any additional changes on network settings:
service network restart
Download the HAProxy:
yum install wget
wget http://www.haproxy.org/download/1.5/src/haproxy-1.5.11.tar.gz
Install the core build tools for compiling HAProxy:
yum install openssl-devel make gcc-c++ gcc zlib-devel
Extract HAProxy:
tar -xzf haproxy-1.5.11.tar.gz
Change directories to the HAProxy extract location:
cd haproxy-1.5.11
Compile HAProxy:
make TARGET=linux26 USE_OPENSSL=1 USE_ZLIB=1
(Optional) Add prefix for make install command if you want to install into a custom directory:
make PREFIX=/apps/opt/haproxy-ssl install
Install the binary:
make install
Create directory for configuration and executables:
mkdir /etc/haproxy
Move the initialization script example into startup directory:
cp ./examples/haproxy.init /etc/init.d/haproxy
Create the HAProxy configuration file:
touch /etc/haproxy/haproxy.cfg
Insert the HAProxy config and edit server lines with IP addresses of all nodes in the cluster:
vi /etc/haproxy/haproxy.cfg
:wq
Edit the initialization script to adjust the installation location of the binary files as needed. For example, by default the script uses /usr/sbin/haproxy, but a source build typically installs to /usr/local/sbin/haproxy.
vi /etc/init.d/haproxy
:wq
Change the permissions of the initialization script so that it is executable:
chmod 755 /etc/init.d/haproxy
Add the haproxy user:
useradd haproxy
Start the HAProxy Service:
service haproxy start
Configure HAProxy to start on reboot of server:
chkconfig haproxy on
Install Single-Node HAProxy on CentOS 7.0
HAProxy is also supported on CentOS 7.0 and can be obtained pre-compiled from the yum repository, or compiled as shown in the Installation and Configuration of Single-Node HAProxy on CentOS 6.5 or RHEL section. To install HAProxy on CentOS 7 by using the yum package manager, and then configure the instance using the same configuration, complete the following steps:
Perform a package update on system to ensure all packages are up-to-date:
yum update (update all packages)
Install HAProxy:
yum -y install haproxy
Copy original HAProxy configuration to backup file:
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
Edit the HAProxy configuration. To configure the analytics balancer, see Configure HAProxy for vRealize Operations Manager Analytics, and to configure the EPOps balancer, see Configure EPOps HAProxy.
Allow firewall traffic through for the ports needed for HAProxy to function:
firewall-cmd --permanent --zone=public --add-port=80/tcp
firewall-cmd --permanent --zone=public --add-port=9090/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
Reload the firewall configuration:
systemctl reload firewalld
Enable HAProxy to connect to any interface:
setsebool -P haproxy_connect_any 1
Enable HAProxy service:
systemctl enable haproxy
Configure Logging for HAProxy
An administrator might want to configure logging of the HAProxy service to aid in monitoring and troubleshooting an environment. The HAProxy logger allows for the use of rsyslog on the Linux installation to log to a local file. You can also utilize Log Insight integration to send this log to a Log Insight deployment by utilizing the Log Insight Linux agent, which greatly simplifies the configuration and logging of Linux platforms. To configure basic application logging using rsyslog locally on the server, perform the following steps.
Configure the rsyslog configuration file to accept UDP syslog reception:
vi /etc/rsyslog.conf
Uncomment the following lines:
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
Save the file:
:wq
Create the HAProxy logging configuration file for application-specific parameters:
vi /etc/rsyslog.d/haproxy.conf
Add the following line:
if ($programname == 'haproxy') then -/var/log/haproxy.log
Save the file:
:wq
Create HAProxy Log file and set proper permissions:
touch /var/log/haproxy.log
chmod 755 /var/log/haproxy.log
Restart the rsyslog service:
service rsyslog restart
Configure HAProxy
The HAProxy configuration has been tested against an 8-node vRealize Operations Manager cluster. Clusters with fewer nodes are also supported and require the same configuration. Every time the cluster is expanded and a new node is deployed, you must edit the HAProxy configuration and add the IP address of the new node. After editing the configuration file, always restart the HAProxy service so that the configuration is reloaded.
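For example, if the cluster is expanded with a hypothetical fifth node, append a matching server line to the backend section of the configuration, and then restart the HAProxy service:

```
server node5 <Insert node5 ip address here>:443 check inter 60s check-ssl maxconn 140 fall 6 rise 6
```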
Configure HAProxy for vRealize Operations Manager Analytics
You can configure the HAProxy for vRealize Operations Manager analytics as follows:
# Configuration file to balance both web and epops

#global parameters
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 400
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats
    ssl-server-verify none

#default parameters unless otherwise specified
defaults
    log global
    mode http
    option httplog
    option tcplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

#listener settings for stats webpage, optional but highly recommended
listen stats :9090
    balance
    mode http
    stats enable
    stats auth admin:admin
    stats uri /
    stats realm Haproxy\ Statistics

#automatic redirect for http to https connections
frontend vrops_unsecured_redirect *:80
    redirect location https://<insert_fqdn_address_here>

#frontend settings; bind to all addresses on the system or specify an interface
frontend vrops_frontend_secure
    bind <web dedicated ip>:443
    mode tcp
    option tcplog
    default_backend vrops_backend_secure

#backend configuration of receiving servers containing tcp-check health checks and hashing
#needed for a proper configuration and page sessions
#adjust the server parameters to your environment
backend vrops_backend_secure
    mode tcp
    option tcplog
    balance source
    hash-type consistent
    option tcp-check
    tcp-check connect port 443 ssl
    tcp-check send GET\ /suite-api/api/deployment/node/status\ HTTP/1.0\r\n\r\n
    tcp-check expect rstring ONLINE
    server node1 <Insert node1 ip address here>:443 check inter 60s check-ssl maxconn 140 fall 6 rise 6
    server node2 <Insert node2 ip address here>:443 check inter 60s check-ssl maxconn 140 fall 6 rise 6
    server node3 <Insert node3 ip address here>:443 check inter 60s check-ssl maxconn 140 fall 6 rise 6
    server node4 <Insert node4 ip address here>:443 check inter 60s check-ssl maxconn 140 fall 6 rise 6
NOTE: HAProxy 1.6 introduced strict checking of the configuration file. If you want to use HAProxy 1.6, you have to make some changes to support the new strict validation, such as the bind address. For example, you can use:
bind <web dedicated ip>:443