The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in
the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an
additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Warranty
WARRANTY STATEMENT: See the warranty information sheet provided in the product box and available online.
Table of Contents
About This Guide...................................................................................................................................... 6
Running and Configuring VRS .................................................................................................................. 59
Specifying the Active and Standby HP VSCs.......................................................................................... 59
7 Support and Other Resources...................................................................... 61
Gather information before contacting authorized support..........................................................................61
How to contact HP ...................................................................................................................................61
Software technical support and software updates.........................................................................................61
Care Packs ....................................................................................................................................... 62
Related information................................................................................................................................. 62
9 Appendix: Emulated Ethernet NIC Notes ...................................................... 66
About This Guide
This manual describes the installation process for HP Distributed Cloud
Networking (DCN).
Audience
This manual is intended for system administrators who are responsible for installing and
configuring the HP DCN software.
1 HP DCN: Overview and Infrastructure
This chapter provides an overview of HP Distributed Cloud Networking (DCN) 3.0.R2 and of
the infrastructure required to implement the DCN solution. It also gives a brief overview of the
installation process itself.
Topics in this chapter include:
• HP DCN Overview
• HP DCN Infrastructure Requirements and Recommendations
• Data Center IP Network
• NTP Infrastructure
• Domain Name System
• Certificate Authority
• HP DCN Installation Overview
HP DCN Overview
HP DCN is a Software-Defined Networking (SDN) solution that enhances data center (DC)
network virtualization by automatically establishing connectivity between compute resources
upon their creation. Leveraging programmable business logic and a powerful policy engine,
HP DCN provides an open and highly responsive solution that scales to meet the stringent
needs of massive multi-tenant DCs. HP DCN is a software solution that can be deployed over
an existing DC IP network fabric. Figure 1 illustrates the logical architecture of the HP DCN
solution.
Figure 1: HP DCN Architecture and Components
There are three main components in the HP DCN solution: HP Virtualized Services Directory
(HP VSD), HP Virtualized Services Controller (HP VSC) and HP Virtual Routing and Switching
(HP VRS).
HP Virtualized Services Directory
HP VSD is a programmable policy and analytics engine that provides a flexible and
hierarchical network policy framework that enables IT administrators to define and enforce
resource policies.
HP VSD contains a multi-tenant service directory which supports role-based administration of
users, computers, and network resources. It also manages network resource assignments such
as IP and MAC addresses.
HP VSD enables the definition of sophisticated statistics rules such as:
• collection frequencies
• rolling averages and samples
• threshold crossing alerts (TCAs).
When a TCA occurs, it triggers an event that can be exported to external systems
through a generic messaging bus.
Statistics are aggregated over hours, days and months and stored in a Hadoop® analytics
cluster to facilitate data mining and performance reporting.
HP VSD is composed of many components and modules, but all required components can run
on a single Linux server or in a single Linux virtual machine. Redundancy requires multiple
servers or VMs.
To get a license key to activate your HP VSD, contact your HP Sales Representative.
HP Virtualized Services Controller
HP VSC functions as the robust network control plane for DCs, maintaining a full view of
per-tenant network and service topologies. Through the HP VSC, virtual routing and switching
constructs are established to program the network forwarding plane, HP VRS, using the
OpenFlow™ protocol.
The HP VSC communicates with the VSD policy engine using Extensible Messaging and
Presence Protocol (XMPP). An ejabberd XMPP server/cluster is used to distribute messages
between the HP VSD and HP VSC entities.
Multiple HP VSC instances can be federated within and across DCs by leveraging MP-BGP.
The HP VSC is based on HP DCN Operating System (DCNOS) and runs in a virtual machine
environment.
HP Virtual Routing and Switching
HP VRS is an enhanced Open vSwitch (OVS) implementation that constitutes the network
forwarding plane. It encapsulates and de-encapsulates user traffic, enforcing L2-L4 traffic
policies as defined by the HP VSD. The HP VRS tracks VM creation, migration and deletion
events in order to dynamically adjust network connectivity.
HP VRS-G
For low-volume deployments, the software-based HP VRS Gateway (VRS-G) module
incorporates bare-metal servers into the data center as virtualized extensions.
HP DCN Infrastructure Requirements and
Recommendations
In order to make use of the HP DCN, the data center environment must meet some key
requirements as described in the following sections.
Data Center IP Network
HP VSP can be used in any data center with an IP network. HP VSC actively participates in the
IP routing infrastructure. HP VSCs can run OSPF or IS-IS for the IGP in addition to BGP, but
integration with the IGP is not mandatory.
BGP is used to form a federation of HP VSCs and synchronize the HP VSP network information.
In addition, BGP is used to exchange routing information with the data center provider
edge router.
NTP Infrastructure
Because HP VSP is a distributed system, it is important that the different elements have a
reliable reference clock to ensure the messages exchanged between the elements have
meaningful timestamps. HP VSP relies on each of the elements having clocks synchronized with
NTP.
The HP VSD and HP VRS applications rely on the NTP facilities provided by the host operating
system. The HP VSC, which is based on HP DCN OS, has an NTP client.
HP recommends having at least three NTP reference clocks configured for each system.
Domain Name System
In scaled HP VSP deployments, the HP VSD functional elements can be distributed across
clusters of machines, where the failover and load-sharing mechanisms rely on each cluster
being referenced as a single DNS entity.
Certificate Authority
The northbound REST API on HP VSD is accessed within an SSL session. The HP VSD can
use a self-signed certificate, but a certificate from a certificate authority enables
client applications to avoid security warnings about unrecognized certificate
authorities.
HP DCN Installation Overview
Installing HP DCN consists of installing the three software components (HP VSD, HP VSC, and
HP VRS) and configuring their interfaces to establish connectivity between them.
Figure 2: Installation Setup
Figure 2 diagrams the installation of the HP VSP components and shows how they
communicate with each other. The labeled interfaces are referenced in the installation
instructions. The diagram can be used to map out the topology you plan to use for your own
installation.
The recommended order for installing the software is the order presented in this guide,
because each newly installed component provides the infrastructure needed to communicate
with the next component on the list.
After installing HP DCN, configure policies in the HP VSD to derive full benefit from the system.
2 HP DCN Software Installation
Topics in this chapter include:
• HP VSD Hardware and Software Requirements
• HP VSD Installation Overview
• HP VSD Installation Using QCow2 Image
• HP VSD Installation Using ISO Disc Image
• Import Certificates on the Servers
• Example of Load Balancer Configuration
HP VSD Hardware and Software Requirements
Installing HP VSD software requires:
• A hypervisor meeting the specifications set out in the Release Notes
• A mechanism to access the graphical console of the HP VSD appliance (e.g. VNC)
• IP address for the HP VSD appliance(s) and host name(s) defined in DNS and accessible to
all VSP components.
For a license key to activate HP VSD once installed, contact your HP Sales Representative.
HP VSD Installation Overview
The procedures set out here assume installation on a hypervisor running KVM.
Installation Types
There are two types of installation: standalone and high availability.
High Availability
HP VSD High Availability is intended to guard against single-failure scenarios. High
availability for HP VSD is implemented as a 3 + 1 node cluster as shown in Figure 3.
For high availability of the HP VSD nodes, it is necessary to ensure each VSD node has
redundant network and power, so that no single failure can cause loss of connectivity to more
than one HP VSD node. Therefore, each HP VSD node should be installed on a different
hypervisor.
The cluster consists of three HP VSD nodes and one statistics master node (Name node). In
addition, a Load Balancer (not supplied) is optional to load balance across the HP VSD nodes
for the REST API.
Installation Methods
The standard method of installing HP VSD uses the pre-installed appliance. This appliance
is distributed in four formats:
• a ready-to-use QCow2 VM image for KVM hypervisor deployment (see HP VSD Installation
Using QCow2 Image)
• a ready-to-use image for VMware hypervisor deployment
• a ready-to-use OVA image for ESXi deployment
• an ISO disc image (see HP VSD Installation Using ISO Disc Image)
Table 1 provides an overview of the installation tasks with links to each.
Notes on Reinstallation: MySQL Root Password
The password for the MySQL root user is not set after installation, because the HP VSD
installation scripts require that the root user not have a MySQL password.
Reinstalling HP VSD
To reinstall HP VSD, before uninstalling:
1. Set the root password to 'no password.' On each node, run:
mysql -uroot -p<current password> -e "update mysql.user set
password=PASSWORD('') where user='root'; flush privileges;"
2. Uninstall all HP VSD nodes.
3. Install all HP VSD nodes following the procedure specified for your HP VSD version and
installation type.
4. Verify that installation was successful.
5. Set the root password:
To set the root password for the first time, on each node, run:
mysql -e "update mysql.user set password=PASSWORD('<new password>')
where user='root'; flush privileges;"
To change the root password, on each node, run:
mysql -uroot -p<current password> -e "update mysql.user set
password=PASSWORD('<new password>') where user='root'; flush privileges;"
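Quoting is the usual stumbling block when these statements are copied from a PDF: curly quotes must become straight ASCII quotes, and the new password must sit inside PASSWORD('...'). The sketch below only composes and prints the command so the quoting can be inspected before running it for real; NEWPW is a hypothetical example password, not a default.

```shell
NEWPW='s3cret'   # hypothetical example password, for illustration only
SQL="update mysql.user set password=PASSWORD('${NEWPW}') where user='root'; flush privileges;"
# Print the full command to verify the quoting before running it for real.
echo "mysql -uroot -e \"$SQL\""
```

On the real nodes you would then run the printed command directly.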
Table 1: HP VSD Installation Overview

  qcow2                         ISO
  Set Up Appliance VMs          Set Up VM for ISO
  Connect to Appliance VMs      Extract and Mount ISO Image
  Configure Networking          Configure Networking
  Configure DNS Server          Configure DNS Server
  Configure NTP Server          Configure NTP Server
  Install HP VSD using qcow2    Install HP VSD Using ISO
HP VSD Installation Using QCow2 Image
The following instructions are for a High Availability installation. For a standalone installation,
use the same instructions to install one HP VSD on a single node.
1. Set Up Appliance VMs
2. Connect to Appliance VMs
3. Configure Networking
4. Configure DNS Server
5. Configure NTP Server
6. Install HP VSD using qcow2
Set Up Appliance VMs
1. Unzip all the HP VSD tar files to a temporary location.
2. If you do not already have virt-install on your hypervisor(s), install it:
yum install virt-install
3. Copy the HP VSD qcow2 image to the KVM hypervisor image location
/var/lib/libvirt/images/ on each hypervisor.
4. Create appliance VMs.
In the example below, a VM is created for each of four HP VSD nodes. If you are doing a
standalone installation, create only myh1.
Note: “listen=0.0.0.0” results in KVM responding to VNC connection requests on all IP
interfaces. Depending on your network configuration, this may be a security issue.
Consider removing “listen=0.0.0.0” and using an alternative method (for example,
virt-manager or SSH tunnel) to obtain console access.
Connect to Appliance VMs
The HP VSD appliance VM requires console access for initial configuration. Either:
• Connect Via VNC
• Connect Via virsh Console
Connect Via VNC
Using a VNC client (e.g. RealVNC, TightVNC) or other console access mechanism, connect to
the HP VSD appliance consoles and log in using the default username and password:
login: root
password: default password
Connect Via virsh Console
Using the virsh console <domain> command, connect to the HP VSD appliance consoles and
log in using the default username and password.
Configure Networking
In the network interface configuration file on each node, replace “dhcp” with “static”
and set the node's static IP address details.
3. Restart the network service:
/etc/init.d/network restart
4. Ping the gateway (in this example, 192.168.100.1).
ping 192.168.100.1
Configure DNS Server
Set up the fully qualified names for all the nodes in the cluster (unless you are doing a
standalone installation, in which case one FQDN is sufficient). Reverse DNS lookup
for the HP VSD nodes should also be set up.
Note: If the Service Records (SRV) for the XMPP cluster are not in the Domain Name
Server (DNS), the script will generate them. An administrator must then load them
into the DNS server. The XMPP cluster name is typically the xmpp host in the domain,
for example, xmpp.example.com. To use a different host name, run install.sh
with the -x option.
The DNS server in this example is 10.10.10.100.
Test DNS and reverse DNS from each VSD node (VM).
1. Set up the fully qualified names for the nodes in the DNS server forward named file as per
the following example:
myh1.myd.example.com. 604800 IN A 192.168.10.101
myh2.myd.example.com. 604800 IN A 192.168.10.102
myh3.myd.example.com. 604800 IN A 192.168.10.103
myname.myd.example.com. 604800 IN A 192.168.10.104
The installation script verifies the DNS forward named file records.
2. Set up the host and SRV records for the XMPP cluster as per the following example, then
verify them from the HP VSD node:
; hosts
myh1 A 192.168.10.101
myh2 A 192.168.10.102
myh3 A 192.168.10.103
myname A 192.168.10.104
; xmpp nodes
xmpp A 192.168.10.101
xmpp A 192.168.10.102
xmpp A 192.168.10.103
; SRV records for xmpp.example.com
_xmpp-client._tcp.xmpp.example.com. IN SRV 10 0 5222 myh1.myd.example.com.
_xmpp-client._tcp.xmpp.example.com. IN SRV 10 0 5222 myh2.myd.example.com.
_xmpp-client._tcp.xmpp.example.com. IN SRV 10 0 5222 myh3.myd.example.com.
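As a sanity check, the example SRV records can be parsed offline to confirm that all three nodes advertise XMPP client service on port 5222. This is only an illustration over a scratch copy of the records; on a live system you would query the DNS server itself (for example with dig or host, assuming those tools are installed).

```shell
# Write the example SRV records to a scratch file (illustration only).
cat > /tmp/srv.example <<'EOF'
_xmpp-client._tcp.xmpp.example.com. IN SRV 10 0 5222 myh1.myd.example.com.
_xmpp-client._tcp.xmpp.example.com. IN SRV 10 0 5222 myh2.myd.example.com.
_xmpp-client._tcp.xmpp.example.com. IN SRV 10 0 5222 myh3.myd.example.com.
EOF

# Each record reads: name IN SRV priority weight port target.
awk '$3 == "SRV" { print $7, "port", $6 }' /tmp/srv.example
# → myh1.myd.example.com. port 5222  (and likewise for myh2, myh3)
```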
Configure NTP Server
Include one or more NTP servers in the /etc/ntp.conf file. For example, edit the NTP file and
add servers as follows, restarting the NTPD service to put these parameters into effect:
server 10.10.0.10
server 192.16.10.10
server 192.16.20.10
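The fragment below is a hedged sketch of checking that at least three reference clocks are configured, per the recommendation earlier in this guide. It works on a scratch copy so it can be run anywhere; on the real system, point the grep at /etc/ntp.conf, then restart the ntpd service (for example with service ntpd restart) and confirm peers with ntpq -p.

```shell
# Scratch copy of the example ntp.conf entries (illustration only).
cat > /tmp/ntp.conf.example <<'EOF'
server 10.10.0.10
server 192.16.10.10
server 192.16.20.10
EOF

# HP recommends at least three NTP reference clocks per system.
COUNT=$(grep -c '^server' /tmp/ntp.conf.example)
echo "configured NTP servers: $COUNT"   # → configured NTP servers: 3
```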
Install HP VSD using qcow2
The install script is interactive. Node 1 is the master node, and it serves as a template for the
other nodes.
Note: HP VSD consists of several components and providing high availability for each of
these components can be quite complex. It is imperative that the installation and
powering-on of each node be done in the order specified here.
1. Install HP VSD on Node 1.
The install script checks for the XMPP proxy entry in your DNS. In the session below,
substitute your own XMPP server name.
[root@myh1 ~]# /opt/vsd/install.sh
-----------------------------------------------------
| V I R T U A L  S E R V I C E S  D I R E C T O R Y |
| (c) 2014 HP Networks |
-----------------------------------------------------
VSD supports two configurations:
1) HA, consisting of 2 redundant installs of VSD with an optional statistics
server.
2) Standalone, where all services are installed on a single machine.
Is this a redundant (r) or standalone (s) installation [r|s]? (default=s): r
Is this install the first (1), second (2), third (3) or cluster name node (t)
[1|2|3|t]: 1
Please enter the fully qualified domain name (fqdn) for this node:
myh1.myd.example.com
Install VSD on the 1st HA node myh1.myd.example.com ...
What is the fully qualified domain name for the 2nd node of VSD:
myh2.myd.example.com
What is the fully qualified domain name for the 3rd node of VSD:
myh3.myd.example.com
What is the fully qualified domain name for the cluster name node of VSD:
myname.myd.example.com
What is the fully qualified domain name for the load balancer (if any)
(default=none):
Node 1: myh1.myd.example.com
Node 2: myh2.myd.example.com
Node 3: myh3.myd.example.com
Name Node: myname.myd.example.com
XMPP: xmpp.myd.example.com
Continue [y|n]? (default=y): y
Starting VSD installation. This may take as long as 20 minutes in some
situations ...
A self-signed certificate has been generated to get you started using VSD.
You may import one from a certificate authority later.
VSD installed on this host and the services have started.
Please install VSD on myh2.myd.example.com to complete the installation.
2. Install VSD on Node 2:
[root@myh2 ~]# /opt/vsd/install.sh
-----------------------------------------------------
| V I R T U A L  S E R V I C E S  D I R E C T O R Y |
| (c) 2014 HP Networks |
-----------------------------------------------------
VSD supports two configurations:
1) HA, consisting of 3 redundant installs of VSD with a cluster name node
server.
2) Standalone, where all services are installed on a single machine.
Is this a redundant (r) or standalone (s) installation [r|s]? (default=s): r
Is this install the first (1), second (2), third (3) or cluster name node (t)
[1|2|3|t]: 2
Please enter the fully qualified domain name for the 1st node of VSD:
myh1.myd.example.com
Install VSD on the 2nd HA node myh2.myd.example.com ...
Node 2: myh2.myd.example.com
Continue [y|n]? (default=y):
Starting VSD installation. This may take as long as 20 minutes in some
situations ...
A self-signed certificate has been generated to get you started using VSD.
You may import one from a certificate authority later.
VSD installed on this host and the services have started.
3. Follow the interactive script to install HP VSD on Node 3.
4. Follow the interactive script to install HP VSD on the Name Node.
5. Verify that your HP VSD(s) are up and running by using the following command:
service vsd status
6. See Import Certificates on the Servers.
HP VSD Installation Using ISO Disc Image
Note: Consult the Release Notes for the ISO installation requirements.
The following instructions are for a High Availability installation. For a standalone installation,
use the same instructions to install one HP VSD on a single node.
1. Set Up VM for ISO
2. Extract and Mount ISO Image
3. Configure Networking
4. Configure DNS Server
5. Configure NTP Server
6. Install HP VSD Using ISO
Set Up VM for ISO
Note: “listen=0.0.0.0” results in KVM responding to VNC connection requests on
all IP interfaces. Depending on your network configuration, this may be a security
issue. Consider removing “listen=0.0.0.0” and using an alternative method
(for example, virt-manager or SSH tunnel) to obtain console access.
1. Bring up a VM named myh1 using 24 GB RAM and 6 logical cores.
Install HP VSD Using ISO
1. Install HP VSD on Node 1. In the session below, substitute your own XMPP server name.
[root@myh1 ~]# /media/CDROM/install.sh
-----------------------------------------------------
| V I R T U A L  S E R V I C E S  D I R E C T O R Y |
| (c) 2014 HP Networks |
-----------------------------------------------------
VSD supports two configurations:
1) HA, consisting of 2 redundant installs of VSD with an optional statistics
server.
2) Standalone, where all services are installed on a single machine.
Is this a redundant (r) or standalone (s) installation [r|s]? (default=s): r
Is this install the first (1), second (2), third (3) or cluster name node (t)
[1|2|3|t]: 1
Please enter the fully qualified domain name (fqdn) for this node:
myh1.myd.example.com
Install VSD on the 1st HA node myh1.myd.example.com ...
What is the fully qualified domain name for the 2nd node of VSD:
myh2.myd.example.com
What is the fully qualified domain name for the 3rd node of VSD:
myh3.myd.example.com
What is the fully qualified domain name for the cluster name node of VSD:
myname.myd.example.com
What is the fully qualified domain name for the load balancer (if any)
(default=none):
Node 1: myh1.myd.example.com
Node 2: myh2.myd.example.com
Node 3: myh3.myd.example.com
Name Node: myname.myd.example.com
XMPP: xmpp.myd.example.com
Continue [y|n]? (default=y): y
Starting VSD installation. This may take as long as 20 minutes in some
situations ...
A self-signed certificate has been generated to get you started using VSD.
You may import one from a certificate authority later.
VSD installed on this host and the services have started.
Please install VSD on myh2.myd.example.com to complete the installation.
2. Install HP VSD on Node 2:
[root@myh2 ~]# /media/CDROM/install.sh
-----------------------------------------------------
| V I R T U A L  S E R V I C E S  D I R E C T O R Y |
| (c) 2014 HP Networks |
-----------------------------------------------------
VSD supports two configurations:
1) HA, consisting of 3 redundant installs of VSD with a cluster name node
server.
2) Standalone, where all services are installed on a single machine.
Is this a redundant (r) or standalone (s) installation [r|s]? (default=s): r
Is this install the first (1), second (2), third (3) or cluster name node (t)
[1|2|3|t]: 2
Please enter the fully qualified domain name for the 1st node of VSD:
myh1.myd.example.com
Install VSD on the 2nd HA node myh2.myd.example.com ...
Node 2: myh2.myd.example.com
Continue [y|n]? (default=y):
Starting VSD installation. This may take as long as 20 minutes in some
situations ...
A self-signed certificate has been generated to get you started using VSD.
You may import one from a certificate authority later.
VSD installed on this host and the services have started.
3. Follow the interactive script to install VSD on Node 3.
4. Follow the interactive script to install VSD on the Name Node.
5. Verify that your VSD(s) are up and running by using the following command:
service vsd status
Import Certificates on the Servers
On each HP VSD host, installation generates a self-signed certificate. If you want to import an
official certificate signed by a certificate authority, use the
• Import a certificate generated by a Certificate Authority:
# ./set-cert.sh -r -i certificateFilename
• Generate and use a self-signed certificate if you do not run a proxy:
# ./set-cert.sh -r
20HP DCN Software Installation
set-cert.sh script:
• Generate and use a self-signed certificate if you run a proxy:
# ./set-cert.sh -r -p proxyHostname
Select an option and generate or import the certificate to Node 1. If you are running HA VSD,
import it to Nodes 2 and 3 as well.
LDAP Store
If you are using an LDAP store, see Using an LDAP Store.
Example of Load Balancer Configuration
The example below is an HAProxy-style configuration that accepts REST API connections on
port 443 and balances them across the three HP VSD nodes on port 8443:
frontend vsdha *:443
default_backend vsdhaapp
backend vsdhaapp
mode tcp
balance source
server c1 myh1.myd.example.com:8443 check
server c2 myh2.myd.example.com:8443 check
server c3 myh3.myd.example.com:8443 check
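Before reloading the load balancer, the configuration can be syntax-checked (with HAProxy, `haproxy -c -f <file>`; verify the flags on your version). The sketch below writes the example backend to a scratch file and confirms that all three VSD nodes are listed with health checks enabled; the privileged check itself is shown as a comment.

```shell
# Scratch copy of the example configuration (illustration only).
cat > /tmp/vsdha.cfg <<'EOF'
frontend vsdha *:443
default_backend vsdhaapp
backend vsdhaapp
mode tcp
balance source
server c1 myh1.myd.example.com:8443 check
server c2 myh2.myd.example.com:8443 check
server c3 myh3.myd.example.com:8443 check
EOF

# On the load balancer host (requires haproxy):
#   haproxy -c -f /tmp/vsdha.cfg

# Count backend servers with health checks enabled.
grep -c '^server .*check$' /tmp/vsdha.cfg   # → 3
```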
3 HP VSC Software Installation
This chapter provides installation instructions and the basic configuration for the HP VSC.
Topics in this chapter include:
• HP VSC Installation Notes
• HP VSC Software Installation Procedure on KVM
• Emulated Disks Notes
• Emulated Ethernet NIC Notes
• HP VSC Software Installation Procedure on VMware
• Installing HP VSC on ESXI Using OVA
• HP VSC Basic Configuration
• HP VSC Boot Options File Configuration
• HP VSC System and Protocol Configuration
• System-level HP VSC Configuration
• In-band and Loopback IP Interfaces
• Post-install Security Tasks
HP VSC Installation Notes
Part of the XML definition of the HP VSC virtual machine is to “pin” the virtual CPUs (vCPUs) to
separate CPU cores on the hypervisor. These settings are required for stable operation of the
HP VSC to ensure internal timers do not experience unacceptable levels of jitter.
Hyperthreading must be disabled to achieve the best use of the physical cores.
For the HP VSC hardware and software requirements, consult the current HP Distributed Cloud
Networking Release Notes.
HP VSC Software Installation Procedure on KVM
This section describes the process of loading the HP VSC software onto the dedicated server. At
the end of the procedure, the HP VSC image will be running on the server, and HP VSC
prompts you to log in.
There are two types of deployment: with a single qcow2 disk, or (legacy) with two qcow2 disks
(see Emulated Disks Notes).
This installation procedure assumes:
• The Linux server is a clean installation with a minimum of configuration and applications.
• An IP address is already assigned for the management network.
• The user has root access to the console of the Linux server.
• Either one or three NTP servers have been configured and NTP has synchronized with
them.
• The user has a means of copying the HP VSC software files to the server.
• Two independent network interfaces for management and data traffic, connected to two
Linux Bridge interfaces.
Once these requirements have been met, install the required dependencies (the following lines
refer to RHEL; substitute the appropriate Ubuntu references):
yum install kvm libvirt bridge-utils
When you set up a server, you must set up an NTP server for all the components. When you
define a VM, it gets a timestamp, which cannot deviate by more than 10 seconds.
Note: Intel Extended Page Tables (EPT) must be disabled in the KVM kernel module.
If EPT is enabled, it can be disabled by updating modprobe.d and reloading the kernel module
with:
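A sketch of the conventional commands (the exact lines are elided in this copy; the file path and module name follow RHEL conventions, so verify them against your distribution). The block writes to a scratch path so it is safe to run as-is, with the privileged steps shown as comments:

```shell
# Persist the EPT-off option (real path: /etc/modprobe.d/kvm-intel.conf, written as root).
echo "options kvm-intel ept=0" > /tmp/kvm-intel.conf

# Then reload the module so the option takes effect (as root on the hypervisor):
#   rmmod kvm_intel
#   modprobe kvm_intel
# Verify: cat /sys/module/kvm_intel/parameters/ept   (should report N)
cat /tmp/kvm-intel.conf   # → options kvm-intel ept=0
```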
4. (Optional) Modify the HP VSC XML configuration to rename the VM or the disk files.
5. Define VM:
virsh define vsc.xml
6. Configure VM to autostart:
virsh autostart vsc
7. Start the VM:
virsh start vsc
8. Connect to the HP VSC console using libvirt:
virsh console vsc
HP VSC should boot to a login prompt on the console.
9. From the console, log in and configure the HP VSC. Default login:
login: admin
password: admin
Emulated Disks Notes
There are two types of HP VSC deployment:
• Single disk configuration requires one QEMU emulated disk in the qcow2 format
(vsc_singledisk.qcow2) configured as IDE 0/1 (bus 0, master). This emulated disk is
accessible within the HP VSC as device “CF1:”
• Two disk configuration requires two QEMU emulated disks in the qcow2 format:
• IDE 0/1 (bus 0, master) must be configured as the “user” disk. The HP VSC
configuration, logs and other user data reside on this disk. This emulated disk is
accessible within the HP VSC as device “CF1:”. A minimum of 1GB is recommended
for this disk (a reference user disk is provided).
• IDE 0/2 (bus 0, slave) must be configured as the “image” disk. This disk contains HP
VSC binaries and a default boot options file. This emulated disk is accessible within the
HP VSC as device “CF2:”. The user should treat this disk as “read only” and essentially
dedicated to use by the image file. After the user customizes the boot options file, the
modified file should be stored on the user disk CF1:.
• It is possible to interchangeably boot different HP VSC versions by using the
corresponding image disk qcow2 file via the libvirt XML.
It is highly recommended to host the “user” disk locally (on CompactFlash, SSD or hard drive
storage as available). Likewise, to achieve the best boot times, it is recommended the “image”
disk be hosted locally on the hypervisor as well.
Emulated Ethernet NIC Notes
Two emulated e1000 Ethernet NICs are required. The HP VSC expects the first NIC to be
connected to the management network and the second NIC to be connected to the data
network.
The recommended configuration is to set up two independent bridges (br## devices in Linux)
and attach the emulated NICs and the corresponding physical NICs to each of these bridges.
See Appendix: Emulated Ethernet NIC Notes.
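As a sketch of this bridge layout on a RHEL-style hypervisor, the ifcfg files below define a management bridge br0 and a data bridge br1 and enslave one physical NIC to each. All device names and addresses here are assumptions for illustration; adapt them to your environment.

```
# /etc/sysconfig/network-scripts/ifcfg-br0  (management bridge, assumed name)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0  (physical NIC enslaved to br0)
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes

# Repeat with ifcfg-br1 / ifcfg-eth1 for the data network; no IP address is
# required on the data bridge itself.
```

In the HP VSC libvirt XML, attach the first emulated NIC to br0 (management) and the second to br1 (data).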
HP VSC Software Installation Procedure on VMware
Starting with VSP 3.0, the HP ESXi implementation will provide a new mode of operation that
enables leveraging the underlying ESXi standard Vswitch or distributed Vswitch. As a result,
multiple VMs on the same ESXi host will be able to communicate directly without bridging over
the HP VRS-VM. This brings a tradeoff between performance (traffic uses the underlying
Vswitch, VMware standard vSwitch or dvS) and flow controls inside the same port-group.
The HP implementation is based on VMware's networking paradigm. That is, when multiple
virtual NICs (VNICs) are put together on the same port-group they are able to communicate
with each other (in much the same way that multiple ports on the same VLAN are able to
exchange frames with each other).
When starting a VM, you choose the port-group in which to place the VNICs. Typically, VMs
are placed in the same port-group when they belong to the same subnet. However, there are
other reasons why VNICs might be put together on the same port-group. In any case,
communication is allowed in the same port-group.
The general user workflow for the standard Vswitch mode is the following:
1. Hypervisor installation
a. A Vswitch is defined with at least one port group.
b. The VRS-VM is installed on the hypervisor and the access VNIC is placed on the
standard Vswitch, on a special port-group configured in trunk mode (VLAN 4095). The
VRS-VM is configured at installation time in standard Vswitch mode.
2. Hypervisor usage
a. A new VM A is defined with one VNIC. The VNIC is put into one of the port-groups of
the standard Vswitch (your choice).
b. The VRS-VM receives an event and knows on which VLAN to receive that VM traffic on
its trunk port.
c. The whole instantiation process continues and the VRS-VM hands out the IP on that
specific VLAN.
d. The VNIC is able to communicate through the VRS-VM in the standard HP fashion, and
is also able to communicate with any other VNIC on the same port-group.
Installing HP VSC on ESXI Using OVA
Note: It is presumed that vCenter and ESXi are correctly installed.
1. Enable SSH on the ESX hypervisor. You can do this over the ESX screen or from vCenter.
2. Disable the firewall on the ESXi. Run the following command on the ESXi host that will run
the HP VSC:
esxcli network firewall set --enabled false
3. Select the host.
4. Select Edit > Deploy OVF template.
5. In the Deploy OVF Template window that appears, click Browse, select the source
location of the OVF file, and then click Next.
6. Specify a name and location for the deployed template, and then click Next.
7. Select a resource pool within which to deploy the template, and then click Next.
8. Select the format in which to store the virtual disks, and then click Next.
9. Map the networks used in this OVF template to networks in your inventory (select the port
groups), and then click Next.
10. Enter the HP VSC configuration information.
28HP VSC Software Installation
Note: You must enter the control IP addresses of the HP VSC peers in the BGP
peer fields.
Then click Next. A summary is displayed.
11. To close the summary, click Finish.
12. Before powering on the VM, add a serial port. Connect it via Network, set the Network
Backing to Server, and set the Port URI to telnet://:2500 (this can be any port number).
13. Connect to the serial console of the TIMOS VM using a terminal application, such as PuTTY.
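For example, the serial console configured in step 12 can be reached with a telnet client; the host name below is a placeholder for your ESXi host, and the port matches the Port URI you chose.

```
# Port 2500 matches the Port URI from step 12; substitute your ESXi host address
telnet esxi-host.example.com 2500
```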
14. (Optional) Select one of the three boot options:
• HP VSC
• Update HP VSC configuration and reboot
• Update HP VSC configuration
If you do not make a choice within 20 seconds, the first option—HP VSC—is automatically
selected and the VM boots from the vApp properties that you gave initially.
To boot up the VSC VM implementing the new information, use the second option—Update
HP VSC configuration and reboot.
To make changes inside the VM before booting SROS, use the third option—Update HP
VSC configuration. Instructions for making such changes are beyond the scope of this
document. Do not make such changes unless you know what you are doing.
HP VSC Basic Configuration
This section describes the initial configuration steps necessary to get the HP VSC up and
running and able to communicate with other elements in the VSP.
The procedures described include:
• HP VSC Boot Options File Configuration
• HP VSC System and Protocol Configuration
HP VSC Boot Options File Configuration
The HP VSC uses a Boot Options File (BOF) named bof.cfg that is read on system boot and is
used for some basic, low-level system configuration needed to successfully boot the HP VSC.
Table 5 lists the configuration parameters that are set in the BOF and needed for proper operation.
The following procedure updates the BOF and saves the updated bof.cfg file on the "user" disk
CF1:.
Note: The “image” disk CF2: has a default bof.cfg file, but any user modified bof.cfg
should be stored on the “user” disk CF1.
This installation procedure assumes:
1. The HP VSC software has been successfully installed.
2. The user is at the HP VSC console and waiting to log in for the first time.
The information that is configured in the BOF is the following:
• The IP address of the Management IP interface (192.168.1.254 in the example below).
• As appropriate, the IP addresses of the primary, secondary and tertiary DNS servers
(10.0.0.1, 10.0.0.2 and 10.0.0.3 respectively in the example below).
• The DNS domain of the HP VSC (example.com in the example below).
• The IP next hop of any static routes that are to be reached via the Management IP interface
(one static route to subnet 192.168.100.0/24 via next hop 192.168.1.1 in the example
below).
• [Optional] Index persistence file for SNMP managed HP VSCs
1. Log in to the HP VSC console as administrator
At the "login as:" prompt, use the default administrator username ("admin") and password
("admin") to log in to the system. You will be at the root CLI context:
*A:NSC-vPE-1#
2. Assign the Management IP address
To navigate to the Boot Options File context, enter “bof<Enter>” and the prompt will
indicate a change to the bof context:
*A:VSC-1>bof#
The management IP address is configured using the address command, which has the following
syntax:
address ip-prefix/ip-prefix-length
no address
where keywords are in bold, parameters are in italics and optional elements are enclosed
in square brackets ("[ ]"). Typically, the no form of a command removes the configured
parameter or returns it to its default value.
In the input below, the management IP is set to 192.168.1.254/24:
*A:VSC-1>bof# address 192.168.1.254/24
3. Configure DNS servers
The HP VSC allows for up to three DNS servers to be defined that will be contacted in
order: primary, secondary and tertiary. If one DNS is not reachable, the next DNS is
contacted.
The DNS servers are configured with the following command syntax:
primary-dns ip-address
no primary-dns
secondary-dns ip-address
no secondary-dns
tertiary-dns ip-address
no tertiary-dns
The primary, secondary and tertiary DNS servers are configured to 10.0.0.1, 10.0.0.2 and
10.0.0.3, respectively, with the following commands:
*A:VSC-1>bof# primary-dns 10.0.0.1
*A:VSC-1>bof# secondary-dns 10.0.0.2
*A:VSC-1>bof# tertiary-dns 10.0.0.3
4. Configure the DNS domain
The HP VSC DNS domain is set with the dns-domain command which has the following
syntax:
dns-domain dns-name
no dns-domain
The DNS domain is set to example.com with the command below:
*A:VSC-1>bof# dns-domain example.com
5. Configure static routes for the management IP network
A static route is configured for the management IP interface with the static-route command,
which has the following syntax:
static-route ip-prefix/prefix-length next-hop ip-address
no static-route ip-prefix/prefix-length
For example, the static route to subnet 192.168.100.0/24 via next hop 192.168.1.1 is
configured with the following command:
*A:VSC-1>bof# static-route 192.168.100.0/24 next-hop 192.168.1.1
6. [Optional] Enable index persistence for SNMP managed HP VSCs
If the HP VSC is going to be managed using SNMP, it is recommended that index
persistence be enabled using the persist command to ensure that MIB objects, like IP
interfaces, retain their index values across a reboot. The .ndx file that saves all of the
indexes in use is saved on the same device as the configuration file whenever a save
command is issued to save the HP VSC configuration.
The persist command has the following syntax:
persist {on | off}
To enable index persistence, the command is:
*A:VSC-1>bof# persist on
7. Save the configuration to cf1:
The BOF file is normally saved in the same directory as the image file for DCNOS, but for
the HP VSC, it is recommended that the bof.cfg file be saved to the cf1: "user" emulated
disk.
Note: The “image” disk CF2: has a default bof.cfg file, but any user modified bof.cfg
should be stored on the “user” disk CF1:.
The command to save the BOF to cf1: is:
*A:VSC-1>bof# save cf1:
8. Reboot the HP VSC to load the saved boot options
After saving the BOF, the system needs to be rebooted because the bof.cfg file is only read
at system initialization.
To reboot the HP VSC, issue the following commands:
*A:VSC-1>bof# exit
*A:NSC-vPE-1# admin reboot
WARNING: Configuration and/or Boot options may have changed since the last
save.
Are you sure you want to reboot (y/n)? y
The exit command returns the CLI to the root context so that the admin reboot command
can be issued to reboot the system. Answer in the affirmative to reboot.
After rebooting, the IP management interface for the HP VSC is configured along with
DNS.
HP VSC System and Protocol Configuration
In addition to the (“out-of-band”) Management IP interface, the HP VSC has an (“in-band”)
network interface for the data center’s data network.
In order to utilize the in-band network interface and provide connectivity with the other VSP
elements, the HP VSC requires some additional system-level configuration as well as in-band
data network configuration.
The system-level configuration required includes:
• Assigning a system name.
• Defining NTP servers to be used by the system.
• Configuring the system time zone.
• Configuring the XMPP client and OpenFlow in the HP VSC.
Configuring the IP interfaces and network protocols:
• Creating the in-band IP interface and assigning an IP address (interface name control and
IP address 10.9.0.7/24 in the example configuration below).
• Creating a system loopback IP interface for use by network protocols (interface name
system and IP address 10.0.0.7/32 in the example configuration below).
• Configuring network protocols, for example, OSPF, IS-IS and BGP.
The sections below describe the required configuration by highlighting the relevant commands
of an HP VSC configuration file. The HP VSC configuration file contains the CLI commands,
formatted here for enhanced readability.
After configuration, use the following command to save the configuration:
vsc# admin save
System-level HP VSC Configuration
Information on the XMPP server and OpenFlow commands on the VRS can be found in the
current HP Distributed Cloud Networking User Guide.
System Name
The config>system>name command is used to configure the system name. In the excerpt below,
the system name is set to NSC-vPE-1.
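A minimal sketch of such an excerpt, in the CLI style used elsewhere in this guide:

```
exit all
configure
    system
        name NSC-vPE-1
    exit
exit all
```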
Having the different VSP elements time-synchronized with NTP is essential to ensure that the
messages passed between the VSD, HP VSC and VRS elements are appropriately timestamped
for proper processing.
One or more (and preferably three) NTP servers should be defined, as in the example
below (10.0.0.123, 10.10.10.18 and 10.200.223.10).
The time zone is set with the zone command (PST), and the daylight saving time zone is set
with the dst-zone command (PDT). The dst-zone command automatically completes the start
and end dates and times, but these can be edited if needed.
exit all
configure
system
time
ntp
server 10.0.0.123
server 10.10.10.18
server 10.200.223.10
no shutdown
exit
sntp
shutdown
exit
dst-zone PDT
start second sunday march 02:00
end first sunday november 02:00
exit
zone PST
exit
exit all
XMPP and OpenFlow
Specify the XMPP server (xmpp.example.com), username (NSC-vPE-1) and password
(password). The ejabberd server is configured to auto-create the user on the server with the
supplied username and password.
For OpenFlow, optional subnets can be specified with the auto-peer command, which restricts
inbound OpenFlow connections to those from that subnet. If no auto-peer stanza is configured,
OpenFlow sessions will be accepted on all interfaces, both in-band and out-of-band.
The excerpt below shows how to configure the in-band (name control with IP address
10.9.0.7) and loopback (name system with IP address 10.0.0.7) IP interfaces. The loopback
IP is needed if any IGP or BGP routing protocols will be configured. If using BGP, an
autonomous system needs to be configured (65000). Optionally, static routes can be
configured as well for the in-band routing table.
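A minimal sketch of such interface configuration, in the CLI style used in this guide. The port binding (1/1/1) is an assumption and may differ in your deployment; the addresses match the example values above.

```
exit all
configure
    router
        # In-band control interface (example address from this guide)
        interface "control"
            address 10.9.0.7/24
            port 1/1/1
        exit
        # Loopback interface used by routing protocols
        interface "system"
            address 10.0.0.7/32
        exit
        # Required if BGP is used
        autonomous-system 65000
    exit
exit all
```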
BGP needs to be configured if multiple HP VSCs will be operating as a federation. The
following is just a sample configuration and should be adapted to the existing BGP
infrastructure (for example, the use of route reflectors; the BGP group, neighbor IP
addresses and family types should be specified to match your design).
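A hedged sketch of such a BGP configuration: the group name, neighbor address and address family below are placeholders to adapt to your design, not values mandated by this guide.

```
exit all
configure
    router
        bgp
            group "federation"
                type internal
                family evpn
                # Placeholder: system address of the peer HP VSC
                neighbor 10.0.0.8
                exit
            exit
            no shutdown
        exit
    exit
exit all
```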
Post-install Security Tasks
After installing the HP VSC software, there are a number of tasks that should be performed to
secure the system. Most of these tasks are obvious, but they are worth mentioning as a reminder.
• Change HP VSC “admin” password
By default, the HP VSC administrator username and password are both "admin". Default
credentials for most systems and software are not difficult to find, making them an easy
security exploit.
• Centralized HP VSC authentication and authorization
The HP VSC software is based on DCNOS and inherits many of the platform and security
features supported in DCNOS. Rather than relying on users defined locally on each HP VSC,
RADIUS and TACACS+ can be used to centralize authentication and authorization for
HP VSC administrative users.
Post-install Security Tasks
• Secure Unused TCP/UDP Ports
After installing and configuring the HP VSC, the user should take all steps necessary to
ensure the network security of the HP VSC system through the use of ACLs and/or firewalls
and by disabling any unneeded network services on the node.
Table 6 lists the required and optional UDP/TCP ports for particular services for inbound
connections to the HP VSC.
Table 7 lists required and optional UDP/TCP ports for particular services for outbound
connections from the HP VSC.
Optional ports are only required if the network service is in use on the HP VSC.
4 HP VRS and VRS-G Software Installation
This chapter provides installation instructions and the basic configuration for HP Virtual Routing
and Switching (VRS) and HP Virtual Routing and Switching Gateway (VRS-G).
Topics in this chapter include:
• VRS and VRS-G Installation Overview
• Preparing the Hypervisor
• Installing the VRS or VRS-G Software
• Configuring and Running VRS or VRS-G
VRS and VRS-G Installation Overview
VRS—The VRS component is a module that serves as a virtual endpoint for network services.
Through VRS, changes in the compute environment are immediately detected, triggering
instantaneous policy-based responses in network connectivity to ensure that application needs
are met.
VRS is an enhanced Open vSwitch (OVS) implementation that constitutes the network
forwarding plane. It encapsulates and de-encapsulates user traffic, enforcing L2-L4 traffic
policies as defined by the VSD. The VRS includes a Virtual Agent (VA) that tracks VM creation,
migration and deletion events in order to dynamically adjust network connectivity.
VRS‐G—The VRS-G component is a software gateway between the HP DCN networks and
legacy VLAN-based networks. It can be installed either on a bare metal server or within a VM.
For optimum performance, bare metal is recommended.
Operating System and Hardware Requirements—See the Release Notes.
Installation Procedure—Installation is essentially a three (or four) phase operation:
1. Preparing the Hypervisor.
2. Installing the VRS or VRS-G Software: The procedures are slightly different for the two
components and for each supported operating system, therefore each procedure is given
separately.
3. If you need MPLS over GRE: Installing the VRS Kernel Module for MPLS over GRE.
4. Configuring and Running VRS or VRS-G.
Preparing the Hypervisor
Before installation of VRS/VRS-G, the following requirements must be met for all operating
systems:
• The Linux server must be a clean installation with a minimum of configuration and
applications.
• An IP address must already have been assigned to the server.
• DNS must have already been configured and must be operational.
• At least two NTP servers must have been configured and NTP must have been synchronized
with them.
• There must be root access to the console of the Linux server.
• You must have the ability to download and install software from remote archives, or have a
local repository mirror for the required repositories.
• The VRS software files must have been copied to the server.
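The prerequisites above can be spot-checked from the Linux console before installing. This is only a sketch; the exact commands and tools vary by distribution.

```
ip addr show           # confirm the assigned IP address
cat /etc/resolv.conf   # confirm DNS configuration
ntpq -p                # confirm NTP peers are reachable and synchronized
```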
Installing the VRS or VRS-G Software
This section contains:
• VRS on RHEL
• VRS on Ubuntu 12.04 LTS with Ubuntu 12.04 Cloud Packages
• VRS-G on RHEL or Ubuntu 12.04
• Installing the VRS Kernel Module for MPLS over GRE
• Installing VRS Kernel Module On RHEL
• Installing VRS Kernel Module On Ubuntu 12.04
Note: For the currently supported software versions and hardware, consult the release
notes for the current version of HP DCN.
The HP VRS .tar.gz file contains the additional HP-specific packages. Install them following the
process below.
8. Edit /etc/default/openvswitch to achieve the desired VRS configuration. The comments
in the file are self-explanatory. Add the VSC controller's IP addresses:
vi /etc/default/openvswitch
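For illustration, the relevant lines in /etc/default/openvswitch look like the following; the variable names match those used later in Configuring and Running VRS or VRS-G, and the addresses are placeholders for your VSC control IPs.

```shell
# IP addresses of the active and standby HP VSC controllers (placeholders)
ACTIVE_CONTROLLER=1.2.3.4
STANDBY_CONTROLLER=1.2.4.5
```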
9. Restart the service to pick up the changes in /etc/default/openvswitch:
# service hp-openvswitch-switch restart
Stopping hp system monitor: * Killing hp-SysMon (21054)
Stopping hp rpc server: * Killing hp-rpc (21083)
Stopping hp monitor: * Killing hpMon (21086)
Stopping openvswitch: * ovs-brcompatd is not running
* Killing ovs-vswitchd (21038)
* Killing ovsdb-server (21019)
* Removing openvswitch module
* Inserting openvswitch module
* Starting ovsdb-server
* Configuring Open vSwitch system IDs
* Configuring Open vSwitch personality
* Starting ovs-vswitchd
Starting hp system monitor: * Starting hp-SysMon
Starting hp rpc server: * Starting hp-rpc
Starting hp monitor: * Starting hpMon
VRS-G on RHEL or Ubuntu 12.04
1. Install VRS following the instructions in either of the following:
• VRS on RHEL.
• VRS on Ubuntu 12.04 LTS with Ubuntu 12.04 Cloud Packages
2. Edit /etc/default/openvswitch-switch by setting PERSONALITY=vrs-g.
3. Restart the VRS service:
service openvswitch restart
Installing the VRS Kernel Module for MPLS over GRE
This section contains the following subsections:
• Installing VRS Kernel Module On RHEL
• Installing VRS Kernel Module On Ubuntu 12.04
Installing VRS Kernel Module On RHEL
1. Install VRS following the instructions in VRS on RHEL.
6. Do a yum localinstall of the hp-openvswitch-dkms package.
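For example (the exact rpm filename depends on your VRS release and is shown here as a wildcard):

```
yum localinstall hp-openvswitch-dkms-*.rpm
```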
7. Verify that the VRS processes restarted correctly:
# service openvswitch restart
Stopping hp monitor:Killing hpMon (6912) [ OK ]
Stopping vm-monitor:Killing vm-monitor (6926) [ OK ]
Stopping openvswitch: Killing ovs-brcompatd (6903) [ OK ]
Killing ovs-vswitchd (6890) [ OK ]
Killing ovsdb-server (6877) [ OK ]
Removing brcompat module [ OK ]
Removing openvswitch module [ OK ]
Starting openvswitch:Inserting openvswitch module [ OK ]
Inserting brcompat module [ OK ]
Starting ovsdb-server [ OK ]
Configuring Open vSwitch system IDs [ OK ]
Configuring Open vSwitch personality [ OK ]
Starting ovs-vswitchd [ OK ]
Starting ovs-brcompatd [ OK ]
Starting hp monitor:Starting hpMon [ OK ]
Starting vm-monitor:Starting vm-monitor [ OK ]
Installing VRS Kernel Module On Ubuntu 12.04
1. Install VRS following the instructions in VRS on Ubuntu 12.04 LTS with Ubuntu 12.04 Cloud
Packages
2. Install dependencies for DKMS:
apt-get install dkms linux-headers-`uname -r`
3. Reboot to pick up correct kernel:
reboot
4. Install the hp-openvswitch-datapath-dkms package using the dpkg -i command.
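For example (the exact package filename depends on your VRS release and is shown here as a wildcard):

```
dpkg -i hp-openvswitch-datapath-dkms_*.deb
```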
5. Verify that the VRS processes restart correctly:
# service openvswitch restart
Configuring and Running VRS or VRS-G
The HP startup script that is provided takes care of starting all the components as well as the
basic configuration of VRS. This is mainly the creation of a bridge into VRS and the assignment
of an OpenFlow controller to that bridge. The configuration is loaded upon startup of the
openvswitch script according to a configuration file.
1. Edit the configuration file at /etc/default/openvswitch by specifying the IP addresses of
the active and standby VSC:
ACTIVE_CONTROLLER=1.2.3.4
STANDBY_CONTROLLER=1.2.4.5
2. Restart the VRS or VRS-G.
3. Verify that the VRS connected to the VSC successfully:
ovs-vsctl show
To customize, use scripts that you run after bootup.
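For step 3 above, a successfully connected VRS lists its controllers with is_connected: true in the ovs-vsctl output. The abbreviated excerpt below is illustrative only; the bridge and controller names, target addresses and roles will reflect your own configuration.

```
Bridge "alubr0"
    Controller "ctrl1"
        target: "tcp:1.2.3.4:6633"
        is_connected: true
    Controller "ctrl2"
        target: "tcp:1.2.4.5:6633"
        is_connected: true
```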
5 VMware VRS VM Deployment
Topics in this chapter include:
• Introduction
• Prerequisites
• Creating the dVSwitch
• Verifying the Creation of the dVSwitch
• vSphere vSwitch Configurations
• Deployment of dVRS
• Information Needed
• Verifying Deployment
Introduction
This chapter describes the integration of the Virtual Routing and Switching (VRS) VM with
VMware that is required for all VMware deployments with VMware vSphere Hypervisor (ESXi).
The integration requires creating the dVSwitch, configuring the vSphere vSwitch, and deploying
the dVRS.
Prerequisites
Note: Workflow and VSD must be NTP synced. Lack of synchronization could lead to
failure of operations on VSD.
Procure the following packages:
• CloudMgmt-vmware
• VRS OVF Templates for VMware
For Multicast to work on ESXi:
• Before installation, a new port-group (for example, Multicast-source) should be created on
the vSwitch which connects to the external network (SR-1) and promiscuous mode should be
allowed by default.
After the XenServer comes up, in addition to the usual verification such as interface status,
management network connectivity etc., perform the following verification checks:
1. Ensure that the bridge corresponding to HPManagedNetwork does not have any PIF
attached to it.
[root@acs-ovs-3 ~]# ovs-vsctl show
016cccd2-9b63-46e1-85d1-f27eb9cf5e90
~Snip~
The HP startup script takes care of starting all the components as well as the basic configuration
of VRS, which is primarily the assignment of OpenFlow controller(s) to the VRS bridge.
One mandatory basic configuration task is manual—specifying active and standby controllers.
There are two methods of doing this:
• Editing the configuration file loaded by the OpenvSwitch script when it starts
• Running the CLI command ovs-vsctl add-controller
The preferred method is the first, i.e., editing the configuration file. Specify the controllers by
means of IP addresses in dotted decimal notation (see Specifying the Active and Standby HP
VSCs).
7 Support and Other Resources
To learn how to contact HP, obtain software updates, submit feedback on documentation, and
locate links to HP SDN websites and other related HP products, see the following topics.
Gather information before contacting an authorized support
If you need to contact an authorized HP support representative, be sure to have the following
information available:
• If you have a Care Pack or other support contract, either your Service Agreement Identifier
(SAID) or other proof of purchase of support for the software
• The HP Distributed Cloud Networking version and installed licenses
• The HP SDN application product names, versions, and installed licenses
• If you use a virtual machine for the operating system, the hypervisor virtualization platform
and version
• Messages generated by the software
• Other HP or third-party software in use
How to contact HP
• See the Contact HP Worldwide website to obtain contact information for any country:
• See the contact information provided on the HP Support Center website:
http://www8.hp.com/us/en/support.html
• In the United States, call +1 800 334 5144 to contact HP by telephone. This service is
available 24 hours a day, 7 days a week. For continuous quality improvement,
conversations might be recorded or monitored.
Software technical support and software updates
HP provides 90 days of limited technical support with the purchase of a base license for the
HP Distributed Cloud Networking software.
Some HP SDN applications have a trial period, during which limited technical support is
provided for 90 days. Other HP SDN applications do not have a trial period and you must
purchase a base license for the application to receive 90 days of limited support. Support for
the controller and each HP SDN application is purchased separately, but you must have a base
license for the controller to receive support for your licensed HP SDN application.
• For information about licenses for the controller, see the HP VAN SDN Controller
Administrator Guide.
• For information about licenses for HP SDN applications, see the information about
licensing in the administrator guide for the application.
Care Packs
To supplement the technical support provided with the purchase of a license, HP offers a wide
variety of Care Packs that provide full technical support at 9x5 or 24x7 availability with annual
or multi-year options. To purchase a Care Pack for an HP SDN application, you must have a
license for that application and a license for the controller.
For a list of Care Packs available for the controller and HP SDN applications, see:
http://www.hp.com/go/cpc
Enter the SDN license product number to see a list of Care Packs offered. Once registered, you
receive a service contract in the mail containing the customer service phone number and your
Service Agreement Identifier (SAID). You need the SAID when you phone for technical support.
To obtain full technical support prior to receiving the service contract in the mail, please call
Technical Support with the proof of purchase of the Care Pack.
Obtaining software updates
The software for HP Distributed Cloud Networking can be downloaded from the HP
Networking support lookup tool:
http://www8.hp.com/us/en/support.html
This website also provides links for manuals, electronic case submission, and other support
functions.
Warranty
For the software end user license agreement and warranty information for HP Networking
products, see http://www8.hp.com/us/en/drivers.html
Related information
Documentation
• HP SDN information library
http://www.hp.com/go/sdn/infolib
Product websites
• HP Software-Defined Networking website:
• Primary website:
http://www.hp.com/go/sdn
• Development center:
http://www.sdndevcenter.hp.com
• User community forum:
http://www.hp.com/networking/sdnforum
• HP Open Source Download Site:
http://www.hp.com/software/opensource
• HP Networking services website:
http://www.hp.com/networking/services
8 Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the
documentation, send any errors, suggestions, or comments to Documentation Feedback
(docsfeedback@hp.com). Include the document title and part number, version number, or the
URL when submitting your feedback.
9 Appendix: Emulated Ethernet NIC Notes
A hypervisor hosting a VSC VM is expected to have two bridge interfaces used to attach the
VSC management and datapath NICs. This appendix shows an example configuration for the
bridge interfaces and associated NICs.
In the procedure and sample output below, eth0 is associated with br0, and eth1 is associated
with br1. The Ethernet to bridge mappings can be customized according to your hardware and
network configuration. If the device associations are different, make appropriate adjustments to
the procedure.
The information needed for the installation is:
• The interface names for the management and datapath interfaces on the hypervisor
• The IP addresses and network information (including default route) for the management
and datapath interfaces on the hypervisor
The files that will be modified are:
• /etc/sysconfig/network-scripts/ifcfg-eth0
• /etc/sysconfig/network-scripts/ifcfg-eth1
• /etc/sysconfig/network-scripts/ifcfg-br0
• /etc/sysconfig/network-scripts/ifcfg-br1
The procedures are:
• Modify the eth0 configuration
• Modify the eth1 configuration
• Edit (or create) the br0 configuration
• Edit (or create) the br1 configuration
Modify the eth0 configuration
Edit the file /etc/sysconfig/network-scripts/ifcfg-eth0 to match the information below.
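A typical configuration enslaves eth0 to the br0 bridge. The contents below are illustrative RHEL ifcfg values under that assumption; adjust the device names and options to your hardware and network configuration.

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative)
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
# Attach eth0 to the management bridge; no IP is set on the NIC itself
BRIDGE=br0
NM_CONTROLLED=no
```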