Enterprise products and services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett
Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or
copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's
standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard
Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise
website.
Acknowledgments
Intel®, Itanium®, Pentium®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in the
United States and other countries.
Microsoft® and Windows® are trademarks of the Microsoft group of companies.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java and Oracle are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
IRF link redundancy ································································································································· 11
IRF physical port restrictions and cabling requirements ·········································································· 11
IRF port binding restrictions ····················································································································· 12
FIPS mode requirement ··························································································································· 13
Other configuration guidelines ················································································································· 13
Setup and configuration task list ······················································································································ 14
Planning the IRF fabric setup ··························································································································· 15
Assigning a member ID to each IRF member switch ······················································································· 15
Specifying a priority for each member switch ·································································································· 16
Connecting physical IRF ports ························································································································· 16
Binding physical ports to IRF ports ·················································································································· 17
Accessing the IRF fabric ·································································································································· 18
Accessing the CLI of the master switch ··································································································· 19
Accessing the CLI of a subordinate switch ······························································································ 19
Assigning an IRF domain ID to the IRF fabric ·································································································· 19
Configuring a member switch description ········································································································ 20
Configuring IRF link load sharing mode ··········································································································· 20
Configuring the global load sharing mode ································································································ 20
Configuring a port-specific load sharing mode ························································································· 20
Configuring IRF bridge MAC persistence ········································································································ 21
Enabling software auto-update for system software image synchronization ··················································· 22
Setting the IRF link down report delay ············································································································· 22
Configuring MAD ·············································································································································· 23
Remote support ········································································································································ 40
Index ············································································································· 42
IRF overview
The HPE Intelligent Resilient Framework (IRF) technology creates a large IRF fabric from multiple
switches to provide data center class availability and scalability. IRF virtualization technology offers the
combined processing power, unified management, and uninterrupted maintenance of multiple
switches.
This book describes IRF concepts and guides you through the IRF setup procedure.
Hardware compatibility
All HPE 5800 and 5820X switches support IRF.
An IRF fabric can contain both HPE 5800 and 5820X switches.
IRF benefits
IRF delivers the following benefits:
• Simplified topology and easy management—An IRF fabric appears as one node and is
accessible at a single IP address on the network. You can use this IP address to log in at any
member device to manage all the members of the IRF fabric. In addition, you do not need to run
the spanning tree feature among the IRF members.
• 1:N redundancy—In an IRF fabric, one member works as the master to manage and control
the entire IRF fabric, and all the other members process services while backing up the master.
When the master fails, all the other member devices elect a new master from among them to
take over without interrupting services.
• IRF link aggregation—You can assign several physical links between neighboring members to
their IRF ports to create a load-balanced aggregate IRF connection with redundancy.
• Multiple-chassis link aggregation—You can use the Ethernet link aggregation feature to
aggregate the physical links between the IRF fabric and its upstream or downstream devices
across the IRF members.
• Network scalability and resiliency—Processing capacity of an IRF fabric equals the total
processing capacities of all the members. You can increase ports, network bandwidth, and
processing capacity of an IRF fabric simply by adding member devices without changing the
network topology.
Application scenario
Figure 1 shows an IRF fabric that has two switches, which appear as a single node to the upper and
lower layer devices.
Figure 1 IRF application scenario
Basic concepts
This section describes the basic concepts that you might encounter when working with IRF.
IRF member roles
IRF uses two member roles: master and slave (called "subordinate" throughout the documentation).
When switches form an IRF fabric, they elect a master to manage the IRF fabric, and all other
switches back up the master. When the master switch fails, the other switches automatically elect a
new master from among them to take over. For more information about master election, see "Master
election."
IRF member ID
An IRF fabric uses member IDs to uniquely identify and manage its members. This member ID
information is included as the first part of interface numbers and file paths to uniquely identify
interfaces and files in an IRF fabric. For more information about interface and file path naming, see
"Interface naming conventions" and "File system naming conventions."
If two switches have the same IRF member ID, they cannot form an IRF fabric.
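For example, you can change a member ID with the irf member renumber command. The member IDs shown here are illustrative, and the new ID takes effect only after a reboot; see "Assigning a member ID to each IRF member switch" for the complete procedure:
<Sysname> system-view
[Sysname] irf member 1 renumber 2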
IRF port
An IRF port is a logical interface for the connection between IRF member devices. Every
IRF-capable device supports two IRF ports. The IRF ports are named IRF-port n/1 and IRF-port n/2,
where n is the member ID of the switch. The two IRF ports are referred to as "IRF-port 1" and
"IRF-port 2" in this book for simplicity.
To use an IRF port, you must bind at least one physical port to it. The physical ports assigned to an
IRF port automatically form an aggregate IRF link. An IRF port goes down only if all its physical IRF
ports are down.
For two neighboring devices, their IRF physical links must be bound to IRF-port 1 on one device and
to IRF-port 2 on the other.
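As a sketch of the binding, assume member switch 1 uses port Ten-GigabitEthernet 1/0/25 for its IRF link (the port number is illustrative, and the full procedure, including activating the configuration, appears later in this book):
<Sysname> system-view
[Sysname] interface ten-gigabitethernet 1/0/25
[Sysname-Ten-GigabitEthernet1/0/25] shutdown
[Sysname-Ten-GigabitEthernet1/0/25] quit
[Sysname] irf-port 1/1
[Sysname-irf-port1/1] port group interface ten-gigabitethernet 1/0/25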
Physical IRF port
Physical IRF ports connect IRF member devices and must be bound to an IRF port. They forward
IRF protocol packets between IRF member devices and data packets that must travel across IRF
member devices.
For more information about physical ports that can be used for IRF links, see "General restrictions
and configuration guidelines."
IRF domain ID
One IRF fabric forms one IRF domain. IRF uses IRF domain IDs to uniquely identify IRF fabrics and
prevent IRF fabrics from interfering with one another.
As shown in Figure 2, Switch A and Switch B form IRF fabric 1, and Switch C and Switch D form IRF
fabric 2. The fabrics have LACP MAD detection links between them. When a member switch
receives an extended LACPDU for MAD, it checks the domain ID to see whether the packet is from
the local IRF fabric. Then, the switch can handle the packet correctly.
Figure 2 A network that contains two IRF domains
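For example, to assign domain ID 1 to an IRF fabric (the domain ID value is illustrative; see "Assigning an IRF domain ID to the IRF fabric" for details):
<Sysname> system-view
[Sysname] irf domain 1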
IRF split
IRF split occurs when an IRF fabric breaks up into two or more IRF fabrics because of IRF link
failures, as shown in Figure 3. The split IRF fabrics operate with the same IP address and cause
routing and forwarding problems on the network. To quickly detect a multi-active collision, configure
at least one MAD mechanism (see "IRF multi-active detection").
Figure 3 IRF split
IRF merge
IRF merge occurs when two split IRF fabrics reunite or when two independent IRF fabrics are united,
as shown in Figure 4.
Figure 4 IRF merge
Member priority
Member priority determines the likelihood that a member device is elected the master. A member
with higher priority is more likely to be elected the master.
The default member priority is 1. You can change the member priority of a member device to affect
the master election result.
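For example, to set the priority of member switch 1 to 32 so that it is preferred in master election (the member ID and priority value are illustrative; see "Specifying a priority for each member switch" for the valid range):
<Sysname> system-view
[Sysname] irf member 1 priority 32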
Interface naming conventions
An interface is named in the format of chassis-id/slot-number/port-index, where:
• chassis-id—IRF member ID of the switch. This argument defaults to 1.
• slot-number—Represents the slot number of the interface card. This argument takes 0 for the
fixed ports on the front panel. If the switch has one expansion interface slot, this argument takes
1 for the slot. If the switch has two expansion interface slots, this argument takes 1 and 2 for the
slots from left to right.
• port-index—Port index depends on the number of ports available on the switch. To identify the
index of a port, look at its port index mark on the chassis.
For example, on the standalone switch Sysname, GigabitEthernet 1/0/1 represents the first fixed
port on the front panel. Set its link type to trunk, as follows:
<Sysname> system-view
[Sysname] interface gigabitethernet 1/0/1
[Sysname-GigabitEthernet1/0/1] port link-type trunk
For another example, on the IRF fabric Master, GigabitEthernet 3/0/1 represents the first fixed port
on the front panel of member switch 3. Set its link type to trunk, as follows:
<Master> system-view
[Master] interface gigabitethernet 3/0/1
[Master-GigabitEthernet3/0/1] port link-type trunk
File system naming conventions
On a standalone switch, you can use the name of a storage device to access its file system. For more
information about storage device naming conventions, see Fundamentals Configuration Guide.
On an IRF fabric, you can use the name of a storage device to access the file system of the master. To
access the file system of any other member switch, use the name in the
slotmember-ID#storage-device-name format. For example:
To create and display the test folder under the root directory of the Flash on the master switch:
<Master> mkdir test
...
%Created dir flash:/test.
<Master> dir
Directory of flash:/
0 -rw- 10105088 Apr 26 2000 13:44:57 test.bin
1 -rw- 2445 Apr 26 2000 15:18:19 config.cfg
2 drw- - Jul 14 2008 15:20:35 test
515712 KB total (505812 KB free)
To create and access the test folder under the root directory of the Flash on member switch 3:
<Master> mkdir slot3#flash:/test
%Created dir slot3#flash:/test.
<Master> cd slot3#flash:/test
<Master> pwd
slot3#flash:/test
Or:
<Master> cd slot3#flash:/
<Master> mkdir test
%Created dir slot3#flash:/test.
To copy the file test.bin on the master to the root directory of the Flash on member switch 3:
# Display the current working path. In this example, the current working path is the root directory of
the Flash on member switch 3.
<Master> pwd
slot3#flash:
# Change the current working path to the root directory of the Flash on the master switch:
<Master> cd flash:/
<Master> pwd
flash:
# Copy the file to member switch 3.
<Master> copy test.bin slot3#flash:/
Copy flash:/test.bin to slot3#flash:/test.bin?[Y/N]:y
%Copy file flash:/test.bin to slot3#flash:/test.bin...Done.
Configuration synchronization mechanism
IRF uses a strict running-configuration synchronization mechanism so that all chassis in an IRF fabric
can work as a single node, and, after the master fails, the other members can continue to operate normally.
In an IRF fabric, all chassis get and run the running configuration of the master. Any configuration
you have made is propagated to all members.
When you execute the save [ safely ] [ backup | main ] [ force ] command or the save file-url all
command, the system saves the running configuration, as follows:
• If the configuration auto-update function (the slave auto-update config command) is enabled, the system
saves the configuration as the startup configuration on all member switches for the next startup.
• If the configuration auto-update function is disabled, the system saves the configuration as the startup
configuration on the master for the next startup.
By default, configuration auto-update is enabled.
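As a hedged sketch (see the configuration chapters for the authoritative procedure), you can disable or re-enable configuration auto-update in system view:
<Sysname> system-view
[Sysname] undo slave auto-update config
[Sysname] slave auto-update config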
For more information about configuration management, see Fundamentals Configuration Guide.
Master election
Master election is held each time the IRF fabric topology changes, for example, when the IRF fabric
is established, a new member device is plugged in, the master device fails or is removed, the IRF
fabric splits, or IRF fabrics merge.
Master election uses the following rules in descending order:
1. Current master, even if a new member has higher priority.
When an IRF fabric is being formed, all member switches consider themselves the master, and
this rule is skipped.
2. Member with higher priority.
3. Member with the longest system uptime.
4. Member with the lowest bridge MAC address.
The IRF fabric is formed on election of the master.
During an IRF merge, the switches of the IRF fabric that fails the master election must reboot to
rejoin the IRF fabric that wins the election.
After a master election, all subordinate switches reboot with the configuration on the master. Their
original configuration, even if it has been saved, does not take effect.
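To verify the election result, you can display the IRF fabric membership, which lists the role, member ID, and priority of each member (the output format varies by software release):
<Master> display irf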
IRF multi-active detection
An IRF link failure causes an IRF fabric to split into two IRF fabrics operating with the same Layer 3
configurations, including the same IP address. To avoid IP address collision and network problems,
IRF uses multi-active detection (MAD) mechanisms to detect the presence of multiple identical IRF
fabrics, handle collisions, and recover from faults.
Multi-active handling procedure
The multi-active handling procedure includes detection, collision handling, and failure recovery.
Detection
The MAD implementation of the switch detects active IRF fabrics with the same Layer 3 global
configuration by extending the LACP, BFD, or gratuitous ARP protocol.
These MAD mechanisms identify each IRF fabric with a domain ID and an active ID (the member ID
of the master). If multiple active IDs are detected in a domain, MAD determines that an IRF collision
or split has occurred.
You can use at least one of these mechanisms in an IRF fabric, depending on your network topology.
For a comparison of these MAD mechanisms, see "Configuring MAD."
Collision handling
When multiple identical active IRF fabrics are detected, MAD compares the member IDs of their
masters. If the master in one IRF fabric has the lowest member ID among all the masters, the
members in the fabric continue to operate in Active state and forward traffic. MAD sets all the other
IRF fabrics in Recovery (disabled) state and shuts down all their physical ports but the console ports,
physical IRF ports, and any ports you have specified with the mad exclude interface command.
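For example, to exclude a port from being shut down when the fabric transitions to Recovery state (the interface number is illustrative):
<Sysname> system-view
[Sysname] mad exclude interface gigabitethernet 1/0/5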
Failure recovery
To merge two split IRF fabrics, first repair the failed IRF link to remove the link failure.
If the IRF fabric in Recovery state fails before the failure is recovered, repair the failed IRF fabric and
the failed IRF link.
If the IRF fabric in Active state fails before the failure is recovered, first enable the IRF fabric in
Recovery state to take over the active IRF fabric and protect the services from being affected. After
that, recover the MAD failure.
LACP MAD
LACP MAD requires that every IRF member have a link with an intermediate device, and all these
links form a dynamic link aggregation group, as shown in Figure 5. In addition, the intermediate
device must be an HPE device that supports extended LACP for MAD.
The IRF member switches send extended LACPDUs with TLVs that convey the domain ID and the
active ID of the IRF fabric. The intermediate device transparently forwards the extended LACPDUs
received from one member switch to all the other member switches:
• If the domain IDs and the active IDs in the extended LACPDUs sent by all the member devices
are the same, the IRF fabric is integrated.
• If the extended LACPDUs convey the same domain ID but different active IDs, a split has
occurred. To handle this situation, LACP MAD sets the IRF fabric with the higher active ID in
Recovery state, and shuts down all its physical ports but the console port, IRF ports, and any
ports you have specified with the mad exclude interface command. The IRF fabric with the lower
active ID remains in Active state and forwards traffic.
Figure 5 LACP MAD application scenario
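The following sketch shows the IRF-side configuration for LACP MAD (the aggregation group number and member port are illustrative, and the intermediate device needs a matching dynamic aggregation; see "Configuring MAD" for the complete procedure):
<Master> system-view
[Master] interface bridge-aggregation 1
[Master-Bridge-Aggregation1] link-aggregation mode dynamic
[Master-Bridge-Aggregation1] mad enable
[Master-Bridge-Aggregation1] quit
[Master] interface gigabitethernet 1/0/1
[Master-GigabitEthernet1/0/1] port link-aggregation group 1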
BFD MAD
BFD MAD can work with or without intermediate devices. Figure 6 shows a typical BFD MAD
application scenario.
To use BFD MAD:
• Set up a dedicated BFD MAD link between each pair of IRF members or between each IRF
member and the intermediate device. Do not use the BFD MAD links for any other purpose.
• Assign the ports connected by BFD MAD links to the same VLAN. Create a VLAN interface for
the VLAN, and assign a MAD IP address to each member on the VLAN interface.
The MAD addresses identify the member switches and must belong to the same subnet.
With BFD MAD, the master tries to establish BFD sessions with other member switches by using its
MAD IP address as the source IP address:
• If the IRF fabric is integrated, only the MAD IP address of the master is effective, and the master
cannot establish a BFD session with any other member. If you execute the display bfd session
command, the state of the BFD sessions is Down.
• When the IRF fabric splits, the IP addresses of the masters in the split IRF fabrics take effect,
and the masters can establish a BFD session with each other. If you execute the display bfd
session command, the state of the BFD session between the two devices is Up.
Figure 6 BFD MAD application scenario
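The following sketch configures BFD MAD on a VLAN interface (the VLAN ID, port numbers, IP addresses, and member IDs are illustrative; see "Configuring MAD" for the complete procedure):
<Master> system-view
[Master] vlan 3
[Master-vlan3] port gigabitethernet 1/0/1 gigabitethernet 2/0/1
[Master-vlan3] quit
[Master] interface vlan-interface 3
[Master-Vlan-interface3] mad bfd enable
[Master-Vlan-interface3] mad ip address 192.168.2.1 24 member 1
[Master-Vlan-interface3] mad ip address 192.168.2.2 24 member 2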
ARP MAD
ARP MAD detects multi-active collisions by using extended gratuitous ARP packets that convey the
IRF domain ID and the active ID.
You can set up ARP MAD links between neighbor IRF member devices, or between each IRF
member device and an intermediate device (see Figure 7). If an intermediate device is used, you
must also run the spanning tree feature between the IRF fabric and the intermediate device.
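The following sketch enables ARP MAD on a VLAN interface (the VLAN ID and domain ID are illustrative; assign the domain ID before enabling ARP MAD, and see "Configuring MAD" for the complete procedure):
<Master> system-view
[Master] irf domain 1
[Master] interface vlan-interface 3
[Master-Vlan-interface3] mad arp enable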