HP CCI Network Considerations Guide

Table of contents

Introduction
Anatomy of the CCI solution
CCI, network speed, and the connection protocol
CCI network traffic patterns
Network reliability
Network bandwidth
Connection methods
The HP BladeSystem PC Blade Switch
  Internal Ethernet ports
  External Ethernet ports
  Port speed, duplex, and flow control
  Maximizing network cable reduction
  Virtual Local Area Network (VLAN)
  Management IP
  Switch diagnostics
  Switch management
  Embedded Web System interface
Link aggregation groups
Spanning tree
PVST interoperability mode
Disabling the spanning tree PVST interoperability mode
SNMP and remote monitoring
  Configuring the switch to send SNMPv1 traps
  Configuring HP SIM to work with the switch
HP Session Allocation Manager (SAM)
  HP SAM minimum network requirements
  The blade PC allocation process
Blade image management considerations
The blade PC image PXE boot process
CCI solution component dependencies
General notes regarding solution components
CCI network topology reference designs
  General notes regarding Spanning Tree
  Flat network
  Redundant connectivity with IEEE 802.1D or 802.1w
  Flat network with per-VLAN Spanning Tree
  Basic connectivity with per-VLAN Spanning Tree
  Redundant connectivity with per-VLAN Spanning Tree
  Standards-based redundancy with IEEE 802.1s (MSTP)
  VLAN load balancing with IEEE 802.1s (MSTP)
For more information
Appendix A: HP PC Blade Switch startup-config samples
  Default (canned startup-config)
  No STP (canned startup-config)
  STP/RSTP (canned startup-config)
  MSTP (canned startup-config)
  Flat network
  Redundant connectivity with IEEE 802.1D or 802.1w
  Flat network with per-VLAN Spanning Tree
  Basic connectivity with per-VLAN Spanning Tree
  Redundant connectivity with per-VLAN Spanning Tree
  Standards-based redundancy with IEEE 802.1s (MSTP)
  VLAN load balancing with IEEE 802.1s (MSTP)
Appendix B: Running the HP PC Blade Switch and the C-GbE Interconnect Switch in a co-existing network
  Before you deploy
  Switch default configuration
  Notes
  PVST interoperability - theory of operation
  Disabling the spanning tree PVST interoperability mode
Appendix C: Remote desktop considerations
  Console display settings
  Streaming video
  Network printing
  Local printing
This document provides an overview of the HP BladeSystem PC Blade Switch, as well as an explanation of network terminology, configuration concepts, and examples relevant to the HP CCI solution. This document is intended for those seeking technical networking knowledge of how CCI is applied to the enterprise network. To learn more about the HP CCI solution, refer to the CCI documentation or visit the CCI Web site at www.hp.com/go/cci.
The HP PC Blade Switch is an industry-standard managed Layer 2 Ethernet switch that is designed to dramatically reduce the number of Ethernet network cables required to provide redundant network connectivity to HP blade PCs hosted by the HP PC Blade Enclosure. HP has tuned the default configuration of the HP PC Blade Switch beyond that of a typical network edge switch to facilitate integrating the CCI solution into an existing network, typically with minimal effort.
Existing networks that adequately support a traditional distributed desktop PC environment are fully capable of supporting a CCI implementation. However, because every network environment and CCI implementation will differ, important variables must be considered on a case-by-case basis. Even though CCI is a prescribed solution, integrating it into an enterprise network takes skill, careful thought, and planning. This document is designed to assist in that effort. Additionally, HP Services is available to assist with such planning and design efforts.
Introduction
The HP CCI solution includes a rack-mountable 3U (5.25 inch) HP PC Blade Enclosure that hosts up to 20 HP blade PCs. Each enclosure has a switch tray that includes an Integrated Administrator and redundant, hot-pluggable power supplies and cooling fans. Each blade PC is populated with two Broadcom 5705F Fast Ethernet embedded 10/100 Megabits per second (Mbps) network controllers. A fully populated HP PC Blade Enclosure can require up to five Ethernet connections, four of which can be either gigabit copper or fiber, and the fifth Fast Ethernet copper only. A typical 42U rack can house as many as 14 HP PC Blade Enclosures for a maximum of 280 blade PCs. Without the HP PC Blade Switch, the number of Ethernet connections within this space can quickly become difficult to manage.
The HP PC Blade Switch is a managed Layer 2+ Ethernet switch that provides up to a 41-to-1 reduction in Ethernet network cables and network connection redundancy for the HP PC Blade Enclosure. Cable reduction can help reduce the time required to deploy, manage, and service the HP CCI solution.
Anatomy of the CCI solution
As shown in Figure 1, the CCI topology is as follows:
• In the Access Tier, a user accesses an available blade PC from a remote access device.
• In the Compute Tier, for a dynamic environment implementation, the Session Allocation Manager allocates the user direct access to either a new or an existing blade PC session. A new session consists of a user without either an active or a disconnected remote desktop session running on a blade PC within the CCI solution. An existing session consists of a blade PC resource that is either actively hosting the user's session from a different access device, or hosting a session that the user disconnected from (on either the same or a different access device) without first logging off.
• In the Resource Tier, users authenticate with the directory server each time they attempt to log onto a blade PC. After successful authentication, users are logged onto the blade PC with access to the resources in the Resource Tier allowed by their security level.
• The Management Tier is reserved for the tools necessary to manage the CCI solution, such as HP Systems Insight Manager and the HP Rapid Deployment Pack.
Figure 1 CCI topology
CCI, network speed, and the connection protocol
Network speed is really a question of how much longer than instantaneous it takes a signal to travel a given route. Any delay beyond instantaneous is "latency." So, the questions are: how much latency is present in a network supporting CCI, and at what point does that latency impact the user experience?
One of the advantages of CCI over competitive offerings and traditional computer systems is that the remote host does not download files to the local access device. Instead, the remote host transmits only a bitmap representation of the blade PC screen to the access device. Even at a dial-up network speed of 48 Kilobits per second (Kbps), you can open large files on the blade PC almost instantly while viewing them remotely. For example, if a user on a traditional system wants to open a 10-MB email attachment, the user must launch an email application on the remote computer to gain access to email, and then wait a considerable time for the attachment to download over a dial-up connection (at 48 Kbps, a 10-MB file takes roughly half an hour to transfer). This download of the file to the remote computer also introduces a security risk. Alternatively, the CCI solution allows the remote user to access a blade PC located in the corporate data center to run an email application and manipulate the 10-MB file without downloading the file to the remote access device. In this case, the user can read and edit such a file within a reasonably expected amount of time.
An understanding of the transport software used by CCI is imperative before trying to understand how network latency can impact the end-user experience. The transport software used to move signals back and forth between the access device and the blade PC in the data center is the Remote Desktop Connection (RDC). This software uses the Microsoft Remote Desktop Protocol (RDP); any other remote desktop client compatible with Microsoft Terminal Services can also be used.
RDP is a multi-channel protocol that allows you to connect to a computer running Microsoft Terminal Services. RDP software exists for most versions of Windows, as well as other operating systems such as Linux. The first version of RDP (version 4.0), introduced with Terminal Services in Windows NT 4.0 Server, was based on the ITU T.share protocol (T.128). Version 5.0, introduced with Windows 2000 Server, included support for a number of features, including local printer support, and was intended to improve utilization of network bandwidth. Version 5.1 was introduced with Windows XP and includes features such as support for 24-bit color and sound.
Microsoft’s Remote Desktop Protocol software is available as a free download for other Microsoft operating systems at http://www.microsoft.com/downloads/details.aspx?FamilyID=33ad53d8-9abc-4e15-a78f-eb2aabad74b5&DisplayLang=en.
You can find additional information about RDP at:
• Remote Desktop Protocol (Microsoft Developer Network): http://msdn.microsoft.com/library/en-us/termserv/termserv/remote_desktop_protocol.asp
• Understanding the Remote Desktop Protocol (support.microsoft.com): http://support.microsoft.com/default.aspx?scid=kb;EN-US;q186607
• Remote Desktop Connection for Windows Server 2003, the latest version of Microsoft’s free client for Windows 95 and later: http://www.microsoft.com/downloads/details.aspx?displaylang=en&familyid=a8255ffc-4b4a-40e7-a706-cde7e9b57e79
• Remote Desktop Connection Client for Mac, Microsoft’s free client for Mac OS X: http://www.microsoft.com/mac/otherproducts/otherproducts.aspx?pid=remotedesktopclient
• rdesktop, a free open source client for Unix platforms: http://www.rdesktop.org/
You can optimize RDP for a variety of connection speeds. For instance, you can define a lower color depth and screen resolution to reduce network traffic. For more information about optimizing your RDP settings, refer to Microsoft RDP settings for HP’s Consolidated Client Infrastructure Environment.
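The referenced white paper covers the full set of options. As a minimal illustrative sketch (not taken from that paper; the blade PC host name is hypothetical), the following Python snippet writes a Remote Desktop Connection file whose standard .rdp entries reduce bandwidth consumption:

    # Minimal sketch: write an RDC connection file with bandwidth-reducing settings.
    # The host name is hypothetical; the option names are standard .rdp entries.
    low_bandwidth_rdp = "\n".join([
        "full address:s:bladepc01.example.local",  # hypothetical blade PC
        "session bpp:i:16",                 # 16-bit color instead of 24-bit
        "desktopwidth:i:1024",              # reduced screen resolution
        "desktopheight:i:768",
        "compression:i:1",                  # enable RDP compression
        "bitmapcachepersistenable:i:1",     # cache bitmaps across sessions
        "disable wallpaper:i:1",            # suppress desktop background
        "disable full window drag:i:1",     # outline-only window dragging
        "disable menu anims:i:1",           # no menu animations
        "disable themes:i:1",               # classic theme means fewer screen updates
    ])

    with open("cci-low-bandwidth.rdp", "w") as rdp_file:
        rdp_file.write(low_bandwidth_rdp + "\n")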
CCI network traffic patterns
Traditional distributed PCs have historically created peak network traffic at specific times. Traffic traditionally peaks within the first 30 minutes of the workday as users log in and large amounts of data move between the datacenter and end users. During this time, MS Exchange data, system hot fixes, and similar traffic traverse the network. Afterward, network demand tends to drop substantially.
In contrast, because CCI computers (blade PCs) are powered on continuously in the datacenter under control of the datacenter manager, the pattern of network traffic is very different, for the following reasons:
• Traffic that traditionally flowed from the datacenter to distributed PCs now stays internal to the datacenter, because the blade PCs reside there.
• Data center managers deploy image updates, including application updates, operating system service packs, and hot fixes. Managers can schedule these updates at times when network traffic is low.
• Most datacenters have a higher capacity internal network infrastructure compared to the infrastructure between the datacenter and users.
• The typical CCI network traffic pattern from the datacenter to the user is very steady and much lower than the peaks seen with traditional PCs. Instead of sending files to the user, CCI uses RDC to render what the blade is doing on the user's screen.
Based on CCI network observations, HP recommends the following guidelines for the number of active users per network segment while providing a substantial buffer for extraordinary peaks in demand.
• 10,000 simultaneous active users per 1 Gigabit per second network segment
• 1,000 simultaneous active users per 100 Megabits per second network segment
• 100 simultaneous active users per 10 Megabits per second network segment
While these are general guidelines that work for most infrastructures, differing circumstances and local variables can require analysis on a case-by-case basis.
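As a rough back-of-the-envelope check of these guidelines (an illustrative calculation, not an HP sizing tool), each case leaves roughly 100 Kbps of average headroom per active user:

    # Illustrative sketch: average bandwidth share per active user implied by the
    # guidelines above (ignores protocol overhead and traffic burstiness).
    guidelines = {
        "1 Gbps segment":   (1_000_000, 10_000),  # (capacity in Kbps, active users)
        "100 Mbps segment": (100_000, 1_000),
        "10 Mbps segment":  (10_000, 100),
    }

    for segment, (capacity_kbps, users) in guidelines.items():
        print(f"{segment}: ~{capacity_kbps // users} Kbps average per active user")
    # Each segment works out to about 100 Kbps per active user, which provides the
    # buffer for extraordinary peaks that the guidelines describe.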
Network reliability
Businesses require reliable networks to support solutions that successfully address business needs. In CCI, network failure affects user access to virtualized desktops. Therefore, HP recommends that you use redundant design practices to provide multiple network paths to all centralized computing resources. Carefully set expectations for how network outages, from minor to catastrophic, will affect the end-user environment. Consider customer business rules and job functions when establishing these expectations.
Network bandwidth
The term “bandwidth” in network design refers to the data transfer rate supported by a network connection or interface, commonly expressed in bits per second (bps). The term comes from the field of electrical engineering, where bandwidth represents the range between the highest and lowest frequencies of a communication signal (the band). Network bandwidth is only one of the factors that determine the perceived speed of a network.
Network latency
Network latency describes the time (in milliseconds) that it takes a packet to traverse a network. An easy way to measure network latency is with the MS-DOS “PING” command, which can test any network device that has a valid IP address; a successful ping yields a message displaying the response time (a scripted example appears at the end of this section). Because CCI end users access their blade PC sessions across some distance, maintaining low network latency has a direct effect on the user experience. Factors that can increase network latency, and that you should consider when evaluating a CCI network, include:
• Poor quality connections
• Network congestion
• Network saturation
Consider conducting a network traffic study to determine the level of saturation a connection may encounter, and take steps (well documented elsewhere, and beyond the scope of this paper) to alleviate the underlying conditions.
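To put the PING measurement described above into practice, the check can be scripted. The following sketch is illustrative only and assumes a Windows client and a hypothetical blade PC host name; it simply invokes the operating system's ping command and prints its report:

    # Illustrative sketch: measure round-trip latency to a blade PC by calling
    # the OS ping command (Windows syntax shown) and printing its report.
    import subprocess

    def ping(host: str, count: int = 4) -> str:
        result = subprocess.run(
            ["ping", "-n", str(count), host],   # "-n" sets the echo request count on Windows
            capture_output=True, text=True,
        )
        return result.stdout

    print(ping("bladepc01.example.local"))      # hypothetical blade PC name
    # The "Average = ...ms" figure in the output can be compared against the
    # response-time table in the next section.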
Connection methods
CCI end users can connect to their blade PC using a network in a variety of ways: traditional dial-up modems, DSL, cable modems, LANs, etc. General performance with RDC (including CCI and any network architecture that uses MS-RDC) is based on response times, as characterized in the following table:
Response Time (Round Trip Ping) | Typical Network Type | End User Experience
<1 ms - 10 ms | LANs or WANs | No discernible latency; same experience as if the user is using a traditional desktop PC (all other things equal).
10 ms - 20 ms | Broadband modems/bridges | No discernible latency; same experience as if the user is using a traditional desktop PC (all other things equal).
10 ms - 100 ms | Busy LANs or WANs | Some latency, but rarely discernible (might be detectable if rapidly scrolling PowerPoint in full screen mode, for example).
100 ms - 200 ms | Dial-up modems | Discernible latency (for example, when rapidly scrolling PowerPoint or typing quickly, screen updates might not appear for a second or two). Reducing screen resolution and color depth can improve the end user experience.
> 200 ms | Congested networks | Substantial latency (for example, when rapidly scrolling PowerPoint or typing quickly, letters might not appear on the screen for several seconds or more, and scrolling text documents is “jumpy”). Using RDC with latency exceeding 200 ms is generally not a satisfactory experience.
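The bands in this table can also be encoded in a small helper, for example to annotate the output of a latency-monitoring script. The sketch below is illustrative only; the thresholds come directly from the table:

    # Illustrative sketch: map a measured round-trip time (ms) to the expected
    # RDC end-user experience, using the bands from the table above.
    def rdc_experience(rtt_ms: float) -> str:
        if rtt_ms < 10:
            return "No discernible latency (LAN/WAN)"
        if rtt_ms <= 20:
            return "No discernible latency (broadband modem/bridge range)"
        if rtt_ms <= 100:
            return "Some latency, rarely discernible (busy LAN/WAN range)"
        if rtt_ms <= 200:
            return "Discernible latency (dial-up range); consider lower resolution/color depth"
        return "Substantial latency; generally not satisfactory for RDC"

    for sample in (4, 35, 150, 320):
        print(f"{sample} ms -> {rdc_experience(sample)}")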
Based on testing by the HP CCI team, users working off the corporate LAN and accessing CCI from a remote location should expect the following experiences:
• Broadband users: The experience when connecting to a corporate WAN/LAN should be similar to being connected directly to the corporate LAN, with no noticeable latency. However, every network and network connection is different, so other considerations such as congestion or poor connection quality can impact the end user experience. All testing to date indicates that CCI performance across broadband connections is very similar to being on a local area network.
• Analog dial-up users: For users who connect to their corporate network using an analog dial-up modem, round-trip latency of 100 ms to 200 ms is common, and latency higher than 200 ms is not unheard of. In this situation, CCI works well for reading email, working in spreadsheets, creating text documents, and similar tasks. However, latency above 200 ms can become an inhibitor when performing complex tasks, such as creating PowerPoint documents or working with graphically intense applications. In case-by-case scenarios where the remote user only has access to an analog connection, a higher quality modem and/or a custom modem initialization string might bring the latency down to a more acceptable level. Involve someone technically skilled at troubleshooting analog network connections.
• Additional considerations for dial-up modems: Analog modems vary in quality. Higher quality 56 Kbps modems may produce lower latency than lower quality modems. Because analog modems retrain their speed depending on line condition, you may need to fine-tune the modem configuration or seek a more robust method of accessing the network, such as an xDSL broadband connection or a dedicated leased line.
The HP BladeSystem PC Blade Switch
The HP PC Blade Switch resides in the interconnect tray of the HP PC Blade Enclosure and is a non-blocking, managed Layer 2+ Ethernet switch (see Figure 2). The HP PC Blade Switch consolidates 40 Ethernet NICs from blade PCs and provides four external dual-personality (copper or fiber) Gigabit “uplink” Ethernet ports. You can configure the HP PC Blade Switch to combine all Ethernet signals into one physical uplink (see “Maximizing network cable reduction”) or up to four physical uplinks configured as a single logical uplink using IEEE 802.3ad link aggregation (see “Link aggregation groups”).
Figure 2 HP PC Blade Switch tray external panel
Item | Description
1, 4 | RJ-45 Auto MDIX, 10/100/1000T Gigabit Ethernet Uplink Ports e43 and e44 (assigned to VLAN 2 by default)
2, 3 | Small Form-factor Pluggable (SFP) Port for Ethernet Uplink GBIC Connector to Ports e43 and e44 (assigned to VLAN 2 by default)
5 | RJ-45 Auto MDIX, 10/100T Fast Ethernet Uplink Port e42 (assigned to VLAN 1 by default)
6 | DB-9 RS-232 Serial Port for Integrated Administrator console access
7 | Integrated Administrator Reset Button
8, 11 | RJ-45 Auto MDIX, 10/100/1000T Gigabit Ethernet Uplink Ports e45 and e46 (assigned to VLAN 1 by default)
9, 10 | Small Form-factor Pluggable (SFP) Port for Ethernet Uplink GBIC Connector to Ports e45 and e46 (assigned to VLAN 1 by default)
Internal Ethernet ports
The HP PC Blade Switch comes pre-configured with two Virtual LANs, VLAN 1 and VLAN 2. VLAN 1 is assigned to odd-numbered ports e1 through e39, which physically connect blade PC NIC A to the switch by way of the passive centerwall assembly: blade PC NIC A in bay 1 is connected to port e1, NIC A in bay 2 to port e3, and so forth. VLAN 2 is assigned to even-numbered ports e2 through e40, which physically connect blade PC NIC B to the switch by way of the passive centerwall assembly: blade PC NIC B in bay 1 is connected to port e2, NIC B in bay 2 to port e4, and so forth. Port e41 internally connects the Integrated Administrator to VLAN 1 for IA-to-Switch Ethernet communication.
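A minimal sketch of this default bay-to-port mapping (illustrative only, not an HP utility):

    # Illustrative sketch: internal switch ports for a given blade PC bay (1-20),
    # following the default mapping described above.
    def internal_ports(bay: int) -> dict:
        return {
            "NIC A (VLAN 1)": f"e{2 * bay - 1}",  # odd ports e1-e39
            "NIC B (VLAN 2)": f"e{2 * bay}",      # even ports e2-e40
        }

    for bay in (1, 2, 20):
        print(f"Bay {bay}: {internal_ports(bay)}")
    # Bay 1 -> NIC A on e1, NIC B on e2; bay 20 -> NIC A on e39, NIC B on e40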
External Ethernet ports
The HP PC Blade Switch includes four external RJ-45 10/100/1000 Gigabit Ethernet “uplink” ports (e43-e46) and one 10/100 Mbps Fast Ethernet port (e42). Ports e43-e46 are each partnered with a Small Form-factor Pluggable (SFP) GBIC port. When the SFP port is in use, its RJ-45 counterpart is automatically disabled.
By default, e43 and e44 are assigned to VLAN 2, while e42, e45 and e46 are assigned to VLAN 1. These assignments provide segmented network support for blade PC NIC A and NIC B, which should coincide with most CCI solution implementations. However, to meet customer-specific requirements for CCI, the configuration can be changed.
Even though port e42 is assigned to VLAN 1, you can use the port for dedicated connectivity to a management network or for local administration and diagnostic tasks without unplugging a dedicated uplink. Simultaneous management of the HP PC Blade Switch and the Integrated Administrator is possible using this single port (or any other external Ethernet port). Although ideally suited for management, you can use this port for other purposes, such as for additional network connectivity with reduced bandwidth.
NOTE: Pay careful attention if you use port e42 in a redundant scenario using spanning tree. Without proper adjustment, port e42’s default spanning tree cost and priority give it the highest priority of all external uplinks on the HP PC Blade Switch. If connected to the same spanning-tree topology as one or more of the other uplinks, e42 is put into a forwarding state, while other uplink(s) are put into a blocking (alternate) state. The switch will function normally, but with significantly reduced performance. Consider adjusting either the cost or priority attributes to make this port operate as the alternate (blocking) path when used in conjunction with any of the other uplink port(s).
Port speed, duplex, and flow control
The HP PC Blade Switch is configured by default to auto-negotiate the port speed and duplex for all uplink ports (e43-e46). Port speed is pre-configured to 100 Mbps and duplex is set to Full for ports e1-e42. When making network design decisions, it is imperative to consider the settings of the upstream switch, the blade PCs, and data center policy.
HP recommends that you manually configure the following settings for any external uplink at deployment time:
• Port speed: 1 Gbps
• Duplex: Full
• Flow control: Off
If you want to use the auto negotiate setting, HP recommends that you verify each of the parameters once a connection is made to the upstream switch(es).
Maximizing network cable reduction
For maximum cable reduction, consider using a combination of IEEE 802.1Q VLAN tagging and IEEE 802.3ad link aggregation. Using a LAG to carry the network traffic for all necessary VLANs over two physical uplinks per HP PC Blade Switch can reduce the total uplink count to as few as 28 Ethernet connections for a 42U rack fully populated with 14 HP PC Blade Enclosures, for a total of up to 560 network adapters. If further cable reduction is necessary, consider oversubscription techniques. HP recommends a thorough understanding of traffic load and demand before designing the CCI solution around these principles. Because not everyone uses CCI in the same manner and most networks are designed differently, it is beyond the scope of this paper to explain individual traffic patterns for each component of the CCI solution.
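As a quick sanity check of these figures (an illustrative calculation only):

    # Illustrative sketch: cable counts for a 42U rack fully populated with
    # 14 HP PC Blade Enclosures, using the figures quoted in this paper.
    enclosures = 14
    nics_per_enclosure = 40                    # 20 blade PCs x 2 NICs each
    adapters = enclosures * nics_per_enclosure

    # Without the PC Blade Switch: 40 blade NIC cables plus one Integrated
    # Administrator link per enclosure (the 41-to-1 reduction figure above).
    without_switch = enclosures * (nics_per_enclosure + 1)

    # With the switch: one 2-port LAG (two physical uplinks) per enclosure.
    with_two_uplink_lag = enclosures * 2

    print(f"Network adapters in the rack: {adapters}")                        # 560
    print(f"Cables without the PC Blade Switch: {without_switch}")            # 574
    print(f"Uplinks with a 2-port LAG per enclosure: {with_two_uplink_lag}")  # 28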
You can attempt to understand traffic load and demand by building the CCI solution using a greater number of uplinks and then performing a comprehensive bandwidth study of how each component of the solution behaves based upon business and IT rule/policy requirements and practices.
Virtual Local Area Network (VLAN)
The HP PC Blade Switch supports 256 port-based IEEE 802.1Q VLANs with a maximum addressable range of 4094. The switch also supports GARP VLAN Registration Protocol (GVRP) for dynamic VLAN registration. Members of a VLAN may be untagged or tagged according to the IEEE 802.3ac VLAN Ethernet frame extensions for 802.1Q tagging. Therefore, any HP PC Blade Switch VLAN may span other switches within the network infrastructure that support 802.1Q tagging.
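To make tagging concrete, the following sketch builds an 802.1Q-tagged Ethernet frame with the Scapy packet library. This is illustrative only; Scapy is not part of the CCI solution, and the addresses shown are hypothetical. The 4-byte 802.1Q header carries the VLAN ID that switches use to keep traffic segmented as frames cross trunk links:

    # Illustrative sketch: construct an 802.1Q-tagged frame for VLAN 2 (the default
    # NIC B VLAN on the HP PC Blade Switch) using Scapy.
    from scapy.all import Ether, Dot1Q, IP

    frame = (
        Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")  # hypothetical MACs
        / Dot1Q(vlan=2)          # 802.1Q tag: VLAN ID 2, default priority
        / IP(dst="192.0.2.10")   # documentation (example) address
    )

    frame.show()  # prints the layered frame, including the 802.1Q header fields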
Management IP
By default, the HP PC Blade Switch is configured to dynamically obtain an IP address using Dynamic Host Configuration Protocol (DHCP) for each of its two pre-configured VLANs. If preferred, an administrator can statically assign IP addresses through the Command Line Interface (CLI) or the Embedded Web System (EWS). However, if the change is made remotely over the network, the administrator may need to reconnect using the newly assigned IP address. For increased security, an administrator can specify the IP-based management stations that are allowed to access each switch, or leverage additional access security protocols such as SSH, AAA, RADIUS, or TACACS+.
Switch diagnostics
The HP PC Blade Switch option includes a removable Integrated Administrator module (see Figure 3). The Integrated Administrator provides a single management console for the efficient management of the HP PC Blade Enclosure and its accompanying blade PCs. This includes automatic health monitoring of the switch with SNMP trap generation and system LED status.
External LEDs provide enclosure and switch status, as well as link and speed status for each Gigabit uplink connector (see Figure 3). An emergency enclosure shut-down feature is included in case of critical system temperature caused by the switch or another enclosure component.
Figure 3 HP PC Blade Switch tray external panel LEDs
Item | Description
1 | Integrated Administrator Health
2 | Interconnect Switch Health
3 | Reserved for future use
4 | Link Activity Status
5 | Link Speed Status
The HP PC Blade Switch provides the following additional serviceability and diagnostic features:
• Port mirroring, with the ability to mirror the desired types of frames (egress, ingress, or both).
• Power-on self test (POST) at boot for hardware verification.
• Monitoring of port utilization, data packets received/transmitted, error packets, packet size, trunk utilization, SNMP data, etc.
• Details of system information via the user interfaces, such as port parameters and link status, switch asset information, configuration values, log entries, etc.
• Ability to “ping” to test connectivity on the Ethernet network.
• Local system log (syslog) with the ability to view and clear messages, which may be saved (uploaded) as a text file via TFTP.
• Ability to view, clear, and delete MAC addresses from the forwarding database, for identifying problems with MAC address learning and packet forwarding.
• Ability to maintain two separate valid firmware images, one active and one inactive.
For more detailed information on the administration capabilities of the Interconnect switch, see the HP BladeSystem PC Blade Switch and HP BladeSystem PC Enclosure user guides.
Switch management
The HP PC Blade Switch is an industry-standard managed Layer 2+ Ethernet switch, meaning users configure and manage the switch like other industry-standard Layer 2 Ethernet switches. To aid users during initial deployment, the switch includes a default configuration that is fully operational at first boot (see Appendix A). A basic CCI configuration should not require significant changes to the default settings. However, HP highly recommends that you review the default settings and apply any changes before connecting the switch to a production network.
An Embedded Web System (EWS) and a Command Line Interface (CLI) with scripting capability are included in the switch firmware to aid with configuring, managing, and monitoring the switch. The switch also supports Telnet, Simple Network Management Protocol (SNMP), Secure Shell (SSH), and Remote MONitoring (RMON). You can disable, enable, configure, and monitor any combination of the downlink and external uplink ports on a per-port basis. Out-of-band or in-band (or both) access to the switch management interfaces is supported locally and remotely from anywhere on the network. Administration of the switch is possible through any external Ethernet port as well as through the serial port by way of the Integrated Administrator. By default, switch configuration interfaces are assigned to both VLAN 1 and VLAN 2. VLAN 1 is externally accessible through ports e42, e45, or e46. VLAN 2 is externally accessible through ports e43 or e44.
Embedded Web System interface
Users can access the EWS by using Internet Explorer or Netscape Navigator over a TCP/IP network (see Figure 4). The EWS interface consists of three main sections:
• The Active Virtual Graphic provides real-time status of the switch front panel and the means to quickly view statistics of individual ports.
• The Navigation Window contains items or features to select.
• The Administration Window contains options for viewing or altering switch information.