Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
Text Part Number: OL-25712-04
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,
INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH
THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY,
CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS.
CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT
LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS
HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: http://www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership
relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses used in this document are not intended to be actual addresses. Any examples, command display output, and figures included in the document are shown
for illustrative purposes only. Any use of actual IP addresses in illustrative content is unintentional and coincidental.
Related Cisco UCS Documentation
For a complete list of all B-Series documentation, see the Cisco UCS B-Series Servers Documentation Roadmap
available at the following URL: http://www.cisco.com/go/unifiedcomputing/b-series-doc.
For a complete list of all C-Series documentation, see the Cisco UCS C-Series Servers Documentation Roadmap
available at the following URL: http://www.cisco.com/go/unifiedcomputing/c-series-doc.
Other Documentation Resources
An ISO file containing all B and C-Series documents is available at the following URL: http://www.cisco.com/cisco/software/type.html?mdfid=283853163&flowid=25821. From this page, click Unified Computing System (UCS) Documentation Roadmap Bundle.
The ISO file is updated after every major documentation release.
Follow Cisco UCS Docs on Twitter to receive document update notifications.
Documentation Feedback
To provide technical feedback on this document, or to report an error or omission, please send your comments
to ucs-docfeedback@external.cisco.com. We appreciate your feedback.
Obtaining Documentation and Submitting a Service Request
For information on obtaining documentation, submitting a service request, and gathering additional information,
see the monthly What's New in Cisco Product Documentation, which also lists all new and revised Cisco
technical documentation.
Subscribe to the What's New in Cisco Product Documentation as a Really Simple Syndication (RSS) feed
and set content to be delivered directly to your desktop using a reader application. The RSS feeds are a free
service and Cisco currently supports RSS version 2.0.
Follow Cisco UCS Docs on Twitter to receive document update notifications.
• Overview of Cisco Unified Computing System, page 9
• Overview of Cisco UCS Manager, page 43
• Overview of Cisco UCS Manager GUI, page 47
CHAPTER 1
New and Changed Information
This chapter includes the following sections:
• New and Changed Information for this Release, page 3
New and Changed Information for this Release
The following table provides an overview of the significant changes to this guide for this current release. The
table does not provide an exhaustive list of all changes made to the configuration guides or of the new features
in this release. For information about new supported hardware in this release, see the Cisco UCS B-Series Servers Documentation Roadmap available at the following URL: http://www.cisco.com/go/unifiedcomputing/b-series-doc.
Table 1: New Features and Significant Behavioral Changes in Cisco UCS, Release 2.0(3)
• Cipher Suite: Adds support for Cipher Suite in HTTPS configuration.
• Web Session Refresh: Enables you to configure the web session refresh period and timeout for authentication domains.
• BIOS Settings: Adds support for new BIOS settings that can be included in BIOS policies and configured from Cisco UCS Manager.
• Overview of enabling MPIO: High-level information added for how to enable MPIO with iSCSI boot.
Table 2: New Features and Significant Behavioral Changes in Cisco UCS, Release 2.0(2)
• IQN Pools: Adds support for IQN pools in Cisco UCS domains configured for iSCSI boot. (Documented in iSCSI Boot, on page 443.)
• Adapter Port Channels: Enables you to group all the physical links from a Cisco UCS Virtual Interface Card (VIC) to an I/O Module into one logical link. (Requires supported hardware.) (Documented in Configuring Ports and Port Channels, on page 77.)
• Unified Port Support for 6296 Fabric Interconnect: Enables you to use the Configure Unified Ports wizard to configure ports on a 6296 fabric interconnect. (Documented in Unified Ports on the 6200 Series Fabric Interconnect, on page 78.)
• Renumbering for Rack-Mount Servers: Enables you to renumber an integrated rack-mount server. (Documented in Managing Rack-Mount Servers, on page 599.)
• Changes to Behavior for Power State Synchronization: Adds information and a caution about power state synchronization, including use of the physical power button or the reset feature on a blade server or an integrated rack-mount server. (Documented in Managing Blade Servers, on page 585, and Managing Rack-Mount Servers, on page 599.)
• BIOS Settings: Adds support for new BIOS settings that can be included in BIOS policies and configured from Cisco UCS Manager. (Documented in Configuring Server-Related Policies, on page 381.)
Table 3: New Features in Cisco UCS, Release 2.0(1)
• Disk Drive Monitoring Support: Support for disk drive monitoring on certain blade servers and a specific LSI storage controller firmware level. (Documented in Monitoring Hardware, on page 647.)
• Fabric Port Channels: Enables you to group several of the physical links from an IOM to a fabric interconnect into one logical link for redundancy and bandwidth sharing. (Requires supported hardware.) (Documented in Configuring Ports and Port Channels, on page 77.)
• Firmware Bundle Option: Enables you to select a bundle instead of a version when updating firmware.
• iSCSI Boot: iSCSI boot enables a server to boot its operating system from an iSCSI target machine located remotely over a network. (Documented in iSCSI Boot, on page 443.)
• Licenses: Updated information for new UCS hardware. (Documented in Licenses, on page 247.)
• Pre-Login Banner: Displays user-defined banner text prior to login when a user logs into Cisco UCS Manager using the GUI or CLI. (Documented in Pre-Login Banner, on page 56.)
• Unified Ports: Unified ports are ports on the 6200 series fabric interconnect that can be configured to carry either Ethernet or Fibre Channel traffic. (Documented in Unified Ports on the 6200 Series Fabric Interconnect, on page 78.)
• Upstream Disjoint Layer-2 Networks: Enables you to configure Cisco UCS to communicate with upstream disjoint layer-2 networks. (Documented in Configuring Upstream Disjoint Layer-2 Networks, on page 321.)
• Virtual Interface (VIF) Namespace: The number of vNICs and vHBAs configurable for a service profile is determined by adapter capability and the amount of virtual interface (VIF) namespace available on the adapter.
• Cisco UCS Virtual Interface Card (VIC) Drivers: Cisco UCS Virtual Interface Card (VIC) drivers facilitate communication between supported operating systems and Cisco UCS Virtual Interface Cards (VICs). (This feature is now documented in the Cisco UCS Manager Interface Card Drivers for ESX Installation Guide, the Cisco UCS Manager Interface Card Drivers for Linux Installation Guide, and the Cisco UCS Manager Interface Card Drivers for Windows Installation Guide, available at http://www.cisco.com/en/US/products/ps10281/prod_installation_guides_list.html.)
• VM-FEX for VMware: Cisco Virtual Machine Fabric Extender (VM-FEX) for VMware provides management integration and network communication between Cisco UCS Manager and VMware vCenter. In previous releases, this functionality was known as VN-Link in Hardware. (This feature is now documented in the Cisco UCS Manager VM-FEX for VMware GUI Configuration Guide and the Cisco UCS Manager VM-FEX for VMware CLI Configuration Guide.)
• VM-FEX for KVM: Cisco Virtual Machine Fabric Extender (VM-FEX) for KVM provides external switching for virtual machines running on a KVM Linux-based hypervisor in a Cisco UCS domain. (This feature is documented in the Cisco UCS Manager VM-FEX for KVM GUI Configuration Guide and the Cisco UCS Manager VM-FEX for KVM CLI Configuration Guide.)
CHAPTER 2
Overview of Cisco Unified Computing System
Cisco Unified Computing System (Cisco UCS) fuses access layer networking and servers. This high-performance, next-generation server system provides a data center with a high degree of workload agility and scalability.
The hardware and software components support Cisco's unified fabric, which runs multiple types of data center traffic over a single converged network adapter.
Architectural Simplification
The simplified architecture of Cisco UCS reduces the number of required devices and centralizes switching
resources. By eliminating switching inside a chassis, network access-layer fragmentation is significantly
reduced.
Cisco UCS implements Cisco unified fabric within racks and groups of racks, supporting Ethernet and Fibre
Channel protocols over 10 Gigabit Cisco Data Center Ethernet and Fibre Channel over Ethernet (FCoE) links.
This radical simplification reduces the number of switches, cables, adapters, and management points by up
to two-thirds. All devices in a Cisco UCS domain remain under a single management domain, which remains
highly available through the use of redundant components.
High Availability
The management and data plane of Cisco UCS is designed for high availability and redundant access layer
fabric interconnects. In addition, Cisco UCS supports existing high availability and disaster recovery solutions
for the data center, such as data replication and application-level clustering technologies.
Scalability
A single Cisco UCS domain supports multiple chassis and their servers, all of which are administered through
one Cisco UCS Manager. For more detailed information about the scalability, speak to your Cisco representative.
Flexibility
A Cisco UCS domain allows you to quickly align computing resources in the data center with rapidly changing
business requirements. This built-in flexibility is determined by whether you choose to fully implement the
stateless computing feature.
Pools of servers and other system resources can be applied as necessary to respond to workload fluctuations,
support new applications, scale existing software and business services, and accommodate both scheduled
and unscheduled downtime. Server identity can be abstracted into a mobile service profile that can be moved
from server to server with minimal downtime and no need for additional network configuration.
With this level of flexibility, you can quickly and easily scale server capacity without having to change the
server identity or reconfigure the server, LAN, or SAN. During a maintenance window, you can quickly do
the following:
• Deploy new servers to meet unexpected workload demand and rebalance resources and traffic.
• Shut down an application, such as a database management system, on one server and then boot it up again on another server with increased I/O capacity and memory resources.

Optimized for Server Virtualization
Cisco UCS has been optimized to implement VM-FEX technology. This technology provides improved support for server virtualization, including better policy-based configuration and security, conformance with a company's operational model, and accommodation for VMware's VMotion.

Unified Fabric
With unified fabric, multiple types of data center traffic can run over a single Data Center Ethernet (DCE)
network. Instead of having a series of different host bus adapters (HBAs) and network interface cards (NICs)
present in a server, unified fabric uses a single converged network adapter. This type of adapter can carry
LAN and SAN traffic on the same cable.
Cisco UCS uses Fibre Channel over Ethernet (FCoE) to carry Fibre Channel and Ethernet traffic on the same
physical Ethernet connection between the fabric interconnect and the server. This connection terminates at a
converged network adapter on the server, and the unified fabric terminates on the uplink ports of the fabric
interconnect. On the core network, the LAN and SAN traffic remains separated. Cisco UCS does not require
that you implement unified fabric across the data center.
The converged network adapter presents an Ethernet interface and Fibre Channel interface to the operating
system. At the server, the operating system is not aware of the FCoE encapsulation because it sees a standard
Fibre Channel HBA.
At the fabric interconnect, the server-facing Ethernet port receives the Ethernet and Fibre Channel traffic. The
fabric interconnect (using Ethertype to differentiate the frames) separates the two traffic types. Ethernet frames
and Fibre Channel frames are switched to their respective uplink interfaces.
Fibre Channel over Ethernet
Cisco UCS leverages Fibre Channel over Ethernet (FCoE) standard protocol to deliver Fibre Channel. The
upper Fibre Channel layers are unchanged, so the Fibre Channel operational model is maintained. FCoE
network management and configuration is similar to a native Fibre Channel network.
FCoE encapsulates Fibre Channel traffic over a physical Ethernet link. FCoE is encapsulated over Ethernet
with the use of a dedicated Ethertype, 0x8906, so that FCoE traffic and standard Ethernet traffic can be carried
on the same link. FCoE has been standardized by the ANSI T11 Standards Committee.
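As an illustration of this Ethertype-based separation, the following Python sketch (not Cisco code; the function and frame bytes are hypothetical) classifies a raw Ethernet frame as FCoE or standard Ethernet by reading the Ethertype field and comparing it against 0x8906:

import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype dedicated to FCoE, as described above

def classify_frame(frame: bytes) -> str:
    """Return 'FCoE' or 'Ethernet' based on the Ethertype at bytes 12-13.

    Minimal sketch for an untagged Ethernet II frame; real fabric interconnect
    hardware also handles 802.1Q tags, padding, and FIP control frames.
    """
    if len(frame) < 14:
        raise ValueError("frame too short to contain an Ethernet header")
    (ethertype,) = struct.unpack("!H", frame[12:14])
    return "FCoE" if ethertype == FCOE_ETHERTYPE else "Ethernet"

# Example: destination MAC + source MAC + Ethertype 0x8906 + dummy payload
fcoe_frame = bytes(6) + bytes(6) + struct.pack("!H", 0x8906) + b"\x00" * 46
print(classify_frame(fcoe_frame))  # -> FCoE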
Fibre Channel traffic requires a lossless transport layer. Instead of the buffer-to-buffer credit system used by
native Fibre Channel, FCoE depends upon the Ethernet link to implement lossless service.
Ethernet links on the fabric interconnect provide two mechanisms to ensure lossless transport for FCoE traffic:
• Link-level flow control
• Priority flow control
Link-Level Flow Control
IEEE 802.3x link-level flow control allows a congested receiver to signal the endpoint to pause data transmission
for a short time. This link-level flow control pauses all traffic on the link.
The transmit and receive directions are separately configurable. By default, link-level flow control is disabled
for both directions.
On each Ethernet interface, the fabric interconnect can enable either priority flow control or link-level flow
control (but not both).
Priority Flow Control
The priority flow control (PFC) feature applies pause functionality to specific classes of traffic on the Ethernet
link. For example, PFC can provide lossless service for the FCoE traffic, and best-effort service for the standard
Ethernet traffic. PFC can provide different levels of service to specific classes of Ethernet traffic (using IEEE
802.1p traffic classes).
PFC decides whether to apply pause based on the IEEE 802.1p CoS value. When the fabric interconnect
enables PFC, it configures the connected adapter to apply the pause functionality to packets with specific CoS
values.
By default, the fabric interconnect negotiates to enable the PFC capability. If the negotiation succeeds, PFC
is enabled and link-level flow control remains disabled (regardless of its configuration settings). If the PFC
negotiation fails, you can either force PFC to be enabled on the interface or you can enable IEEE 802.3x
link-level flow control.
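The interaction between PFC negotiation and link-level flow control can be summarized in a small decision sketch. The following Python function is illustrative only; the parameter names are assumptions and do not correspond to Cisco UCS Manager settings:

def effective_flow_control(pfc_negotiated: bool,
                           pfc_forced: bool,
                           link_level_configured: bool) -> str:
    """Return which lossless mechanism ends up active on an interface.

    Mirrors the behavior described above: PFC and link-level flow control are
    mutually exclusive, and a successful PFC negotiation (or forcing PFC)
    overrides any link-level flow control configuration.
    """
    if pfc_negotiated or pfc_forced:
        return "priority flow control"
    if link_level_configured:
        return "IEEE 802.3x link-level flow control"
    return "none"

print(effective_flow_control(pfc_negotiated=True, pfc_forced=False,
                             link_level_configured=True))
# -> priority flow control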
Service Profiles
Service profiles are the central concept of Cisco UCS. Each service profile serves a specific purpose: ensuring
that the associated server hardware has the configuration required to support the applications it will host.
The service profile maintains configuration information about the server hardware, interfaces, fabric
connectivity, and server and network identity. This information is stored in a format that you can manage
through Cisco UCS Manager. All service profiles are centrally managed and stored in a database on the fabric
interconnect.
Every server must be associated with a service profile.
Important
At any given time, each server can be associated with only one service profile. Similarly, each service
profile can be associated with only one server at a time.
After you associate a service profile with a server, the server is ready to have an operating system and
applications installed, and you can use the service profile to review the configuration of the server. If the
server associated with a service profile fails, the service profile does not automatically fail over to another
server.
When a service profile is disassociated from a server, the identity and connectivity information for the server
is reset to factory defaults.
Network Connectivity through Service Profiles
Each service profile specifies the LAN and SAN network connections for the server through the Cisco UCS
infrastructure and out to the external network. You do not need to manually configure the network connections
for Cisco UCS servers and other components. All network configuration is performed through the service
profile.
When you associate a service profile with a server, the Cisco UCS internal fabric is configured with the
information in the service profile. If the profile was previously associated with a different server, the network
infrastructure reconfigures to support identical network connectivity to the new server.
Configuration through Service Profiles
A service profile can take advantage of resource pools and policies to handle server and connectivity
configuration.
Hardware Components Configured by Service Profiles
When a service profile is associated with a server, the following components are configured according to the
data in the profile:
You do not need to configure these hardware components directly.
Server Identity Management through Service Profiles
You can use the network and device identities burned into the server hardware at manufacture or you can use
identities that you specify in the associated service profile either directly or through identity pools, such as
MAC, WWN, and UUID.
The following are examples of configuration information that you can include in a service profile:
• Profile name and description
• Unique server identity (UUID)
• LAN connectivity attributes, such as the MAC address
• SAN connectivity attributes, such as the WWN
Operational Aspects configured by Service Profiles
You can configure some of the operational functions for a server in a service profile, such as the following:
• Firmware packages and versions
• Operating system boot order and configuration
• IPMI and KVM access
vNIC Configuration by Service Profiles
A vNIC is a virtualized network interface that is configured on a physical network adapter and appears to be
a physical NIC to the operating system of the server. The type of adapter in the system determines how many
vNICs you can create. For example, a converged network adapter has two NICs, which means you can create
a maximum of two vNICs for each adapter.
A vNIC communicates over Ethernet and handles LAN traffic. At a minimum, each vNIC must be configured
with a name and with fabric and network connectivity.
vHBA Configuration by Service Profiles
A vHBA is a virtualized host bus adapter that is configured on a physical network adapter and appears to be
a physical HBA to the operating system of the server. The type of adapter in the system determines how many
vHBAs you can create. For example, a converged network adapter has two HBAs, which means you can
create a maximum of two vHBAs for each of those adapters. In contrast, a network interface card does not
have any HBAs, which means you cannot create any vHBAs for those adapters.
A vHBA communicates over FCoE and handles SAN traffic. At a minimum, each vHBA must be configured
with a name and fabric connectivity.
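For illustration, the minimum vNIC and vHBA attributes described above can be modeled as simple data structures. The following Python sketch is hypothetical (the class and field names are not Cisco UCS Manager objects) and uses the two-vNIC, two-vHBA limit of the converged network adapter example:

from dataclasses import dataclass, field
from typing import List

@dataclass
class VNic:
    name: str       # required
    fabric: str     # fabric connectivity, for example "A" or "B" (illustrative)
    network: str    # network (VLAN) connectivity

@dataclass
class VHba:
    name: str       # required
    fabric: str     # fabric connectivity

@dataclass
class AdapterInterfaces:
    """Virtual interfaces hosted by one converged network adapter."""
    vnics: List[VNic] = field(default_factory=list)
    vhbas: List[VHba] = field(default_factory=list)
    max_vnics: int = 2   # converged network adapter example from the text
    max_vhbas: int = 2

    def add_vnic(self, vnic: VNic) -> None:
        if len(self.vnics) >= self.max_vnics:
            raise ValueError("this adapter cannot host more vNICs")
        self.vnics.append(vnic)

    def add_vhba(self, vhba: VHba) -> None:
        if len(self.vhbas) >= self.max_vhbas:
            raise ValueError("this adapter cannot host more vHBAs")
        self.vhbas.append(vhba)

adapter = AdapterInterfaces()
adapter.add_vnic(VNic(name="eth0", fabric="A", network="vlan-100"))
adapter.add_vhba(VHba(name="fc0", fabric="A"))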
Service Profiles that Override Server Identity
This type of service profile provides the maximum amount of flexibility and control. This profile allows you
to override the identity values that are on the server at the time of association and use the resource pools and
policies set up in Cisco UCS Manager to automate some administration tasks.
You can disassociate this service profile from one server and then associate it with another server. This
re-association can be done either manually or through an automated server pool policy. The burned-in settings,
such as UUID and MAC address, on the new server are overwritten with the configuration in the service
profile. As a result, the change in server is transparent to your network. You do not need to reconfigure any
component or application on your network to begin using the new server.
This profile allows you to take advantage of and manage system resources through resource pools and policies,
such as the following:
• Virtualized identity information, including pools of MAC addresses, WWN addresses, and UUIDs
• Ethernet and Fibre Channel adapter profile policies
• Firmware package policies
• Operating system boot order policies
Unless the service profile contains power management policies, a server pool qualification policy, or another
policy that requires a specific hardware configuration, the profile can be used for any type of server in the
Cisco UCS domain.
You can associate these service profiles with either a rack-mount server or a blade server. The ability to
migrate the service profile depends upon whether you choose to restrict migration of the service profile.
Note
If you choose not to restrict migration, Cisco UCS Manager does not perform any compatibility checks
on the new server before migrating the existing service profile. If the hardware of both servers are not
similar, the association might fail.
Service Profiles that Inherit Server Identity
This hardware-based service profile is the simplest to use and create. This profile uses the default values in
the server and mimics the management of a rack-mounted server. It is tied to a specific server and cannot be
moved or migrated to another server.
You do not need to create pools or configuration policies to use this service profile.
This service profile inherits and applies the identity and configuration information that is present at the time
of association, such as the following:
• MAC addresses for the two NICs
• For a converged network adapter or a virtual interface card, the WWN addresses for the two HBAs
• BIOS versions
• Server UUID
Important
The server identity and configuration information inherited through this service profile may not be the
values burned into the server hardware at manufacture if those values were changed before this profile is
associated with the server.
With a service profile template, you can quickly create several service profiles with the same basic parameters,
such as the number of vNICs and vHBAs, and with identity information drawn from the same pools.
Tip
If you need only one service profile with similar values to an existing service profile, you can clone a service profile in the Cisco UCS Manager GUI.
For example, if you need several service profiles with similar values to configure servers to host database
software, you can create a service profile template, either manually or from an existing service profile. You
then use the template to create the service profiles.
Cisco UCS supports the following types of service profile templates:
Initial template
Service profiles created from an initial template inherit all the properties of the template. However,
after you create the profile, it is no longer connected to the template. If you need to make changes to
one or more profiles created from this template, you must change each profile individually.
Updating template
Service profiles created from an updating template inherit all the properties of the template and remain
connected to the template. Any changes to the template automatically update the service profiles created
from the template.
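The difference between initial and updating templates can be illustrated with a short sketch. The Python class below is a toy model rather than Cisco UCS Manager behavior; it only shows that a property change on an updating template propagates to the profiles created from it, while profiles created from an initial template keep their original values:

class ServiceProfileTemplate:
    """Toy model of initial vs. updating service profile templates."""

    def __init__(self, name: str, properties: dict, updating: bool):
        self.name = name
        self.properties = dict(properties)
        self.updating = updating
        self._bound_profiles = []          # profiles that remain connected

    def create_profile(self, profile_name: str) -> dict:
        profile = {"name": profile_name, **self.properties}
        if self.updating:
            self._bound_profiles.append(profile)   # stays connected
        return profile                             # initial: no link is kept

    def change_property(self, key: str, value) -> None:
        self.properties[key] = value
        # Only an updating template pushes the change to existing profiles.
        for profile in self._bound_profiles:
            profile[key] = value

template = ServiceProfileTemplate("db-hosts", {"vnic_count": 2}, updating=True)
profile = template.create_profile("db-01")
template.change_property("vnic_count", 4)
print(profile["vnic_count"])   # -> 4; would remain 2 with an initial template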
Policies
Policies determine how Cisco UCS components will act in specific circumstances. You can create multiple
instances of most policies. For example, you might want different boot policies, so that some servers can PXE
boot, some can SAN boot, and others can boot from local storage.
Policies allow separation of functions within the system. A subject matter expert can define policies that are
used in a service profile, which is created by someone without that subject matter expertise. For example, a
LAN administrator can create adapter policies and quality of service policies for the system. These policies
can then be used in a service profile that is created by someone who has limited or no subject matter expertise
with LAN administration.
You can create and use two types of policies in Cisco UCS Manager:
• Configuration policies that configure the servers and other components
• Operational policies that control certain management, monitoring, and access control functions
Boot Policy
For example, you can choose to have associated servers boot from a local device, such as a local disk or
CD-ROM (VMedia), or you can select a SAN boot or a LAN (PXE) boot.
You must include this policy in a service profile, and that service profile must be associated with a server for
it to take effect. If you do not include a boot policy in a service profile, the server uses the default settings in
the BIOS to determine the boot order.
Important
Changes to a boot policy may be propagated to all servers created with an updating service profile template
that includes that boot policy. Reassociation of the service profile with the server, which rewrites the boot order information in the BIOS, is automatically triggered.
Chassis Discovery Policy
The chassis discovery policy determines how the system reacts when you add a new chassis. Cisco UCS
Manager uses the settings in the chassis discovery policy to determine the minimum threshold for the number
of links between the chassis and the fabric interconnect and whether to group links from the IOM to the fabric
interconnect in a fabric port channel.
Chassis Links
If you have a Cisco UCS domain in which some chassis are wired with 1 link, some with 2 links, some with 4 links, and some with 8 links, we recommend that you configure the chassis discovery policy for the minimum number of links in the domain so that Cisco UCS Manager can discover all chassis.
Tip
For Cisco UCS implementations that mix IOMs with different numbers of links, we recommend using the platform max value. Using platform max ensures that Cisco UCS Manager uses the maximum number of IOM uplinks available.

After the initial discovery, you must reacknowledge the chassis that are wired for a greater number of links so that Cisco UCS Manager can configure the chassis to use all available links.
Cisco UCS Manager cannot discover any chassis that is wired for fewer links than are configured in the chassis
discovery policy. For example, if the chassis discovery policy is configured for 4 links, Cisco UCS Manager
cannot discover any chassis that is wired for 1 link or 2 links. Reacknowledgement of the chassis does not
resolve this issue.
The following describes how each chassis discovery policy handles a chassis wired with 4 links and a chassis wired with 8 links in a multi-chassis Cisco UCS domain.

For a chassis wired with 4 links:
• 1-Link Chassis Discovery Policy: The chassis is discovered by Cisco UCS Manager and added to the Cisco UCS domain as a chassis wired with 1 link. After initial discovery, reacknowledge the chassis and Cisco UCS Manager recognizes and uses the additional links.
• 2-Link Chassis Discovery Policy: The chassis is discovered by Cisco UCS Manager and added to the Cisco UCS domain as a chassis wired with 2 links. After initial discovery, reacknowledge the chassis and Cisco UCS Manager recognizes and uses the additional links.
• 4-Link Chassis Discovery Policy: The chassis is discovered by Cisco UCS Manager and added to the Cisco UCS domain as a chassis wired with 4 links.
• 8-Link Chassis Discovery Policy: The chassis cannot be discovered by Cisco UCS Manager and is not added to the Cisco UCS domain.
• Platform-Max Discovery Policy: If the IOM has 4 links, the chassis is discovered by Cisco UCS Manager and added to the Cisco UCS domain as a chassis wired with 4 links. If the IOM has 8 links, the chassis is not fully discovered by Cisco UCS Manager.

For a chassis wired with 8 links:
• 1-Link Chassis Discovery Policy: The chassis is discovered by Cisco UCS Manager and added to the Cisco UCS domain as a chassis wired with 1 link. After initial discovery, reacknowledge the chassis and Cisco UCS Manager recognizes and uses the additional links.
• 2-Link Chassis Discovery Policy: The chassis is discovered by Cisco UCS Manager and added to the Cisco UCS domain as a chassis wired with 2 links. After initial discovery, reacknowledge the chassis and Cisco UCS Manager recognizes and uses the additional links.
• 4-Link Chassis Discovery Policy: The chassis is discovered by Cisco UCS Manager and added to the Cisco UCS domain as a chassis wired with 4 links. After initial discovery, reacknowledge the chassis and Cisco UCS Manager recognizes and uses the additional links.
• 8-Link Chassis Discovery Policy: The chassis is discovered by Cisco UCS Manager and added to the Cisco UCS domain as a chassis wired with 8 links.
• Platform-Max Discovery Policy: The chassis is discovered by Cisco UCS Manager and added to the Cisco UCS domain as a chassis wired with 8 links.
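The minimum-link rule described in this section can be sketched as follows. This Python function is illustrative only; it encodes just the general rule (a chassis wired for fewer links than the policy cannot be discovered, and extra links require a reacknowledgement) and does not model the platform-max policy:

def chassis_discovery(policy_links: int, wired_links: int) -> str:
    """Summarize the discovery outcome for an N-link chassis discovery policy."""
    if wired_links < policy_links:
        return "not discovered; reacknowledgement does not resolve this"
    if wired_links == policy_links:
        return f"discovered as a chassis wired with {wired_links} link(s)"
    return (f"discovered as a chassis wired with {policy_links} link(s); "
            "reacknowledge the chassis to use the additional links")

print(chassis_discovery(policy_links=4, wired_links=2))
# -> not discovered; reacknowledgement does not resolve this
print(chassis_discovery(policy_links=1, wired_links=8))
# -> discovered as a chassis wired with 1 link(s); reacknowledge the chassis ...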
Link Grouping
For hardware configurations that support fabric port channels, link grouping determines whether all of the
links from the IOM to the fabric interconnect are grouped into a fabric port channel during chassis discovery.
If the link grouping preference is set to port channel, all of the links from the IOM to the fabric interconnect
are grouped in a fabric port channel. If set to no group, links from the IOM to the fabric interconnect are not
grouped in a fabric port channel.
Once a fabric port channel is created, links can be added or removed by changing the link group preference
and reacknowledging the chassis, or by enabling or disabling the chassis from the port channel.
Note
The link grouping preference only takes effect if both sides of the links between an IOM or FEX and the
fabric interconnect support fabric port channels. If one side of the links does not support fabric port
channels, this preference is ignored and the links are not grouped in a port channel.
Dynamic vNIC Connection Policy
The dynamic vNIC connection policy determines how the connectivity between VMs and dynamic vNICs is
configured. This policy is required for Cisco UCS domains that include servers with VIC adapters on which
you have installed VMs and configured dynamic vNICs.
Each dynamic vNIC connection policy includes an Ethernet adapter policy and designates the number of
vNICs that can be configured for any server associated with a service profile that includes the policy.
Note
If you migrate a server that is configured with dynamic vNICs, the dynamic interface used by the vNICs
fails and Cisco UCS Manager notifies you of that failure.
When the server comes back up, Cisco UCS Manager assigns new dynamic vNICs to the server. If you
are monitoring traffic on the dynamic vNIC, you must reconfigure the monitoring source.
Ethernet and Fibre Channel Adapter Policies
These policies govern the host-side behavior of the adapter, including how the adapter handles traffic. For
example, you can use these policies to change default settings for the following:
• Queues
• Interrupt handling
• Performance enhancement
• RSS hash
• Failover in a cluster configuration with two fabric interconnects
For Fibre Channel adapter policies, the values displayed by Cisco UCS Manager may not match those
displayed by applications such as QLogic SANsurfer. For example, the following values may result in an
apparent mismatch between SANsurfer and Cisco UCS Manager:
• Max LUNs Per Target—SANsurfer has a maximum of 256 LUNs and does not display more than
that number. Cisco UCS Manager supports a higher maximum number of LUNs.
• Link Down Timeout—In SANsurfer, you configure the timeout threshold for link down in seconds.
In Cisco UCS Manager, you configure this value in milliseconds. Therefore, a value of 5500 ms in
Cisco UCS Manager displays as 5s in SANsurfer.
• Max Data Field Size—SANsurfer has allowed values of 512, 1024, and 2048. Cisco UCS Manager
allows you to set values of any size. Therefore, a value of 900 in Cisco UCS Manager displays as
512 in SANsurfer.
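These display differences can be illustrated with a short sketch. The Python function below is hypothetical; the rounding and clamping rules are assumptions inferred from the examples in this section rather than documented SANsurfer behavior:

def sansurfer_view(link_down_timeout_ms: int,
                   max_data_field_size: int,
                   max_luns_per_target: int) -> dict:
    """Approximate how values set in Cisco UCS Manager might appear in SANsurfer."""
    allowed_sizes = (512, 1024, 2048)   # values SANsurfer can display
    shown_size = max((s for s in allowed_sizes if s <= max_data_field_size),
                     default=allowed_sizes[0])
    return {
        "link_down_timeout_s": link_down_timeout_ms // 1000,   # 5500 ms -> 5 s
        "max_data_field_size": shown_size,                     # 900 -> 512
        "max_luns_per_target": min(max_luns_per_target, 256),  # display cap
    }

print(sansurfer_view(5500, 900, 1024))
# -> {'link_down_timeout_s': 5, 'max_data_field_size': 512, 'max_luns_per_target': 256}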
Operating System Specific Adapter Policies
By default, Cisco UCS provides a set of Ethernet adapter policies and Fibre Channel adapter policies. These
policies include the recommended settings for each supported server operating system. Operating systems are
sensitive to the settings in these policies. Storage vendors typically require non-default adapter settings. You
can find the details of these required settings on the support list provided by those vendors.
We recommend that you use the values in these policies for the applicable operating system. Do not modify
any of the values in the default policies unless directed to do so by Cisco Technical Support.
However, if you are creating an Ethernet adapter policy for a Windows OS (instead of using the default
Windows adapter policy), you must use the following formulas to calculate values that work with Windows:
Important
Completion Queues = Transmit Queues + Receive Queues
Interrupt Count = (Completion Queues + 2) rounded up to nearest power of 2
For example, if Transmit Queues = 1 and Receive Queues = 8 then:
Completion Queues = 1 + 8 = 9
Interrupt Count = (9 + 2) rounded up to the nearest power of 2 = 16
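The same calculation can be expressed as a short Python sketch that simply applies the two formulas above (the function name is illustrative):

def windows_adapter_policy_values(transmit_queues: int, receive_queues: int) -> dict:
    """Apply the Windows Ethernet adapter policy formulas quoted above."""
    completion_queues = transmit_queues + receive_queues
    interrupt_count = 1
    while interrupt_count < completion_queues + 2:   # round up to a power of 2
        interrupt_count *= 2
    return {"completion_queues": completion_queues,
            "interrupt_count": interrupt_count}

print(windows_adapter_policy_values(transmit_queues=1, receive_queues=8))
# -> {'completion_queues': 9, 'interrupt_count': 16}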
Global Cap Policy
The global cap policy is a global policy that specifies whether policy-driven chassis group power capping or
manual blade-level power capping will be applied to all servers in a chassis.
We recommend that you use the default power capping method: policy-driven chassis group power capping.
Any change to the manual blade-level power cap configuration will result in the loss of any groups or
configuration options set for policy-driven chassis group power capping.
Host Firmware Package
This policy enables you to specify a set of firmware versions that make up the host firmware package (also
known as the host firmware pack). The host firmware includes the following firmware for server and adapter
endpoints:
• Adapter
• BIOS
• Board Controller
• FC Adapters
• HBA Option ROM
• Storage Controller
Tip
You can include more than one type of firmware in the same host firmware package. For example, a host firmware package can include both BIOS firmware and storage controller firmware or adapter firmware for two different models of adapters. However, you can only have one firmware version with the same type, vendor, and model number. The system recognizes which firmware version is required for an endpoint and ignores all other firmware versions.
The firmware package is pushed to all servers associated with service profiles that include this policy.
This policy ensures that the host firmware is identical on all servers associated with service profiles which
use the same policy. Therefore, if you move the service profile from one server to another, the firmware
versions are maintained. Also, if you change the firmware version for an endpoint in the firmware package,
new versions are applied to all the affected service profiles immediately, which could cause server reboots.
You must include this policy in a service profile, and that service profile must be associated with a server for
it to take effect.
Prerequisites
This policy is not dependent upon any other policies. However, you must ensure that the appropriate firmware
has been downloaded to the fabric interconnect. If the firmware image is not available when Cisco UCS
Manager is associating a server with a service profile, Cisco UCS Manager ignores the firmware upgrade and
completes the association.
IPMI Access Profile
This policy allows you to determine whether IPMI commands can be sent directly to the server, using the IP
address. For example, you can send commands to retrieve sensor data from the CIMC. This policy defines
the IPMI access, including a username and password that can be authenticated locally on the server, and
whether the access is read-only or read-write.
You must include this policy in a service profile and that service profile must be associated with a server for
it to take effect.
Local Disk Configuration Policy
This policy configures any optional SAS local drives that have been installed on a server through the onboard
RAID controller of the local drive. This policy enables you to set a local disk mode for all servers that are
associated with a service profile that includes the local disk configuration policy.
The local disk modes include the following:
• No Local Storage—For a diskless server or a SAN only configuration. If you select this option, you
cannot associate any service profile which uses this policy with a server that has a local disk.
• RAID 0 Striped—Data is striped across all disks in the array, providing fast throughput. There is no
data redundancy, and all data is lost if any disk fails.
• RAID 1 Mirrored—Data is written to two disks, providing complete data redundancy if one disk fails.
The maximum array size is equal to the available space on the smaller of the two drives.
• Any Configuration—For a server configuration that carries forward the local disk configuration without
any changes.
• No RAID—For a server configuration that removes the RAID and leaves the disk MBR and payload
unaltered.
• RAID 5 Striped Parity—Data is striped across all disks in the array. Part of the capacity of each disk
stores parity information that can be used to reconstruct data if a disk fails. RAID 5 provides good data
throughput for applications with high read request rates.
• RAID 6 Striped Dual Parity—Data is striped across all disks in the array and two parity disks are used
to provide protection against the failure of up to two physical disks. In each row of data blocks, two sets
of parity data are stored.
• RAID 10 Mirrored and Striped—RAID 10 uses mirrored pairs of disks to provide complete data redundancy and high throughput rates.
You must include this policy in a service profile, and that service profile must be associated with a server for
the policy to take effect.
Management Firmware Package
This policy enables you to specify a set of firmware versions that make up the management firmware package
(also known as a management firmware pack). The management firmware package includes the Cisco Integrated
Management Controller (CIMC) on the server. You do not need to use this package if you upgrade the CIMC
directly.
The firmware package is pushed to all servers associated with service profiles that include this policy. This
policy ensures that the CIMC firmware is identical on all servers associated with service profiles which use
the same policy. Therefore, if you move the service profile from one server to another, the firmware versions
are maintained.
You must include this policy in a service profile, and that service profile must be associated with a server for
it to take effect.
This policy is not dependent upon any other policies. However, you must ensure that the appropriate firmware
has been downloaded to the fabric interconnect.
Management Interfaces Monitoring Policy
This policy defines how the mgmt0 Ethernet interface on the fabric interconnect should be monitored. If Cisco
UCS detects a management interface failure, a failure report is generated. If the configured number of failure
reports is reached, the system assumes that the management interface is unavailable and generates a fault. By
default, the management interfaces monitoring policy is disabled.
If the affected management interface belongs to a fabric interconnect which is the managing instance, Cisco
UCS confirms that the subordinate fabric interconnect's status is up, that there are no current failure reports
logged against it, and then modifies the managing instance for the end-points.
If the affected fabric interconnect is currently the primary inside of a high availability setup, a failover of the
management plane is triggered. The data plane is not affected by this failover.
You can set the following properties related to monitoring the management interface:
• Type of mechanism used to monitor the management interface.
• Interval at which the management interface's status is monitored.
• Maximum number of monitoring attempts that can fail before the system assumes that the management
is unavailable and generates a fault message.
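The failure-report threshold can be sketched as follows. This Python function is illustrative only; how Cisco UCS Manager actually accumulates or resets failure reports between polls is an assumption here:

def monitor_mgmt_interface(poll_results, max_failure_reports: int) -> str:
    """Walk through successive poll results for the mgmt0 interface."""
    failure_reports = 0
    for reachable in poll_results:
        if reachable:
            failure_reports = 0      # assumption: a successful poll clears the count
        else:
            failure_reports += 1
            if failure_reports >= max_failure_reports:
                return "fault: management interface assumed unavailable"
    return "management interface considered available"

print(monitor_mgmt_interface([True, False, False, False], max_failure_reports=3))
# -> fault: management interface assumed unavailable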
Important
In the event of a management interface failure on a fabric interconnect, the managing instance may not change if one of the following occurs:
• A path to the end-point through the subordinate fabric interconnect does not exist.
• The management interface for the subordinate fabric interconnect has failed.
• The path to the end-point through the subordinate fabric interconnect has failed.

Network Control Policy
This policy configures the network control settings for the Cisco UCS domain, including the following:
• Whether the Cisco Discovery Protocol (CDP) is enabled or disabled
• How the VIF behaves if no uplink port is available in end-host mode
• The action that Cisco UCS Manager takes on the remote Ethernet interface, vEthernet interface, or vFibreChannel interface when the associated border port fails
• Whether the server can use different MAC addresses when sending packets to the fabric interconnect
• Whether MAC registration occurs on a per-VNIC basis or for all VLANs

Action on Uplink Fail
By default, the Action on Uplink Fail property in the network control policy is configured with a value of link-down. For adapters such as the Cisco UCS M81KR Virtual Interface Card, this default behavior directs Cisco UCS Manager to bring the vEthernet or vFibreChannel interface down if the associated border port fails. For Cisco UCS systems using a non-VM-FEX capable converged network adapter that supports both Ethernet and FCoE traffic, such as the Cisco UCS CNA M72KR-Q and the Cisco UCS CNA M72KR-E, this default behavior directs Cisco UCS Manager to bring the remote Ethernet interface down if the associated
border port fails. In this scenario, any vFibreChannel interfaces that are bound to the remote Ethernet interface
are brought down as well.
Note
Cisco UCS Manager, release 1.4(2) and earlier did not enforce the Action on Uplink Fail property for
those types of non-VM-FEX capable converged network adapters mentioned above. If the Action on Uplink Fail property was set to link-down, Cisco UCS Manager would ignore this setting and instead
issue a warning. In the current version of Cisco UCS Manager this setting is enforced. Therefore, if your
implementation includes one of those converged network adapters and the adapter is expected to handle
both Ethernet and FCoE traffic, we recommend that you configure the Action on Uplink Fail property
with a value of warning.
Please note that this configuration may result in an Ethernet teaming driver not being able to detect a link
failure when the border port goes down.
MAC Registration Mode
In Cisco UCS Manager, releases 1.4 and earlier, MAC addresses were installed on all of the VLANs belonging
to an interface. Starting in release 2.0, MAC addresses are installed only on the native VLAN by default. In
most implementations this maximizes the VLAN port count.
Note
If a trunking driver is being run on the host and the interface is in promiscuous mode, we recommend that
you set the Mac Registration Mode to All VLANs.
Power Control Policy
Cisco UCS uses the priority set in the power control policy, along with the blade type and configuration, to
calculate the initial power allocation for each blade within a chassis. During normal operation, the active
blades within a chassis can borrow power from idle blades within the same chassis. If all blades are active
and reach the power cap, service profiles with higher priority power control policies take precedence over
service profiles with lower priority power control policies.
Priority is ranked on a scale of 1-10, where 1 indicates the highest priority and 10 indicates lowest priority.
The default priority is 5.
For mission-critical applications, a special priority called no-cap is also available. Setting the priority to no-cap
prevents Cisco UCS from leveraging unused power from that particular blade server. The server is allocated
the maximum amount of power that that blade can reach.
Note
You must include this policy in a service profile and that service profile must be associated with a server for it to take effect.
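As an illustration of the priority scheme, the following Python sketch orders blades for capping when a chassis reaches its power cap. It is not the Cisco UCS allocation algorithm: blade type and configuration, which the real calculation also considers, are ignored, and the lowest-priority-first ordering is an assumption based on higher-priority policies taking precedence:

NO_CAP = "no-cap"

def capping_order(blade_priorities: dict) -> list:
    """Return blades in the order they would be capped, lowest priority first.

    Blades whose service profile uses the no-cap priority are excluded, since
    they are always allocated the maximum power the blade can reach.
    """
    cappable = [(blade, prio) for blade, prio in blade_priorities.items()
                if prio != NO_CAP]
    return [blade for blade, prio in sorted(cappable, key=lambda item: -item[1])]

priorities = {"blade-1": 1, "blade-2": 5, "blade-3": NO_CAP, "blade-4": 9}
print(capping_order(priorities))   # -> ['blade-4', 'blade-2', 'blade-1']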
Power Policy
The power policy is a global policy that specifies the redundancy for power supplies in all chassis in the Cisco
UCS domain. This policy is also known as the PSU policy.
For more information about power supply redundancy, see Cisco UCS 5108 Server Chassis Hardware
Installation Guide.
Quality of Service Policy
A quality of service (QoS) policy assigns a system class to the outgoing traffic for a vNIC or vHBA. This
system class determines the quality of service for that traffic. For certain adapters you can also specify additional
controls on the outgoing traffic, such as burst and rate.
You must include a QoS policy in a vNIC policy or vHBA policy and then include that policy in a service
profile to configure the vNIC or vHBA.
Rack Server Discovery Policy
The rack server discovery policy determines how the system reacts when you add a new rack-mount server.
Cisco UCS Manager uses the settings in the rack server discovery policy to determine whether any data on
the hard disks are scrubbed and whether server discovery occurs immediately or needs to wait for explicit
user acknowledgement.
Cisco UCS Manager cannot discover any rack-mount server that has not been correctly cabled and connected
to the fabric interconnects. For information about how to integrate a supported Cisco UCS rack-mount server
with Cisco UCS Manager, see the hardware installation guide for that server.
Server Autoconfiguration Policy
Cisco UCS Manager uses this policy to determine how to configure a new server. If you create a server
autoconfiguration policy, the following occurs when a new server starts:
1. The qualification in the server autoconfiguration policy is executed against the server.
2. If the server meets the required qualifications, the server is associated with a service profile created from the service profile template configured in the server autoconfiguration policy. The name of that service profile is based on the name given to the server by Cisco UCS Manager.
3. The service profile is assigned to the organization configured in the server autoconfiguration policy.
Server Discovery Policy
This discovery policy determines how the system reacts when you add a new server. If you create a server
discovery policy, you can control whether the system conducts a deep discovery when a server is added to a
chassis, or whether a user must first acknowledge the new server. By default, the system conducts a full
discovery.
If you create a server discovery policy, the following occurs when a new server starts:
1. The qualification in the server discovery policy is executed against the server.
2. If the server meets the required qualifications, Cisco UCS Manager applies the following to the server:
• Depending upon the option selected for the action, either discovers the new server immediately or waits for a user to acknowledge the new server
Server Inheritance Policy
This policy is invoked during the server discovery process to create a service profile for the server. All service
profiles created from this policy use the values burned into the blade at manufacture. The policy performs the
following:
• Analyzes the inventory of the server
• If configured, assigns the server to the selected organization
• Creates a service profile for the server with the identity burned into the server at manufacture
You cannot migrate a service profile created with this policy to another server.
Server Pool Policy
This policy is invoked during the server discovery process. It determines what happens if server pool policy
qualifications match a server to the target pool specified in the policy.
If a server qualifies for more than one pool and those pools have server pool policies, the server is added to
all those pools.
Server Pool Policy Qualifications
This policy qualifies servers based on the inventory of a server conducted during the discovery process. The
qualifications are individual rules that you configure in the policy to determine whether a server meets the
selection criteria. For example, you can create a rule that specifies the minimum memory capacity for servers
in a data center pool.
Qualifications are used in other policies to place servers, not just by the server pool policies. For example, if
a server meets the criteria in a qualification policy, it can be added to one or more server pools or have a
service profile automatically associated with it.
You can use the server pool policy qualifications to qualify servers according to the following criteria:
• Adapter type
• Chassis location
• Memory type and configuration
• Power group
• CPU cores, type, and configuration
• Storage configuration and capacity
• Server model
Depending upon the implementation, you may configure several policies with server pool policy qualifications, including the following:
• Server inheritance policy
• Server pool policy

vHBA Template
This template is a policy that defines how a vHBA on a server connects to the SAN. It is also referred to as a vHBA SAN connectivity template.
You need to include this policy in a service profile for it to take effect.

VM Lifecycle Policy
The VM lifecycle policy determines how long Cisco UCS Manager retains offline VMs and offline dynamic vNICs in its database. If a VM or dynamic vNIC remains offline after that period, Cisco UCS Manager deletes the object from its database.
All virtual machines (VMs) on Cisco UCS servers are managed by vCenter. Cisco UCS Manager cannot determine whether an inactive VM is temporarily shut down, has been deleted, or is in some other state that renders it inaccessible. Therefore, Cisco UCS Manager considers all inactive VMs to be in an offline state.
Cisco UCS Manager considers a dynamic vNIC to be offline when the associated VM is shut down, or the link between the fabric interconnect and the I/O module fails. On rare occasions, an internal error can also cause Cisco UCS Manager to consider a dynamic vNIC to be offline.
The default VM and dynamic vNIC retention period is 15 minutes. You can set that period to any value between 1 minute and 7200 minutes (5 days).
Note
The VMs that Cisco UCS Manager displays are for information and monitoring only. You cannot manage VMs through Cisco UCS Manager. Therefore, when you delete a VM from the Cisco UCS Manager database, you do not delete the VM from the server or from vCenter.

vNIC Template
This policy defines how a vNIC on a server connects to the LAN. This policy is also referred to as a vNIC LAN connectivity policy.
Beginning in Cisco UCS, Release 2.0(2), Cisco UCS Manager does not automatically create a VM-FEX port profile with the correct settings when you create a vNIC template. If you want to create a VM-FEX port profile, you must configure the target of the vNIC template as a VM.
You need to include this policy in a service profile for it to take effect.
Note
If your server has two Emulex or QLogic NICs (Cisco UCS CNA M71KR-E or Cisco UCS CNA M71KR-Q), you must configure vNIC policies for both adapters in your service profile to get a user-defined MAC address for both NICs. If you do not configure policies for both NICs, Windows still detects both of them in the PCI bus. Then, because the second eth is not part of your service profile, Windows assigns it a hardware MAC address. If you then move the service profile to a different server, Windows sees additional NICs because one NIC did not have a user-defined MAC address.

vNIC/vHBA Placement Policies
vNIC/vHBA placement policies are used to determine what types of vNICs or vHBAs can be assigned to the
physical adapters on a server. Each vNIC/vHBA placement policy contains four virtual network interface
connections (vCons) that are virtual representations of the physical adapters. When a vNIC/vHBA placement
policy is assigned to a service profile, and the service profile is associated with a server, the vCons in the
vNIC/vHBA placement policy are assigned to the physical adapters.
If you do not include a vNIC/vHBA placement policy in the service profile or you use the default configuration
for a server with two adapters, Cisco UCS Manager defaults to the All configuration and equally distributes
the vNICs and vHBAs between the adapters.
You can use this policy to assign vNICs or vHBAs to either of the two vCons. Cisco UCS Manager uses the
vCon assignment to determine how to assign the vNICs and vHBAs to the physical adapter during service
profile association.
• All—All configured vNICs and vHBAs can be assigned to the vCon, whether they are explicitly assigned
to it, unassigned, or dynamic.
• Assigned Only—vNICs and vHBAs must be explicitly assigned to the vCon. You can assign them
explicitly through the service profile or the properties of the vNIC or vHBA.
• Exclude Dynamic—Dynamic vNICs and vHBAs cannot be assigned to the vCon. The vCon can be
used for all static vNICs and vHBAs, whether they are unassigned or explicitly assigned to it.
• Exclude Unassigned—Unassigned vNICs and vHBAs cannot be assigned to the vCon. The vCon can
be used for dynamic vNICs and vHBAs and for static vNICs and vHBAs that are explicitly assigned to
it.
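The four selection preferences can be summarized in a small sketch. The Python function below is illustrative; the mode strings are shorthand for the options listed above rather than Cisco UCS Manager values:

def vcon_accepts(mode: str, explicitly_assigned: bool, dynamic: bool) -> bool:
    """Decide whether a vNIC or vHBA may be placed on a vCon under each mode.

    'explicitly_assigned' means the interface was explicitly assigned to this
    vCon; otherwise it is treated as unassigned.
    """
    if mode == "all":
        return True
    if mode == "assigned-only":
        return explicitly_assigned
    if mode == "exclude-dynamic":
        return not dynamic
    if mode == "exclude-unassigned":
        return explicitly_assigned or dynamic
    raise ValueError(f"unknown vCon selection preference: {mode}")

# A dynamic vNIC that was not explicitly assigned to the vCon:
print(vcon_accepts("exclude-dynamic", explicitly_assigned=False, dynamic=True))    # -> False
print(vcon_accepts("exclude-unassigned", explicitly_assigned=False, dynamic=True)) # -> True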
Operational Policies
Fault Collection Policy
The fault collection policy controls the lifecycle of a fault in a Cisco UCS domain, including when faults are
cleared, the flapping interval (the length of time between the fault being raised and the condition being cleared),
and the retention interval (the length of time a fault is retained in the system).
A fault in Cisco UCS has the following lifecycle:
1. A condition occurs in the system and Cisco UCS Manager raises a fault. This is the active state.
2. When the fault is alleviated, it enters a flapping or soaking interval that is designed to prevent flapping. Flapping occurs when a fault is raised and cleared several times in rapid succession. During the flapping interval, the fault retains its severity for the length of time specified in the fault collection policy.
3. If the condition reoccurs during the flapping interval, the fault returns to the active state. If the condition does not reoccur during the flapping interval, the fault is cleared.
4. The cleared fault enters the retention interval. This interval ensures that the fault reaches the attention of an administrator even if the condition that caused the fault has been alleviated and the fault has not been deleted prematurely. The retention interval retains the cleared fault for the length of time specified in the fault collection policy.
5. If the condition reoccurs during the retention interval, the fault returns to the active state. If the condition does not reoccur, the fault is deleted.
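This lifecycle can be sketched as a simple state machine. The following Python function is illustrative only; the timer-driven flapping and retention intervals are abstracted into explicit events:

def fault_lifecycle(events) -> str:
    """Step a single fault through the lifecycle described above.

    'events' is a sequence of strings: 'raise', 'alleviate', 'flap_expire',
    and 'retention_expire'. Returns the resulting state.
    """
    state = "none"
    for event in events:
        if event == "raise":
            state = "active"
        elif event == "alleviate" and state == "active":
            state = "flapping"        # flapping (soaking) interval starts
        elif event == "flap_expire" and state == "flapping":
            state = "cleared"         # retention interval starts
        elif event == "retention_expire" and state == "cleared":
            state = "deleted"
    return state

print(fault_lifecycle(["raise", "alleviate", "flap_expire", "retention_expire"]))
# -> deleted
print(fault_lifecycle(["raise", "alleviate", "raise"]))   # condition reoccurred
# -> active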
Flow Control Policy
Flow control policies determine whether the uplink Ethernet ports in a Cisco UCS domain send and receive
IEEE 802.3x pause frames when the receive buffer for a port fills. These pause frames request that the
transmitting port stop sending data for a few milliseconds until the buffer clears.
For flow control to work between a LAN port and an uplink Ethernet port, you must enable the corresponding
receive and send flow control parameters for both ports. For Cisco UCS, the flow control policies configure
these parameters.
When you enable the send function, the uplink Ethernet port sends a pause request to the network port if the
incoming packet rate becomes too high. The pause remains in effect for a few milliseconds before traffic is
reset to normal levels. If you enable the receive function, the uplink Ethernet port honors all pause requests
from the network port. All traffic is halted on that uplink port until the network port cancels the pause request.
Because you assign the flow control policy to the port, changes to the policy have an immediate effect on how
the port reacts to a pause frame or a full receive buffer.
Maintenance Policy
A maintenance policy determines how Cisco UCS Manager reacts when a change that requires a server reboot
is made to a service profile associated with a server or to an updating service profile bound to one or more
service profiles.
The maintenance policy specifies how Cisco UCS Manager deploys the service profile changes. The deployment
can occur in one of the following ways:
• Immediately
• When acknowledged by a user with admin privileges
• Automatically at the time specified in the schedule
If the maintenance policy is configured to deploy the change during a scheduled maintenance window, the
policy must include a valid schedule. The schedule deploys the changes in the first available maintenance
window.
Scrub Policy
This policy determines what happens to local data and to the BIOS settings on a server during the discovery
process and when the server is disassociated from a service profile. Depending upon how you configure a
scrub policy, the following can occur at those times:
Disk Scrub
One of the following occurs to the data on any local drives on disassociation:
• If enabled, destroys all data on any local drives
• If disabled, preserves all data on any local drives, including local storage configuration
BIOS Settings Scrub
One of the following occurs to the BIOS settings when a service profile containing the scrub policy is
disassociated from a server:
• If enabled, erases all BIOS settings for the server and resets them to the BIOS defaults for that server
type and vendor
• If disabled, preserves the existing BIOS settings on the server
Serial over LAN Policy
This policy sets the configuration for the serial over LAN connection for all servers associated with service
profiles that use the policy. By default, the serial over LAN connection is disabled.
If you implement a serial over LAN policy, we recommend that you also create an IPMI profile.
You must include this policy in a service profile and that service profile must be associated with a server for
it to take effect.
Statistics Collection Policy
A statistics collection policy defines how frequently statistics are to be collected (collection interval) and how
frequently the statistics are to be reported (reporting interval). Reporting intervals are longer than collection
intervals so that multiple statistical data points can be collected during the reporting interval, which provides
Cisco UCS Manager with sufficient data to calculate and report minimum, maximum, and average values.
For NIC statistics, Cisco UCS Manager displays the average, minimum, and maximum of the change since
the last collection of statistics. If the values are 0, there has been no change since the last collection.
Statistics can be collected and reported for the following five functional areas of the Cisco UCS system:
• Adapter—statistics related to the adapters
• Chassis—statistics related to the blade chassis
• Host—this policy is a placeholder for future support
• Port—statistics related to the ports, including server ports, uplink Ethernet ports, and uplink Fibre
Channel ports
• Server—statistics related to servers
Note
Cisco UCS Manager has one default statistics collection policy for each of the five functional areas. You
cannot create additional statistics collection policies and you cannot delete the existing default policies.
You can only modify the default policies.
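The relationship between the two intervals can be shown with a short sketch. The Python below is illustrative only; the function name, interval lengths, and sample values are assumptions, not values from Cisco UCS Manager.

# Aggregate samples gathered at the collection interval into the minimum,
# maximum, and average reported at the reporting interval.
def report(samples):
    return {
        "min": min(samples),
        "max": max(samples),
        "avg": sum(samples) / len(samples),
    }

# For example, with an assumed 30-second collection interval and a 2-minute
# reporting interval, four data points contribute to each report.
print(report([42.0, 47.5, 45.0, 44.5]))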
Statistics Threshold Policy
A statistics threshold policy monitors statistics about certain aspects of the system and generates an event if
the threshold is crossed. You can set both minimum and maximum thresholds. For example, you can configure
the policy to raise an alarm if the CPU temperature exceeds a certain value, or if a server is overutilized or
underutilized.
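The evaluation a threshold policy performs can be sketched as follows. The Python below is illustrative only; the function, parameters, and example values are assumptions and do not reflect how Cisco UCS Manager implements threshold monitoring.

# Raise an event when a statistic crosses a configured minimum or maximum.
def evaluate(value, minimum=None, maximum=None):
    if maximum is not None and value > maximum:
        return "above-maximum"
    if minimum is not None and value < minimum:
        return "below-minimum"
    return None

# Example: alarm if CPU temperature exceeds an assumed 85 degree C maximum.
event = evaluate(value=92.0, maximum=85.0)
if event:
    print("threshold crossed:", event)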
These threshold policies do not control the hardware or device-level thresholds enforced by endpoints, such
as the CIMC. Those thresholds are burned in to the hardware components at manufacture.
Cisco UCS enables you to configure statistics threshold policies for the following components:
• Servers and server components
• Uplink Ethernet ports
• Ethernet server ports, chassis, and fabric interconnects
• Fibre Channel port
Note
You cannot create or delete a statistics threshold policy for Ethernet server ports, uplink Ethernet ports,
or uplink Fibre Channel ports. You can only configure the existing default policy.
Pools
Pools are collections of identities, or physical or logical resources, that are available in the system. All pools
increase the flexibility of service profiles and allow you to centrally manage your system resources.
You can use pools to segment unconfigured servers or available ranges of server identity information into
groupings that make sense for the data center. For example, if you create a pool of unconfigured servers with
similar characteristics and include that pool in a service profile, you can use a policy to associate that service
profile with an available, unconfigured server.
If you pool identifying information, such as MAC addresses, you can pre-assign ranges for servers that will
host specific applications. For example, all database servers could be configured within the same range of
MAC addresses, UUIDs, and WWNs.
Server Pools
A server pool contains a set of servers. These servers typically share the same characteristics. Those
characteristics can be their location in the chassis, or an attribute such as server type, amount of memory,
local storage, type of CPU, or local drive configuration. You can manually assign a server to a server pool,
or use server pool policies and server pool policy qualifications to automate the assignment.
If your system implements multi-tenancy through organizations, you can designate one or more server pools
to be used by a specific organization. For example, a pool that includes all servers with two CPUs could be
assigned to the Marketing organization, while all servers with 64 GB memory could be assigned to the Finance
organization.
A server pool can include servers from any chassis in the system. A given server can belong to multiple server
pools.
MAC Pools
A MAC pool is a collection of network identities, or MAC addresses, that are unique in their layer 2
environment and are available to be assigned to vNICs on a server. If you use MAC pools in service profiles,
you do not have to manually configure the MAC addresses to be used by the server associated with the service
profile.
In a system that implements multi-tenancy, you can use the organizational hierarchy to ensure that MAC pools
can only be used by specific applications or business services. Cisco UCS Manager uses the name resolution
policy to assign MAC addresses from the pool.
To assign a MAC address to a server, you must include the MAC pool in a vNIC policy. The vNIC policy is
then included in the service profile assigned to that server.
You can specify your own MAC addresses or use a group of MAC addresses provided by Cisco.
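A MAC pool block is simply a contiguous range of addresses that Cisco UCS Manager hands out to vNICs. The following Python sketch expands such a block for illustration; the starting address is an example and the helper function is not part of any Cisco UCS API.

# Expand a block of MAC addresses from a starting address, the way a pool
# block provides sequential identities (illustrative only).
def mac_block(first_mac, size):
    value = int(first_mac.replace(":", ""), 16)
    return [
        ":".join(f"{(value + i):012X}"[j:j + 2] for j in range(0, 12, 2))
        for i in range(size)
    ]

print(mac_block("00:25:B5:00:00:00", 4))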
UUID Suffix Pools
A UUID suffix pool is a collection of SMBIOS UUIDs that are available to be assigned to servers. The first
group of digits, which constitutes the prefix of the UUID, is fixed. The remaining digits, the UUID suffix, are
variable. A UUID suffix pool avoids conflicts by ensuring that these variable values are unique for each server
associated with a service profile that uses the pool.
If you use UUID suffix pools in service profiles, you do not have to manually configure the UUID of the
server associated with the service profile.
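The split between a fixed prefix and a pooled suffix can be illustrated with a short sketch. The Python below is illustrative only; the prefix, suffix format, and helper name are placeholders, not values from a real system.

# Combine a fixed UUID prefix with suffixes drawn from a pool block so each
# server receives a unique SMBIOS UUID (illustrative only).
def uuid_from_pool(prefix, suffix):
    return f"{prefix}-{suffix}"

prefix = "12345678-1234-1234"                      # fixed portion (placeholder)
suffixes = [f"0000-{i:012X}" for i in range(3)]    # variable portion (placeholder)
print([uuid_from_pool(prefix, s) for s in suffixes])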
WWN Pools
A WWN pool is a collection of WWNs for use by the Fibre Channel vHBAs in a Cisco UCS domain. You
create separate pools for the following:
• WW node names assigned to the server
• WW port names assigned to the vHBA
Important
A WWN pool can include only WWNNs or WWPNs in the ranges from 20:00:00:00:00:00:00:00 to
20:FF:FF:FF:FF:FF:FF:FF or from 50:00:00:00:00:00:00:00 to 5F:FF:FF:FF:FF:FF:FF:FF. All other
WWN ranges are reserved. To ensure the uniqueness of the Cisco UCS WWNNs and WWPNs in the SAN
fabric, we recommend that you use the following WWN prefix for all blocks in a pool:
20:00:00:25:B5:XX:XX:XX
If you use WWN pools in service profiles, you do not have to manually configure the WWNs that will be
used by the server associated with the service profile. In a system that implements multi-tenancy, you can use
a WWN pool to control the WWNs used by each organization.
You assign WWNs to pools in blocks. For each block or individual WWN, you can assign a boot target.
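The range restriction in the Important note above can be checked with a short sketch. The Python below is illustrative only; the helper name is an assumption, and the ranges are the ones listed above.

# Check that a WWN falls inside the supported ranges
# (20:00:... through 20:FF:... or 50:00:... through 5F:FF:...).
def wwn_in_supported_range(wwn):
    value = int(wwn.replace(":", ""), 16)
    return (0x2000000000000000 <= value <= 0x20FFFFFFFFFFFFFF or
            0x5000000000000000 <= value <= 0x5FFFFFFFFFFFFFFF)

print(wwn_in_supported_range("20:00:00:25:B5:00:00:01"))   # True
print(wwn_in_supported_range("10:00:00:00:C9:12:34:56"))   # False, reserved range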
WWNN Pools
A WWNN pool is a WWN pool that contains only WW node names. If you include a pool of WWNNs in a
service profile, the associated server is assigned a WWNN from that pool.
WWPN Pools
A WWPN pool is a WWN pool that contains only WW port names. If you include a pool of WWPNs in a
service profile, the port on each vHBA of the associated server is assigned a WWPN from that pool.
Management IP Pool
The management IP pool is a collection of external IP addresses. Cisco UCS Manager reserves each block of
IP addresses in the management IP pool for external access that terminates in the CIMC on a server.
You can configure service profiles and service profile templates to use IP addresses from the management IP
pool. You cannot configure servers to use the management IP pool.
All IP addresses in the management IP pool must be in the same subnet as the IP address of the fabric
interconnect.
Note
The management IP pool must not contain any IP addresses that have been assigned as static IP addresses
for a server or service profile.
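Both requirements, the shared subnet and the exclusion of static addresses, can be illustrated with a short sketch. The Python below is illustrative only; the network, addresses, and helper function are examples, not values from any real configuration.

import ipaddress

fi_network = ipaddress.ip_network("192.0.2.0/24")            # subnet of the fabric interconnect (example)
static_ips = {ipaddress.ip_address("192.0.2.50")}            # addresses already assigned statically (example)

def valid_pool_block(first, last):
    """Return True if every address in the block is in the fabric interconnect
    subnet and is not already used as a static address."""
    lo, hi = ipaddress.ip_address(first), ipaddress.ip_address(last)
    block = [ipaddress.ip_address(i) for i in range(int(lo), int(hi) + 1)]
    return all(ip in fi_network and ip not in static_ips for ip in block)

print(valid_pool_block("192.0.2.100", "192.0.2.110"))   # True
print(valid_pool_block("192.0.2.45", "192.0.2.55"))     # False, contains a static address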
Traffic Management
Oversubscription
Oversubscription occurs when multiple network devices are connected to the same fabric interconnect port.
This practice optimizes fabric interconnect use, since ports rarely run at maximum speed for any length of
time. As a result, when configured correctly, oversubscription allows you to take advantage of unused
bandwidth. However, incorrectly configured oversubscription can result in contention for bandwidth and a
lower quality of service to all services that use the oversubscribed port.
For example, oversubscription can occur if four servers share a single uplink port and all four servers attempt
to send data at a cumulative rate higher than the available bandwidth of the uplink port.
Oversubscription Considerations
The following elements can impact how you configure oversubscription in a Cisco UCS domain:
Ratio of Server-Facing Ports to Uplink Ports
You need to know how many server-facing ports and uplink ports are in the system, because that ratio
can impact performance. For example, if your system has twenty ports that can communicate down to the
servers and only two ports that can communicate up to the network, your uplink ports will be oversubscribed.
In this situation, the amount of traffic created by the servers can also affect performance.
Number of Uplink Ports from Fabric Interconnect to Network
You can choose to add more uplink ports between the Cisco UCS fabric interconnect and the upper layers of
the LAN to increase bandwidth. In Cisco UCS, you must have at least one uplink port per fabric interconnect
to ensure that all servers and NICs have access to the LAN. The number of LAN uplinks should be determined
by the aggregate bandwidth needed by all Cisco UCS servers.
For the 6100 series fabric interconnects, Fibre Channel uplink ports are available on the expansion slots only.
You must add more expansion slots to increase the number of available Fibre Channel uplinks. Ethernet uplink
ports can exist on the fixed slot and on expansion slots.
For the 6200 series fabric interconnects running Cisco UCS Manager, version 2.0 and higher, Ethernet uplink
ports and Fibre Channel uplink ports are both configurable on the base module, as well as on the expansion
module.
For example, if you have two Cisco UCS 5100 series chassis that are fully populated with half-width Cisco
UCS B200-M1 servers, you have 16 servers. In a cluster configuration, with one LAN uplink per fabric
interconnect, these 16 servers share 20 Gbps of LAN bandwidth. If more capacity is needed, more uplinks from
the fabric interconnect should be added. We recommend that you have a symmetric uplink configuration in
cluster configurations. In the same example, if four uplinks are used in each fabric interconnect, the 16 servers
share 80 Gbps of bandwidth, so each has approximately 5 Gbps of capacity. When multiple uplinks are
used on a Cisco UCS fabric interconnect, the network design team should consider using a port channel to
make best use of the capacity.
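The arithmetic in this example can be captured in a short sketch, assuming two fabric interconnects and 10-Gbps uplinks; the helper name and default values are illustrative.

# Average bandwidth per server for a cluster of two fabric interconnects
# with a given number of 10-Gbps LAN uplinks on each.
def bandwidth_per_server_gbps(uplinks_per_fi, servers, fis=2, link_gbps=10):
    return (uplinks_per_fi * fis * link_gbps) / servers

print(bandwidth_per_server_gbps(uplinks_per_fi=1, servers=16))   # 1.25 Gbps average share of 20 Gbps
print(bandwidth_per_server_gbps(uplinks_per_fi=4, servers=16))   # 5.0 Gbps, matching the example above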
Number of Uplink Ports from I/O Module to Fabric Interconnect
You can choose to add more bandwidth between I/O module and fabric interconnect by using more uplink
ports and increasing the number of cables. In Cisco UCS, you can have one, two, or four cables connecting
an I/O module to a Cisco UCS 6100 series fabric interconnect. You can have up to eight cables if you are
connecting a 2208 I/O module and a 6248 fabric interconnect. The number of cables determines the number
of active uplink ports and the oversubscription ratio.
Number of Active Links from Server to Fabric Interconnect
The amount of non-oversubscribed bandwidth available to each server depends on the number of I/O modules
used and the number of cables used to connect those I/O modules to the fabric interconnects. Having a second
I/O module in place provides additional bandwidth and redundancy to the servers. This level of flexibility in
design ensures that you can provide anywhere from 80 Gbps (two I/O modules with four links each) to 10
Gbps (one I/O module with one link) to the chassis.
With 80 Gbps to the chassis, each half-width server in the Cisco UCS domain can get up to 10 Gbps in a
non-oversubscribed configuration, with an ability to use up to 20 Gbps with 2:1 oversubscription.
Guidelines for Estimating Oversubscription
When you estimate the optimal oversubscription ratio for a fabric interconnect port, consider the following
guidelines:
Cost/Performance Slider
The prioritization of cost and performance is different for each data center and has a direct impact on the
configuration of oversubscription. When you plan hardware usage for oversubscription, you need to know
where the data center is located on this slider. For example, oversubscription can be minimized if the data
center is more concerned with performance than cost. However, cost is a significant factor in most data centers,
and oversubscription requires careful planning.
Bandwidth Usage
The estimated bandwidth that you expect each server to actually use is important when you determine the
assignment of each server to a fabric interconnect port and, as a result, the oversubscription ratio of the ports.
For oversubscription, you must consider how many Gbps of traffic the server will consume on average, the
ratio of configured bandwidth to used bandwidth, and the times when high bandwidth use will occur.
Network Type
The network type is only relevant to traffic on uplink ports, because FCoE does not exist outside Cisco UCS.
The rest of the data center network only differentiates between LAN and SAN traffic. Therefore, you do not
need to take the network type into consideration when you estimate oversubscription of a fabric interconnect
port.
Pinning
Pinning in Cisco UCS is only relevant to uplink ports. You can pin Ethernet or FCoE traffic from a given
server to a specific uplink Ethernet port or uplink FC port.
When you pin the NIC and HBA of both physical and virtual servers to uplink ports, you give the fabric
interconnect greater control over the unified fabric. This control ensures more optimal utilization of uplink
port bandwidth.
Cisco UCS uses pin groups to manage which NICs, vNICs, HBAs, and vHBAs are pinned to an uplink port.
To configure pinning for a server, you can either assign a pin group directly, or include a pin group in a vNIC
policy, and then add that vNIC policy to the service profile assigned to that server. All traffic from the vNIC
or vHBA on the server travels through the I/O module to the same uplink port.
Pinning Server Traffic to Server Ports
All server traffic travels through the I/O module to server ports on the fabric interconnect. The number of
links for which the chassis is configured determines how this traffic is pinned.
The pinning determines which server traffic goes to which server port on the fabric interconnect. This pinning
is fixed. You cannot modify it. As a result, you must consider the server location when you determine the
appropriate allocation of bandwidth for a chassis.
Note
You must review the allocation of ports to links before you allocate servers to slots. The cabled ports are
not necessarily port 1 and port 2 on the I/O module. If you change the number of links between the fabric
interconnect and the I/O module, you must reacknowledge the chassis to have the traffic rerouted.
All port numbers refer to the fabric interconnect-side ports on the I/O module.
Chassis with One I/O Module (Not Configured for Fabric Port Channels)
Note
If the adapter in a server supports and is configured for adapter port channels, those port channels are
pinned to the same link as described in the following table. If the I/O module in the chassis supports and
is configured for fabric port channels, the server slots are pinned to a fabric port channel rather than to an
individual link.
The following list shows, for each number of active links on the chassis, the server slots that are pinned to
each link:
• 2 links: Link 1 carries server slots 1, 3, 5, and 7; Link 2 carries server slots 2, 4, 6, and 8; Links 3 through
8 carry no server slots.
• 4 links: Link 1 carries server slots 1 and 5; Link 2 carries server slots 2 and 6; Link 3 carries server slots
3 and 7; Link 4 carries server slots 4 and 8; Links 5 through 8 carry no server slots.
• 8 links: Link 1 carries server slot 1; Link 2 carries server slot 2; Link 3 carries server slot 3; Link 4 carries
server slot 4; Link 5 carries server slot 5; Link 6 carries server slot 6; Link 7 carries server slot 7; Link 8
carries server slot 8.
• Fabric port channel: All server slots are pinned to the fabric port channel.
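The fixed pinning shown above follows a simple round-robin pattern over the active links. The following Python sketch reproduces that pattern for illustration; it is a model of the table, not a Cisco UCS Manager function, and does not apply when fabric port channels are configured.

# Server slot s is pinned to link ((s - 1) mod number_of_active_links) + 1.
def pinned_link(server_slot, active_links):
    return ((server_slot - 1) % active_links) + 1

for links in (2, 4, 8):
    print(links, "links:", {slot: pinned_link(slot, links) for slot in range(1, 9)})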
Chassis with Two I/O Modules
If a chassis has two I/O modules, traffic from one I/O module goes to one of the fabric interconnects and
traffic from the other I/O module goes to the second fabric interconnect. You cannot connect two I/O modules
to a single fabric interconnect.
The following table shows how server traffic is pinned, based on the fabric interconnect that is configured
in the vNIC:
Fabric Interconnect Configured in vNIC: Server Traffic Path
• A: All server traffic goes to fabric interconnect A. If A fails, the server traffic does not fail over to B.
• B: All server traffic goes to fabric interconnect B. If B fails, the server traffic does not fail over to A.
• A-B: All server traffic goes to fabric interconnect A. If A fails, the server traffic fails over to B.
• B-A: All server traffic goes to fabric interconnect B. If B fails, the server traffic fails over to A.
Guidelines for Pinning
When you determine the optimal configuration for pin groups and pinning for an uplink port, consider the
estimated bandwidth usage for the servers. If you know that some servers in the system will use a lot of
bandwidth, ensure that you pin these servers to different uplink ports.
Quality of Service
Cisco UCS provides the following methods to implement quality of service:
• System classes that specify the global configuration for certain types of traffic across the entire system
• QoS policies that assign system classes for individual vNICs
• Flow control policies that determine how uplink Ethernet ports handle pause frames
System Classes
Cisco UCS uses Data Center Ethernet (DCE) to handle all traffic inside a Cisco UCS domain. This industry
standard enhancement to Ethernet divides the bandwidth of the Ethernet pipe into eight virtual lanes. Two
virtual lanes are reserved for internal system and management traffic. You can configure quality of service
for the other six virtual lanes. System classes determine how the DCE bandwidth in these six virtual lanes is
allocated across the entire Cisco UCS domain.
Each system class reserves a specific segment of the bandwidth for a specific type of traffic. This provides a
level of traffic management, even in an oversubscribed system. For example, you can configure the Fibre
Channel Priority system class to determine the percentage of DCE bandwidth allocated to FCoE traffic.
The following table describes the system classes that you can configure:
Table 5: System Classes
System Class: Platinum, Gold, Silver, and Bronze
Description: A configurable set of system classes that you can include in the QoS policy for a service profile.
Each system class manages one lane of traffic. All properties of these system classes are available for you to
assign custom settings and policies.
System Class: Best Effort
Description: A system class that sets the quality of service for the lane reserved for Basic Ethernet traffic.
Some properties of this system class are preset and cannot be modified. For example, this class has a drop
policy that allows it to drop data packets if required. You cannot disable this system class.
System Class: Fibre Channel
Description: A system class that sets the quality of service for the lane reserved for Fibre Channel over
Ethernet traffic. Some properties of this system class are preset and cannot be modified. For example, this
class has a no-drop policy that ensures it never drops data packets. You cannot disable this system class.
QoS Policies
A quality of service (QoS) policy assigns a system class to the outgoing traffic for a vNIC or vHBA. This
system class determines the quality of service for that traffic. For certain adapters you can also specify additional
controls on the outgoing traffic, such as burst and rate.
You must include a QoS policy in a vNIC policy or vHBA policy and then include that policy in a service
profile to configure the vNIC or vHBA.
Flow Control Policy
Flow control policies determine whether the uplink Ethernet ports in a Cisco UCS domain send and receive
IEEE 802.3x pause frames when the receive buffer for a port fills. These pause frames request that the
transmitting port stop sending data for a few milliseconds until the buffer clears.
For flow control to work between a LAN port and an uplink Ethernet port, you must enable the corresponding
receive and send flow control parameters for both ports. For Cisco UCS, the flow control policies configure
these parameters.
When you enable the send function, the uplink Ethernet port sends a pause request to the network port if the
incoming packet rate becomes too high. The pause remains in effect for a few milliseconds before traffic is
reset to normal levels. If you enable the receive function, the uplink Ethernet port honors all pause requests
from the network port. All traffic is halted on that uplink port until the network port cancels the pause request.
Because you assign the flow control policy to the port, changes to the policy have an immediate effect on how
the port reacts to a pause frame or a full receive buffer.
Opt-In Features
Each Cisco UCS domain is licensed for all functionality. Depending upon how the system is configured, you
can decide to opt in to some features or opt out of them for easier integration into an existing environment. If a
process change happens, you can change your system configuration and include one or both of the opt-in
features.
The opt-in features are as follows:
• Stateless computing, which takes advantage of mobile service profiles with pools and policies where
each component, such as a server or an adapter, is stateless.
• Multi-tenancy, which uses organizations and role-based access control to divide the system into smaller
logical segments.
Stateless Computing
Stateless computing allows you to use a service profile to apply the personality of one server to a different
server in the same Cisco UCS domain. The personality of the server includes the elements that identify that
server and make it unique in the Cisco UCS domain, such as its identity information (for example, the UUID,
MAC addresses, and WWNs assigned through the service profile). If you change any of these elements, the
server could lose its ability to access, use, or even boot its operating system and applications.
Stateless computing creates a dynamic server environment with highly flexible servers. Every physical server
in a Cisco UCS domain remains anonymous until you associate a service profile with it. At that point, the
server takes on the identity configured in the service profile. If you no longer need a business service on that
server, you can
shut it down, disassociate the service profile, and then associate another service profile to create a different
identity for the same physical server. The "new" server can then host another business service.
To take full advantage of the flexibility of statelessness, the optional local disks on the servers should only
be used for swap or temp space and not to store operating system or application data.
You can choose to fully implement stateless computing for all physical servers in a Cisco UCS domain, to
not have any stateless servers, or to have a mix of the two types.
If You Opt In to Stateless Computing
Each physical server in the Cisco UCS domain is defined through a service profile. Any server can be used
to host one set of applications, then reassigned to another set of applications or business services, if required
by the needs of the data center.
You create service profiles that point to policies and pools of resources that are defined in the Cisco UCS
domain. The server pools, WWN pools, and MAC pools ensure that all unassigned resources are available
on an as-needed basis. For example, if a physical server fails, you can immediately assign the service profile
to another server. Because the service profile provides the new server with the same identity as the original
server, including WWN and MAC address, the rest of the data center infrastructure sees it as the same server
and you do not need to make any configuration changes in the LAN or SAN.
If You Opt Out of Stateless Computing
Each server in the Cisco UCS domain is treated as a traditional rack mount server.
You create service profiles that inherit the identity information burned into the hardware and use these profiles
to configure LAN or SAN connectivity for the server. However, if the server hardware fails, you cannot
reassign the service profile to a new server.
Multi-Tenancy
Multi-tenancy allows you to divide up the large physical infrastructure of a Cisco UCS domain into logical
entities known as organizations. As a result, you can achieve a logical isolation between organizations without
providing a dedicated physical infrastructure for each organization.
You can assign unique resources to each tenant through the related organization, in the multi-tenant
environment. These resources can include different policies, pools, and quality of service definitions. You
can also implement locales to assign or restrict user privileges and roles by organization, if you do not want
all users to have access to all organizations.
If you set up a multi-tenant environment, all organizations are hierarchical. The top-level organization is
always root. The policies and pools that you create in root are system-wide and are available to all organizations
in the system. However, any policies and pools created in other organizations are only available to organizations
that are below them in the same hierarchy. For example, if a system has organizations named Finance and HR
that are not in the same hierarchy, Finance cannot use any policies in the HR organization, and HR cannot
access any policies in the Finance organization. However, both Finance and HR can use policies and pools
in the root organization.
If you create organizations in a multi-tenant environment, you can also set up one or more of the following
for each organization or for a sub-organization in the same hierarchy:
• Resource pools
• Policies
• Service profiles
• Service profile templates
If You Opt In to Multi-Tenancy
Each Cisco UCS domain is divided into several distinct organizations. The types of organizations you create
in a multi-tenancy implementation depend upon the business needs of the company. Examples include
organizations that represent the following:
• Enterprise groups or divisions within a company, such as marketing, finance, engineering, or human
resources
• Different customers or name service domains, for service providers
You can create locales to ensure that users have access only to those organizations that they are authorized
to administer.
If You Opt Out of Multi-Tenancy
The Cisco UCS domain remains a single logical entity with everything in the root organization. All policies
and resource pools can be assigned to any server in the Cisco UCS domain.
Virtualization in Cisco UCS
Overview of Virtualization
Virtualization allows the creation of multiple virtual machines (VMs) to run in isolation, side by side on the
same physical machine.
Each virtual machine has its own set of virtual hardware (RAM, CPU, NIC) upon which an operating system
and fully configured applications are loaded. The operating system sees a consistent, normalized set of hardware
regardless of the actual physical hardware components.
In a virtual machine, both hardware and software are encapsulated in a single file for rapid copying,
provisioning, and moving between physical servers. You can move a virtual machine, within seconds, from
one physical server to another for zero-downtime maintenance and continuous workload consolidation.
The virtual hardware makes it possible for many servers, each running in an independent virtual machine, to
run on a single physical server. The advantages of virtualization include better use of computing resources,
greater server density, and seamless server migration.
A virtualized server implementation consists of one or more VMs running as 'guests' on a single physical
server. The guest VMs are hosted and managed by a software layer called the hypervisor or virtual machine
manager (VMM). The hypervisor typically presents a virtual network interface to each VM and performs
Layer 2 switching of traffic from a VM to other local VMs or to a physical interface to the external network.
Working with a Cisco virtual interface card (VIC) adapter, Cisco Virtual Machine Fabric Extender (VM-FEX)
bypasses software-based switching of VM traffic by the hypervisor in favor of external hardware-based
switching in the fabric interconnect. This method results in a reduced load on the server CPU, faster switching,
and the ability to apply a rich set of network management features to local and remote traffic.
VM-FEX extends the (prestandard) IEEE 802.1Qbh port extender architecture to the VMs, providing each
VM interface with a virtual Peripheral Component Interconnect Express (PCIe) device and a virtual port on
a switch. This solution allows precise rate limiting and quality of service (QoS) guarantees on the VM interface.
Virtualization with Network Interface Cards and Converged Network Adapters
Network interface card (NIC) and converged network adapters support virtualized environments with the
standard VMware integration, with ESX installed on the server and all virtual machine management performed
through VMware vCenter.
Portability of Virtual Machines
If you implement service profiles, you retain the ability to easily move a server identity from one server to
another. After you image the new server, ESX treats that server as if it were the original.
Communication between Virtual Machines on the Same Server
These adapters implement the standard communications between virtual machines on the same server. If an
ESX host includes multiple virtual machines, all communications must go through the virtual switch on the
server.
If the system uses the native VMware drivers, the virtual switch is out of the network administrator's domain
and is not subject to any network policies. As a result, for example, QoS policies on the network are not
applied to any data packets traveling from VM1 to VM2 through the virtual switch.
If the system includes another virtual switch, such as the Nexus 1000V, that virtual switch is subject to the
network policies configured on that switch by the network administrator.
Virtualization with a Virtual Interface Card Adapter
A Cisco VIC adapter, such as the Cisco UCS M81KR Virtual Interface Card, is a converged network adapter
(CNA) designed for both single-OS and VM-based deployments. The VIC adapter supports static or dynamic
virtualized interfaces, including up to 128 virtual network interface cards (vNICs).
VIC adapters support VM-FEX to provide hardware-based switching of traffic to and from virtual machine
interfaces.
CHAPTER 3
• Tasks You Can Perform in Cisco UCS Manager, page 44
• Tasks You Cannot Perform in Cisco UCS Manager, page 46
• Cisco UCS Manager in a High Availability Environment, page 46
About Cisco UCS Manager
Cisco UCS Manager is the management system for all components in a Cisco UCS domain. Cisco UCS Manager
runs within the fabric interconnect. You can use any of the interfaces available with this management service
to access, configure, administer, and monitor the network and server resources for all chassis connected to
the fabric interconnect.
Multiple Management Interfaces
Cisco UCS Manager includes the following interfaces you can use to manage a Cisco UCS domain:
• Cisco UCS Manager GUI
• Cisco UCS Manager CLI
• XML API
• KVM
• IPMI
Almost all tasks can be performed in any of the interfaces, and the results of tasks performed in one interface
are automatically displayed in another.
However, you cannot do the following:
• Use Cisco UCS Manager GUI to invoke Cisco UCS Manager CLI.
• View the results of a command invoked through Cisco UCS Manager CLI in Cisco UCS Manager GUI.
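As a hedged illustration of scripted access, the sketch below sends an aaaLogin request to the XML API using the third-party requests library. The URL path is a placeholder; consult the Cisco UCS Manager XML API Programmer's Guide, or use the Cisco UCS Python SDK, for the exact endpoint and the full set of methods.

import requests  # third-party library, assumed available

# Placeholder endpoint; the real XML API path is documented in the
# XML API Programmer's Guide for your release.
UCSM_XML_API = "https://UCSManager_IP/PLACEHOLDER_XML_API_PATH"

def xml_api_login(username, password):
    # aaaLogin is a documented XML API method; the reply carries an outCookie
    # that later requests must present.
    body = '<aaaLogin inName="{}" inPassword="{}" />'.format(username, password)
    # verify=False only because self-signed certificates are common on the
    # management interface; validate certificates in real deployments.
    response = requests.post(UCSM_XML_API, data=body, verify=False)
    response.raise_for_status()
    return response.text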
Cisco UCS Manager centralizes the management of resources and devices, rather than using multiple
management points. This centralized management includes management of the following devices in a Cisco
UCS domain:
• Fabric interconnects.
• Software switches for virtual servers.
• Power and environmental management for chassis and servers.
• Configuration and firmware updates for server network interfaces (Ethernet NICs and converged network
adapters).
• Firmware and BIOS settings for servers.
Support for Virtual and Physical Servers
Cisco UCS Manager abstracts server state information—including server identity, I/O configuration, MAC
addresses and World Wide Names, firmware revision, and network profiles—into a service profile. You can
apply the service profile to any server resource in the system, providing the same flexibility and support to
physical servers, virtual servers, and virtual machines connected to a virtual device provided by a VIC adapter.
Role-Based Administration and Multi-Tenancy Support
Cisco UCS Manager supports flexibly defined roles so that data centers can use the same best practices with
which they manage discrete servers, storage, and networks to operate a Cisco UCS domain. You can create
user roles with privileges that reflect user responsibilities in the data center. For example, you can create the
following:
• Server administrator roles with control over server-related configurations.
• Storage administrator roles with control over tasks related to the SAN.
• Network administrator roles with control over tasks related to the LAN.
Cisco UCS is multi-tenancy ready, exposing primitives that allow systems management software using the
API to get controlled access to Cisco UCS resources. In a multi-tenancy environment, Cisco UCS Manager
enables you to create locales for user roles that can limit the scope of a user to a particular organization.
Tasks You Can Perform in Cisco UCS Manager
You can use Cisco UCS Manager to perform management tasks for all physical and virtual devices within a
Cisco UCS domain.
Cisco UCS Hardware Management
You can use Cisco UCS Manager to manage all hardware within a Cisco UCS domain, including chassis,
fabric interconnects, servers, and I/O modules.
Cisco UCS Resource Management
You can use Cisco UCS Manager to create and manage all resources within a Cisco UCS domain, including
the following:
• Servers
• WWN addresses
• MAC addresses
• UUIDs
• Bandwidth
Server Administration
A server administrator can use Cisco UCS Manager to perform server management tasks within a Cisco UCS
domain, including the following:
• Create server pools and policies related to those pools, such as qualification policies
• Create policies for the servers, such as discovery policies, scrub policies, and IPMI policies
• Create service profiles and, if desired, service profile templates
• Apply service profiles to servers
• Monitor faults, alarms, and the status of equipment
Network Administration
A network administrator can use Cisco UCS Manager to perform the tasks required to create the LAN
configuration for a Cisco UCS domain, including the following:
• Configure uplink ports, port channels, and LAN PIN groups
• Create VLANs
• Configure the quality of service classes and definitions
• Create the pools and policies related to network configuration, such as MAC address pools and Ethernet
adapter profiles
Storage Administration
A storage administrator can use Cisco UCS Manager to perform the tasks required to create the SAN
configuration for a Cisco UCS domain, including the following:
• Configure ports, port channels, and SAN PIN groups
• Create VSANs
• Configure the quality of service classes and definitions
• Create the pools and policies related to the network configuration, such as WWN pools and Fibre Channel
adapter profiles
Tasks You Cannot Perform in Cisco UCS Manager
You cannot use Cisco UCS Manager to perform certain system management tasks that are not specifically
related to device management within a Cisco UCS domain.
No Cross-System Management
You cannot use Cisco UCS Manager to manage systems or devices that are outside the Cisco UCS domain
where Cisco UCS Manager is located. For example, you cannot manage heterogeneous environments, such
as non-Cisco UCS x86 systems, SPARC systems, or PowerPC systems.
No Operating System or Application Provisioning or Management
Cisco UCS Manager provisions servers and, as a result, exists below the operating system on a server. Therefore,
you cannot use it to provision or manage operating systems or applications on servers. For example, you
cannot do the following:
• Deploy an OS, such as Windows or Linux
• Deploy patches for software, such as an OS or an application
• Install base software components, such as anti-virus software, monitoring agents, or backup clients
• Install software applications, such as databases, application server software, or web servers
• Perform operator actions, including restarting an Oracle database, restarting printer queues, or handling
non-Cisco UCS user accounts
• Configure or manage external storage on the SAN or NAS storage
Cisco UCS Manager in a High Availability Environment
In a high availability environment with two fabric interconnects, you can run a separate instance of Cisco
UCS Manager on each fabric interconnect. The Cisco UCS Manager on the primary fabric interconnect acts
as the primary management instance, and the Cisco UCS Manager on the other fabric interconnect is the
subordinate management instance.
The two instances of Cisco UCS Manager communicate across a private network between the L1 and L2
Ethernet ports on the fabric interconnects. Configuration and status information is communicated across this
private network to ensure that all management information is replicated. This ongoing communication ensures
that the management information for Cisco UCS persists even if the primary fabric interconnect fails. In
addition, the "floating" management IP address that runs on the primary Cisco UCS Manager ensures a smooth
transition in the event of a failover to the subordinate fabric interconnect.
• Logging in to Cisco UCS Manager GUI through HTTPS, page 53
• Logging in to Cisco UCS Manager GUI through HTTP, page 54
• Logging Off Cisco UCS Manager GUI, page 54
• Web Session Limits, page 55
• Pre-Login Banner, page 56
• Cisco UCS Manager GUI Properties, page 57
• Determining the Acceptable Range of Values for a Field, page 60
• Determining Where a Policy Is Used, page 60
• Determining Where a Pool Is Used, page 61
• Copying the XML, page 61
Overview of Cisco UCS Manager GUI
Cisco UCS Manager GUI is the Java application that provides a graphical interface to Cisco UCS Manager. You
can start and access Cisco UCS Manager GUI from any computer that meets the requirements listed in the
System Requirements section of the Cisco UCS Software Release Notes.
Each time you start Cisco UCS Manager GUI, Cisco UCS Manager uses Java Web Start technology to cache
the current version of the application on your computer. As a result, you do not have to download the application
every time you log in. You only have to download the application the first time that you log in from a computer
after the Cisco UCS Manager software has been updated on a system.
Tip
The title bar displays the name of the Cisco UCS domain to which you are connected.
Fault Summary
The Fault Summary area displays in the upper left of Cisco UCS Manager GUI. This area displays a summary
of all faults that have occurred in the Cisco UCS domain.
Each type of fault is represented by a different icon. The number below each icon indicates how many faults
of that type have occurred in the system. If you click an icon, Cisco UCS Manager GUI opens the Faults tab
in the Work area and displays the details of all faults of that type.
The following table describes the types of faults each icon in the Fault Summary area represents:
Fault Type: Critical Alarms
Description: Critical problems exist with one or more components. These issues should be researched and
fixed immediately.
Fault Type: Major Alarms
Description: Serious problems exist with one or more components. These issues should be researched and
fixed immediately.
Fault Type: Minor Alarms
Description: Problems exist with one or more components that might adversely affect system performance.
These issues should be researched and fixed as soon as possible before they become major or critical issues.
Fault Type: Warning Alarms
Description: Potential problems exist with one or more components that might adversely affect system
performance if they are allowed to continue. These issues should be researched and fixed as soon as possible
before the problem grows worse.
Tip
If you only want to see faults for a specific object, navigate to that object and then review the Faults tab
for that object.
Navigation Pane
The Navigation pane displays on the left side of Cisco UCS Manager GUI below the Fault Summary area.
This pane provides centralized navigation to all equipment and other components in the Cisco UCS domain.
When you select a component in the Navigation pane, the object displays in the Work area.
The Navigation pane has six tabs. Each tab includes the following elements:
• A Filter combo box that you can use to filter the navigation tree to view all nodes or only one node.
• An expandable navigation tree that you can use to access all components on that tab. An icon next to an
folder indicates that the node or folder has subcomponents.
Equipment Tab
This tab contains a basic inventory of the equipment in the Cisco UCS domain. A system or server administrator
can use this tab to access and manage the chassis, fabric interconnects, servers, and other hardware. A red,
orange, or yellow rectangle around a device name indicates that the device has a fault.
The major nodes below the Equipment node in this tab are the following:
• Chassis
• Fabric Interconnects
Servers Tab
This tab contains the server-related components, such as service profiles, policies, and pools. A server
administrator typically accesses and manages the components on this tab.
The major nodes below the Servers node in this tab are the following:
• Service Profiles
• Service Profile Templates
• Policies
• Pools
LAN Tab
This tab contains the components related to LAN configuration, such as LAN pin groups, quality of service
classes, VLANs, policies, pools, and the internal domain. A network administrator typically accesses and
manages the components on this tab.
The major nodes below the LAN node in this tab are the following:
• LAN Cloud
• Policies
• Pools
• Internal LAN Domains
SAN Tab
This tab contains the components related to SAN configuration, such as pin groups, VSANs, policies, and
pools. A storage administrator typically accesses and manages the components on this tab.
The major nodes below the SAN node in this tab are the following:
• SAN Cloud
• Policies
• Pools
VM Tab
This tab contains the components required to configure VM-FEX for servers with a VIC adapter. For example,
you use the components on this tab to configure the connection between Cisco UCS Manager and VMware
vCenter, to configure distributed virtual switches and port profiles, and to view the virtual machines hosted
on servers in the Cisco UCS domain.
The major node below the All node in this tab is the VMware node.
Admin Tab
This tab contains system-wide settings, such as user management and communication services, and troubleshooting
components, such as faults and events. The system administrator typically accesses and manages the components
on this tab.
The major nodes below the All node in this tab are the following:
• Faults, Events and Audit Log
• User Management
• Key Management
• Communication Management
• Stats Management
• Timezone Management
• Capability Catalog
Toolbar
The toolbar displays on the right side of Cisco UCS Manager GUI above the Work pane. You can use the
menu buttons in the toolbar to perform common actions, including the following actions:
• Navigate between previously viewed items in the Work pane
• Create elements for the Cisco UCS domain
• Set options for Cisco UCS Manager GUI
• Access online help for Cisco UCS Manager GUI
Work Pane
The Work pane displays on the right side of Cisco UCS Manager GUI. This pane displays details about the
component selected in the Navigation pane.
The Work pane includes the following elements:
• A navigation bar that displays the path from the main node of the tab in the Navigation pane to the
selected element. You can click any component in this path to display that component in the Work pane.
• A content area that displays tabs with information related to the component selected in the Navigation
pane. The tabs displayed in the content area depend upon the selected component. You can use these
tabs to view information about the component, create components, modify properties of the component,
and examine a selected object.
Status Bar
The status bar displays across the bottom of Cisco UCS Manager GUI. The status bar provides information
about the state of the application.
On the left, the status bar displays the following information about your current session in Cisco UCS Manager
GUI:
• A lock icon that indicates the protocol you used to log in. If the icon is locked, you connected with
HTTPS and if the icon is unlocked, you connected with HTTP.
• The username you used to log in.
• The IP address of the server where you logged in.
On the right, the status bar displays the system time.
Table Customization
Cisco UCS Manager GUI enables you to customize the tables on each tab. You can change the type of content
that you view and filter the content.
Table Customization Menu Button
This menu button in the upper right of every table enables you to control and customize your view of the
table. The drop-down menu for this button includes the following options:
• Column names: The menu contains an entry for each column in the table. Click a column name to display
or hide the column.
• Horizontal Scroll: If selected, adds a horizontal scroll bar to the table. If not selected, when you widen
one of the columns, all columns to the right narrow and do not scroll.
• Pack All Columns: Resizes all columns to their default width.
• Pack Selected Column: Resizes only the selected column to its default width.
Table Content Filtering
The Filter button above each table enables you to filter the content in the table according to the criteria that
you set in the Filter dialog box. The dialog box includes the following filtering options:
• Disable option: No filtering criteria is used on the content of the column. This is the default setting.
• Equal option: Displays only that content in the column which exactly matches the value specified.
• Not Equal option: Displays only that content in the column which does not exactly match the value
specified.
• Less Than option: Displays only that content in the column which is less than the value specified.
• Less Than or Equal option: Displays only that content in the column which is less than or equal to the
value specified.
• Greater Than option: Displays only that content in the column which is greater than the value specified.
• Greater Than or Equal option: Displays only that content in the column which is greater than or equal
to the value specified.
The criteria you enter can include one of the following wildcards:
• _ (underscore) or ? (question mark): replaces a single character
• % (percent sign) or * (asterisk): replaces any sequence of characters
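The wildcard behavior described above can be approximated locally with a short sketch. The Python below is illustrative only; it translates the filter wildcards into a regular expression and is not how Cisco UCS Manager GUI implements filtering.

import re

# _ or ? matches one character; % or * matches any sequence of characters.
def wildcard_to_regex(pattern):
    mapping = {"_": ".", "?": ".", "%": ".*", "*": ".*"}
    return "^" + "".join(mapping.get(ch, re.escape(ch)) for ch in pattern) + "$"

print(bool(re.match(wildcard_to_regex("eth?"), "eth1")))       # True
print(bool(re.match(wildcard_to_regex("vnic-%"), "vnic-web"))) # True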
LAN Uplinks Manager
The LAN Uplinks Manager provides a single interface where you can configure the connections between
Cisco UCS and the LAN. You can use the LAN Uplinks Manager to create and configure the following:
• Ethernet switching mode
• Uplink Ethernet ports
• Port channels
• LAN pin groups
• Named VLANs
• Server ports
• QoS system classes
Some of the configuration that you can do in the LAN Uplinks Manager can also be done in nodes on other
tabs, such as the Equipment tab or the LAN tab.
Internal Fabric Manager
The Internal Fabric Manager provides a single interface where you can configure server ports for a fabric
interconnect in a Cisco UCS domain. The Internal Fabric Manager is accessible from the General tab for that
fabric interconnect.
Some of the configuration that you can do in the Internal Fabric Manager can also be done in nodes on the
Equipment tab, on the LAN tab, or in the LAN Uplinks Manager.
Hybrid Display
For each chassis in a Cisco UCS domain, Cisco UCS Manager GUI provides a hybrid display that includes
both physical components and connections between the chassis and the fabric interconnects.
This tab displays detailed information about the connections between the selected chassis and the fabric
interconnects. It has an icon for each of the following:
• Each fabric interconnect in the system
• The I/O module (IOM) in the selected chassis, which is shown as an independent unit to make the
connection paths easier to see
• The selected chassis showing the servers and PSUs
The lines between the icons represent the connections between the following:
• The DCE interface on each server and the associated server port on the IOM. These connections are
created by Cisco and cannot be changed.
• The server port on the IOM and the associated port on the fabric interconnect. You can change these
connections if desired.
You can mouse over the icons and lines to view tooltips identifying each component or connection, and you
can double-click any component to view properties for that component.
If there is a fault associated with the component or any of its subcomponents, Cisco UCS Manager GUI
displays a fault icon on top of the appropriate component. If there are multiple fault messages, Cisco UCS
Manager GUI displays the icon associated with the most serious fault message in the system.
Logging in to Cisco UCS Manager GUI through HTTPS
The default HTTPS web link for Cisco UCS Manager GUI is https://UCSManager_IP, where
UCSManager_IP represents the IP address assigned to Cisco UCS Manager. This IP address can be one of
the following:
• Cluster configuration: UCSManager_IP represents the virtual or cluster IP address assigned to Cisco
UCS Manager. Do not use the IP addresses assigned to the management port on the fabric interconnects.
• Standalone configuration: UCSManager_IP represents the IP address for the management port on the
fabric interconnect.
Procedure
Step 1  In your web browser, type the Cisco UCS Manager GUI web link or select the bookmark in your browser.
Step 2  If a Security Alert dialog box appears, click Yes to accept the security certificate and continue.
Step 3  In the Cisco UCS Manager launch page, click Launch UCS Manager.
Depending upon the web browser you use to log in, you may be prompted to download or save the .JNLP
file.
Step 4  If Cisco UCS Manager displays a pre-login banner, review the message and click OK to close the dialog box.
Step 5  If a Security dialog box displays, do the following:
a) (Optional) Check the check box to accept all content from Cisco.
b) Click Yes to accept the certificate and continue.
Step 6  In the Login dialog box, do the following:
a) Enter your username and password.
b) If your Cisco UCS implementation includes multiple domains, select the appropriate domain from the
Domain drop-down list.
c) Click Login.
Logging in to Cisco UCS Manager GUI through HTTP
The default HTTP web link for Cisco UCS Manager GUI is http://UCSManager_IP , where
UCSManager_IP represents the IP address assigned to Cisco UCS Manager. This IP address can be one of
the following:
• Cluster configuration: UCSManager_IP represents the virtual or cluster IP address assigned to
Cisco UCS Manager. Do not use the IP addresses assigned to the management port on the fabric
interconnects.
• Standalone configuration: UCSManager_IP represents the IP address for the management port on
the fabric interconnect.
Procedure
Step 1  In your web browser, type the Cisco UCS Manager GUI web link or select the bookmark in your browser.
Step 2  If Cisco UCS Manager displays a pre-login banner, review the message and click OK to close the dialog box.
Step 3  In the Cisco UCS Manager launch page, click Launch UCS Manager.
Depending upon the web browser you use to log in, you may be prompted to download or save the .JNLP
file.
Step 4  In the Login dialog box, do the following:
a) Enter your username and password.
b) If your Cisco UCS implementation includes multiple domains, select the appropriate domain from the
Domain drop-down list.
c) Click Login.
Logging Off Cisco UCS Manager GUI
Procedure
Step 1  In Cisco UCS Manager GUI, click Exit in the upper right.
Cisco UCS Manager GUI blurs on your screen to indicate that you cannot use it and displays the Exit dialog
box.
Step 2  From the drop-down list, select one of the following:
• Exit to log out and shut down Cisco UCS Manager GUI.
• Log Off to log out of Cisco UCS Manager GUI and log in as a different user.
Step 3  Click OK.
Web Session Limits
Web session limits are used by Cisco UCS Manager to restrict the number of web sessions (both GUI and
XML) that are permitted to access the system at any one time.
By default, the number of concurrent web sessions allowed by Cisco UCS Manager is set to the maximum
value: 256.
Setting the Web Session Limit for Cisco UCS Manager
Procedure
Step 1  In the Navigation pane, click the Admin tab.
Step 2  On the Admin tab, expand All > Communication Management > Communication Services.
Step 3  In the Work pane, click the Communication Services tab.
Step 4  In the Web Session Limits area, complete the following fields:
• Maximum Sessions Per User field: The maximum number of concurrent HTTP and HTTPS sessions
allowed for each user. Enter an integer between 1 and 256.
• Maximum Sessions field: The maximum number of concurrent HTTP and HTTPS sessions allowed
for all users within the system.
Step 5  Click Save Changes.
Pre-Login Banner
With a pre-login banner, when a user logs in to Cisco UCS Manager GUI, Cisco UCS Manager displays the
banner text in a dialog box and waits until the user dismisses that dialog box before it prompts for the username
and password. When a user logs in to Cisco UCS Manager CLI, Cisco UCS Manager displays the banner text
and waits for the user to dismiss it before it prompts for the password. It then repeats the banner text above
the copyright block that it displays to the user.
Creating the Pre-Login Banner
If the Pre-Login Banner area does not appear on the Banners tab, Cisco UCS Manager does not display a
pre-login banner when users log in. If the Pre-Login Banner area does appear, you cannot create a second
pre-login banner. You can only delete or modify the existing banner.
Procedure
Step 1  In the Navigation pane, click the Admin tab.
Step 2  On the Admin tab, expand All > User Management.
Step 3  Click the User Services node.
Step 4  In the Work pane, click the Banners tab.
Step 5  In the Actions area, click Create Pre-Login Banner.
Step 6  In the Create Pre-Login Banner dialog box, click in the text field and enter the message that you want users
to see when they log in to Cisco UCS Manager.
You can enter any standard ASCII character in this field.
Step 7  Click OK.
Modifying the Pre-Login Banner
Procedure
Step 1   In the Navigation pane, click the Admin tab.
Step 2   On the Admin tab, expand All > User Management.
Step 3   Click the User Services node.
Step 4   In the Work pane, click the Banners tab.
Step 5   Click in the text field in the Pre-Login Banner area and make the necessary changes to the text.
You can enter any standard ASCII character in this field.
Step 6   If the Cisco UCS Manager GUI displays a confirmation dialog box, click Yes.
Cisco UCS Manager GUI Properties
Configuring the Cisco UCS Manager GUI Session and Log Properties
These properties determine how Cisco UCS Manager GUI reacts to session interruptions and inactivity, and they configure Java message logging for Cisco UCS Manager GUI.
Procedure
Step 1   In the toolbar, click Options to open the Properties dialog box.
Step 2   In the right pane, click Session.
Step 3   In the Session page, update one or more of the following fields:
Automatically Reconnect check box: If checked, the system tries to reconnect if communication between the GUI and the fabric interconnect is interrupted.
GUI Inactivity Time Out drop-down list: The number of minutes the system should wait before ending an inactive session. To specify that the session should not time out regardless of the length of inactivity, choose NEVER.
The Session page also includes fields that control the following behavior:
• If the Automatically Reconnect check box is checked, the number of seconds the system waits before trying to reconnect.
• The amount of Java message logging done for Cisco UCS Manager GUI on the user's local machine. This can be one of the following:
• All: All relevant Java information for the GUI is logged. There can be a maximum of 10 log files, each of which can be a maximum of 10 MB in size. Once the final file has been filled, Cisco UCS Manager deletes the oldest log file and starts a new one.
• Off: Cisco UCS Manager does not create any Java log files for the GUI.
Note: The log file location is determined by the Java runtime settings on the user's local machine. For more information, see the documentation for the version of Java that you are using.
• The maximum size, in megabytes, that Cisco UCS Manager allocates to any of the logs it saves for this Cisco UCS domain.
Configuring Properties for Confirmation Messages
These properties determine whether or not Cisco UCS Manager GUI displays a confirmation message after configuration changes and other operations.
Procedure
Step 1   In the toolbar, click Options to open the Properties dialog box.
Step 2   In the right pane, click Confirmation Messages.
Step 3   In the Confirmation Messages page, complete the following fields:
Confirm Deletion check box: If checked, Cisco UCS Manager GUI requires that you confirm all delete operations.
Confirm Discard Changes check box: If checked, Cisco UCS Manager GUI requires that you confirm before the system discards any changes.
Confirm Modification/Creation check box: If checked, Cisco UCS Manager GUI requires that you confirm before the system modifies or creates objects.
Step 4   Click OK.
The Properties dialog box also includes the following fields, which control how Cisco UCS Manager GUI handles navigation history, label alignment, drag operations, and wizard transitions:
Max History Size field: The number of tabs the system should store in memory for use with the Forward and Back toolbar buttons.
Right Aligned Labels check box: If checked, all labels are right-aligned with respect to one another. Otherwise all labels are left-aligned.
Show Image while Dragging check box: If checked, when you drag an object from one place to another, the GUI displays a transparent version of that object until you drop the object in its new location.
Wizard Transition Effects check box: If checked, when you go to a new page in a wizard the first page fades out and the new page fades in. Otherwise the page changes without a visible transition.
Determining the Acceptable Range of Values for a Field
Some properties have a restricted range of values that you can enter. You can use this procedure to determine the acceptable range of values for fields in a dialog box, window, or tab. You cannot use this procedure to determine the acceptable range of values for properties listed in a table or tree.
Procedure
Step 1   Place your cursor in the field for which you want to check the range to give focus to that field.
Step 2   Press Alt + Shift + R.
Cisco UCS Manager GUI displays the acceptable range of values for a few seconds. The range disappears if you click anywhere on the screen.
Determining Where a Policy Is Used
You can use this procedure to determine which service profiles and service profile templates are associated with the selected policy.
Procedure
Step 1   In the Navigation pane, click the policy whose usage you want to view.
Step 2   In the Work pane, click the General tab.
Step 3   In the Actions area, click Show Policy Usage.
Cisco UCS Manager GUI displays the Service Profiles/Templates dialog box that shows the associated service profiles and service profile templates.
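If you need the same information from a script, one hedged approach is to query the service profile objects with the Cisco UCS Python SDK and check which of them reference the policy by name. In the sketch below, LsServer is the class for service profiles and templates, but the boot_policy_name attribute is an assumption chosen to illustrate one policy type (boot policies); other policy types are referenced through different attributes, and the address, credentials, and policy name are placeholders.

# Hedged sketch: list service profiles and templates that reference a given
# boot policy by name. "boot_policy_name" is an assumed attribute; getattr()
# with a default keeps the loop safe if the attribute name differs.
from ucsmsdk.ucshandle import UcsHandle

POLICY_NAME = "my-boot-policy"  # hypothetical policy name

handle = UcsHandle("192.0.2.10", "admin", "password")  # placeholders
if handle.login():
    for sp in handle.query_classid("LsServer"):  # service profiles and templates
        if getattr(sp, "boot_policy_name", "") == POLICY_NAME:
            print(sp.dn, sp.type)
    handle.logout()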
Determining Where a Pool Is Used
You can use this procedure to determine which service profiles and service profile templates are associated with the selected pool.
Procedure
Step 1   In the Navigation pane, click the pool whose usage you want to view.
Step 2   In the Work pane, click the General tab.
Step 3   In the Actions area, click Show Pool Usage.
Cisco UCS Manager GUI displays the Service Profiles/Templates dialog box that shows the associated service profiles and service profile templates.
Copying the XML
To assist you in developing scripts or creating applications with the XML API for Cisco UCS, Cisco UCS Manager GUI includes an option to copy the XML used to create an object in Cisco UCS Manager. This option is available on the right-click menu for most object nodes in the Navigation pane, such as the Port Profiles node or the node for a specific service profile.
Procedure
Step 1   In the Navigation pane, navigate to the object for which you want to copy the XML.
Step 2   Right-click on that object and choose Copy XML.
Step 3   Paste the XML into an XML editor, Notepad, or another application.
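The copied XML describes the object's configuration, so a script can push an edited copy back through the XML API by wrapping it in a configConfMo request. The sketch below is a hedged, minimal example: the address and credentials are placeholders, COPIED_XML stands for whatever you pasted from the Copy XML menu item, and you would normally edit identifying attributes such as the name or dn before posting.

# Sketch: post XML copied from Cisco UCS Manager GUI back to the XML API
# inside a configConfMo request. Placeholders: address, credentials, and
# COPIED_XML (paste and edit the output of the Copy XML menu item).
import re
import requests

UCSM = "https://192.0.2.10/nuova"
COPIED_XML = "<!-- paste the copied object XML here -->"

login = requests.post(UCSM, data='<aaaLogin inName="admin" inPassword="password" />', verify=False)
cookie = re.search(r'outCookie="([^"]+)"', login.text).group(1)

try:
    # The dn attribute is left empty here on the assumption that the pasted
    # object XML carries its own dn; set it to the object's dn if required.
    body = ('<configConfMo cookie="%s" dn="" inHierarchical="false">'
            '<inConfig>%s</inConfig></configConfMo>') % (cookie, COPIED_XML)
    print(requests.post(UCSM, data=body, verify=False).text)
finally:
    requests.post(UCSM, data='<aaaLogout inCookie="%s" />' % cookie, verify=False)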