HP Integrity Superdome 16-socket, Integrity Superdome 32-socket, Integrity Superdome 64-socket Specification

QuickSpecs
HP Integrity Superdome Servers: 16-socket, 32-socket, and 64-socket
Overview
DA - 11717 North America — Version 15 — January 3, 2005
At A Glance
The latest release of Superdome, HP Integrity Superdome, supports the new and improved sx1000 chip set. HP Integrity Superdome supports the following processors:
Itanium 2 1.5-GHz and 1.6-GHz processors
PA-8800
HP mx2 processor module, based on two Itanium 2 processors
HP Integrity Superdome cannot support both PA-8800 and Itanium processors in the same system, even if they are on different partitions. However, it is possible to have the Itanium 2 1.5-GHz processor, the Itanium 2 1.6-GHz processor, and the HP mx2 processor module in the same system, on different partitions.
Throughout the rest of this document, HP Integrity Superdome with Itanium 2 1.5-GHz processors, Itanium 2 1.6-GHz processors, or mx2 processor modules is referred to simply as "Superdome".
Superdome with Itanium processors showcases HP's commitment to delivering a 64-socket Itanium server and superior investment protection. It is the dawn of a new era in high-end computing with the emergence of commodity-based hardware.
Superdome supports a multi-OS environment. Currently, HP-UX, Windows Server 2003, Red Hat RHEL AS 3, and SUSE SLES 9 are shipping with Integrity Superdome. Customers can order any combination of HP-UX 11i v2; Windows Server 2003, Datacenter Edition; or RHEL AS 3, running in separate hard partitions.
The multi-OS environment offered by Superdome is listed below.
HP-UX 11i version 2
Improved performance over PA-8700
Investment protection through upgrades from existing Superdomes to next-generation Itanium 2 processors
Windows Server 2003, Datacenter Edition for Itanium 2
Extension of industry-standard computing with the Windows operating system further into the enterprise data center
Increased performance and scalability over 32-bit implementations
Lower cost of ownership versus proprietary operating system solutions
Ideal for scale-up database opportunities, such as SQL Server 2000 (64-bit), Enterprise Edition
Ideal for database consolidation opportunities, such as consolidation of legacy 32-bit versions of SQL Server 2000 to SQL Server 2000 (64-bit)
Red Hat RHEL AS 3 and SUSE SLES 9
Extension of industry-standard computing with Linux further into the enterprise data center
Lower cost of ownership
Ideal for server consolidation opportunities
Not supported on Superdome with mx2 processor modules
Superdome Service Solutions
Superdome continues to provide the same positive Total Customer Experience via industry-leading HP Services as existing Superdome servers. The HP Services component of Superdome is as follows:
HP customers have consistently achieved higher levels of satisfaction when key components of their IT infrastructures are implemented using the Solution Life Cycle. The Solution Life Cycle focuses on rapid productivity and maximum availability by examining customers' specific needs at each of five distinct phases (plan, design, integrate, install, and manage) and then designing their Superdome solution around those needs. HP offers three preconfigured service solutions for Superdome that provide customers with a choice of lifecycle services to address their individual business requirements.
Foundation Service Solution: This solution reduces design problems, speeds time to production, and lays the groundwork for long-term system reliability by combining pre-installation preparation and integration services, hands-on training, and reactive support. It includes HP Support Plus 24, which provides an integrated set of 24x7 hardware and software services as well as software updates for selected HP and third-party products.
Proactive Service Solution: This solution builds on the Foundation Service Solution by enhancing the management phase of the Solution Life Cycle with HP Proactive 24 to complement your internal IT resources with proactive assistance and reactive support. It helps reduce design problems, speed time to production, and lay the groundwork for long-term system reliability by combining pre-installation preparation and integration services with hands-on staff training and transition assistance. With HP Proactive 24 included in your solution, you optimize the effectiveness of your IT environment with access to an HP-certified team of experts who can help you identify potential areas of improvement in key IT processes and implement the changes needed to increase availability.
Critical Service Solution: This solution maintains mission-critical environments by combining proactive and reactive support services to ensure maximum IT availability and performance for companies that cannot tolerate downtime without serious business impact. It encompasses the full spectrum of deliverables across the Solution Life Cycle and is enhanced by HP Critical Service as the core of the management phase. This total solution provides maximum system availability and reduces design problems, speeds time to production, and lays the groundwork for long-term system reliability by combining pre-installation preparation and integration services, hands-on training, transition assistance, remote monitoring, and mission-critical support. As part of HP Critical Service, a team of HP-certified experts assists with the transition process, teaches your staff how to optimize system performance, and monitors your system closely so potential problems are identified before they can affect availability.
HP's Mission Critical Partnership: This service offering provides customers the opportunity to create a custom agreement with Hewlett-Packard to achieve the level of service needed to meet their business requirements. This level of service can help you reduce the business risk of a complex IT infrastructure by aligning IT service delivery to your business objectives, enabling a high rate of business change, and continuously improving service levels. HP will work with you proactively to eliminate downtime and improve IT management processes.
Service Solution Enhancements: HP's full portfolio of services is available to enhance your Superdome Service Solution to address your specific business needs. Services spanning multiple operating systems, as well as other platforms such as storage and networks, can be combined to complement your total solution.
Minimum/Maximum Configurations for Superdome with Intel Itanium 2 Processors (1.5-GHz and 1.6-GHz)
Superdome 16-socket
  HP-UX 11i version 2: Minimum: 2 CPUs, 2 GB memory, 1 cell board, 1 PCI-X chassis. Maximum (in one partition): 16 CPUs, 256 GB memory, 4 cell boards, 4 PCI-X chassis. 4 nPars max.
  Windows Server 2003 Datacenter Edition: Minimum: 2 CPUs, 2 GB memory, 1 cell board, 1 PCI-X chassis. Maximum (in one partition): 16 CPUs, 256 GB memory, 4 cell boards, 4 PCI-X chassis. 4 nPars max.
  Red Hat RHEL AS 3 U3 & SUSE SLES 9: Minimum: 2 CPUs, 2 GB memory, 1 cell board, 1 PCI-X chassis.
  SUSE SLES 9: Maximum (in one partition): 16 CPUs, 256 GB memory, 4 cell boards, 2 PCI-X chassis. 4 nPars max.
  Red Hat RHEL AS 3: Maximum (in one partition): 8 CPUs, 128 GB memory, 2 cell boards, 2 PCI-X chassis. 4 nPars max.

Superdome 32-socket
  HP-UX 11i version 2: Minimum: 2 CPUs, 2 GB memory, 1 cell board, 1 PCI-X chassis. Maximum (in one partition): 32 CPUs, 512 GB memory, 8 cell boards, 8 PCI-X chassis. 8 nPars max; IOX required if more than 4 nPars.
  Windows Server 2003 Datacenter Edition: Minimum: 2 CPUs, 2 GB memory, 1 cell board, 1 PCI-X chassis. Maximum (in one partition): 32 CPUs, 512 GB memory, 8 cell boards, 8 PCI-X chassis. 8 nPars max; IOX required if more than 4 nPars.
  Red Hat RHEL AS 3 U3 & SUSE SLES 9: Minimum: 2 CPUs, 2 GB memory, 1 cell board, 1 PCI-X chassis.
  SUSE SLES 9: Maximum (in one partition): 16 CPUs, 256 GB memory, 4 cell boards, 2 PCI-X chassis. 8 nPars max; IOX required if more than 4 nPars.
  Red Hat RHEL AS 3: Maximum (in one partition): 8 CPUs, 128 GB memory, 2 cell boards, 2 PCI-X chassis. 8 nPars max; IOX required if more than 4 nPars.

Superdome 64-socket
  HP-UX 11i version 2: Minimum: 6 CPUs, 6 GB memory, 3 cell boards, 1 PCI-X chassis. Maximum (in one partition): 64 CPUs, 1024 GB memory, 16 cell boards, 16 PCI-X chassis. 16 nPars max; IOX required if more than 8 nPars.
  Windows Server 2003 Datacenter Edition: Minimum: 6 CPUs, 6 GB memory, 3 cell boards, 1 PCI-X chassis. Maximum: 64 CPUs, 1024 GB total memory (512 GB max per partition), 16 cell boards, 16 PCI-X chassis. 16 nPars max; IOX required if more than 8 nPars.
  Red Hat RHEL AS 3 U3 & SUSE SLES 9: Minimum: 6 CPUs, 6 GB memory, 3 cell boards, 1 PCI-X chassis.
  SUSE SLES 9: Maximum (in one partition): 16 CPUs, 256 GB memory, 4 cell boards, 2 PCI-X chassis. 16 nPars max; IOX required if more than 8 nPars.
  Red Hat RHEL AS 3: Maximum (in one partition): 8 CPUs, 128 GB memory, 2 cell boards, 2 PCI-X chassis. 16 nPars max; IOX required if more than 8 nPars.
Standard Hardware Features
Redundant power supplies
Redundant fans
Factory integration of memory and I/O cards
Installation Guide, Operator's Guide, and Architecture Manual
HP site planning and installation
One-year warranty with same-business-day on-site service response
Minimum/Maximum Configurations for Superdome with mx2 Processor Modules
Superdome 16-socket
  HP-UX 11i version 2: Minimum: 2 CPUs, 2 GB memory, 1 cell board, 1 PCI-X chassis. Maximum: 32 CPUs, 256 GB memory, 4 cell boards, 4 PCI-X chassis. 4 nPars max.
  Windows Server 2003 Datacenter Edition: Minimum: 2 CPUs, 2 GB memory, 1 cell board, 1 PCI-X chassis. Maximum: 32 CPUs, 256 GB memory, 4 cell boards, 4 PCI-X chassis. 4 nPars max.

Superdome 32-socket
  HP-UX 11i version 2: Minimum: 2 CPUs, 2 GB memory, 1 cell board, 1 PCI-X chassis. Maximum: 64 CPUs, 512 GB memory, 8 cell boards, 8 PCI-X chassis. 8 nPars max; IOX required if more than 4 nPars.
  Windows Server 2003 Datacenter Edition: Minimum: 2 CPUs, 2 GB memory, 1 cell board, 1 PCI-X chassis. Maximum: 64 CPUs, 512 GB memory, 8 cell boards, 8 PCI-X chassis. 8 nPars max; IOX required if more than 4 nPars.

Superdome 64-socket
  HP-UX 11i version 2: Minimum: 6 CPUs, 6 GB memory, 3 cell boards, 1 PCI-X chassis. Maximum: 128 CPUs (64 CPUs max per partition), 1024 GB memory, 16 cell boards, 16 PCI-X chassis. 16 nPars max; IOX required if more than 8 nPars.
  Windows Server 2003 Datacenter Edition: Minimum: 6 CPUs, 6 GB memory, 3 cell boards, 1 PCI-X chassis. Maximum: 128 CPUs (64 CPUs max per partition), 1024 GB memory, 16 cell boards, 16 PCI-X chassis. 16 nPars max; IOX required if more than 8 nPars.
Standard Hardware Features
Redundant power supplies
Redundant fans
Factory integration of memory and I/O cards
Installation Guide, Operator's Guide, and Architecture Manual
HP site planning and installation
One-year warranty with same-business-day on-site service response
There are three basic building blocks in the Superdome system architecture: the cell, the crossbar backplane and the PCI-X based I/O subsystem.
Cabinets
Starting with the sx1000 chip set, Superdome servers are released in the graphite color. A Superdome system consists of up to four different types of cabinet assemblies:
One Superdome left cabinet. The Superdome cabinets contain all of the processors, memory, and core devices of the system. They also house most (usually all) of the system's PCI-X cards.
No more than one Superdome right cabinet (Superdome 64-socket systems only). Systems may include both left and right cabinet assemblies, containing a left or right backplane respectively.
One or more HP Rack System/E cabinets. These 19-inch rack cabinets hold the system peripheral devices, such as disk drives.
Optionally, one or more I/O expansion cabinets (Rack System/E). An I/O expansion cabinet is required when a customer requires more PCI-X cards than can be accommodated in the Superdome cabinets.
Superdome cabinets are serviced from the front and rear of the cabinet only. This enables customers to arrange the cabinets of their Superdome system in the traditional row fashion found in most computer rooms. The width of the cabinet accommodates moving it through common doorways in the U.S. The intake air to the main (cell) card cage is filtered; the filter is removable for cleaning or replacement while the system is fully operational.
A status display is located on the outside of the front and rear doors of each cabinet, so the customer and field engineers can determine the basic status of each cabinet without opening any cabinet doors.
Superdome 16-socket and Superdome 32-socket systems are available in single cabinets. Superdome 64-socket systems are available in dual cabinets.
Each cabinet may contain a specific number of cell boards (consisting of CPUs and memory) and I/O. See the following sections for configuration rules pertaining to each cabinet.
Cells (CPUs and Memory)
A cell, or cell board, is the basic building block of a Superdome system. It is a symmetric multiprocessor (SMP) containing up to 4 processor modules and up to 16 GB of main memory using 512-MB DIMMs, or up to 32 GB of main memory using 1-GB DIMMs. It is also possible to mix 512-MB and 1-GB DIMMs on the same cell board. A connection to a 12-slot PCI-X card cage is optional for each cell.
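The per-cell memory maximums quoted above follow directly from the DIMM count; a quick sketch (Python) of the arithmetic:

```python
# A Superdome cell board holds up to 32 DIMMs (installed in 4-DIMM increments).
DIMMS_PER_CELL = 32

def cell_memory_gb(dimm_size_mb: int) -> int:
    """Maximum main memory per cell board for a uniform DIMM size."""
    return DIMMS_PER_CELL * dimm_size_mb // 1024

print(cell_memory_gb(512))   # 512-MB DIMMs -> 16 GB per cell
print(cell_memory_gb(1024))  # 1-GB DIMMs   -> 32 GB per cell
```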
The Superdome cell boards shipped from the factory are offered with 2 sockets or 4 sockets. These cell boards differ from those used in the previous PA-RISC releases of Superdome.
The Superdome cell board contains:
Itanium 2 1.5-GHz or 1.6-GHz CPUs (up to 4 processor modules for a total of 4 CPUs), or mx2 dual-processor modules (up to 4 modules for a total of 8 CPUs)
Cell controller ASIC (application-specific integrated circuit)
Main memory DIMMs (up to 32 DIMMs per board in 4-DIMM increments, using 512-MB, 1-GB, or 2-GB DIMMs, or a combination)
Voltage Regulator Modules (VRMs)
Data buses
Optional link to 12 PCI-X I/O slots
Crossbar Backplane
Each crossbar backplane contains two sets of two crossbar chips that provide a non-blocking connection between eight cells and the other backplane. Each backplane cabinet can support up to eight cells or 32 processors (a Superdome 32-socket in a single cabinet). A backplane supporting four cells or 16 processors results in a Superdome 16-socket. Two backplanes can be linked together with flex cables to produce a system that supports up to 16 cells or 64 processors (a Superdome 64-socket in dual cabinets).
I/O Subsystem
Each I/O chassis provides twelve I/O slots. Superdome with Itanium 2 processors or mx2 processor modules supports I/O chassis with 12 PCI-X 133-capable slots: eight supported via single enhanced (2x) ropes (533 MB/s peak) and four supported via dual enhanced (4x) ropes (1066 MB/s peak). Note that if a PCI card is inserted into a PCI-X slot, the card cannot take advantage of the faster slot.
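Summing the per-slot peaks quoted above gives the theoretical aggregate for one 12-slot chassis. This is simple arithmetic on the stated figures, not a measured throughput:

```python
# Per chassis: 8 slots on single enhanced (2x) ropes at 533 MB/s peak each,
# and 4 slots on dual enhanced (4x) ropes at 1066 MB/s peak each.
ropes = [(8, 533), (4, 1066)]

peak_mb_per_s = sum(slots * rate for slots, rate in ropes)
print(peak_mb_per_s)  # 8528 MB/s aggregate peak per 12-slot I/O chassis
```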
Each Superdome cabinet supports a maximum of four I/O chassis. The optional I/O expansion cabinet can support up to six I/O chassis.
A 4-cell Superdome (16-socket) supports up to four I/O chassis for a maximum of 48 PCI-X slots.
An 8-cell Superdome (32-socket) supports up to eight I/O chassis for a maximum of 96 PCI-X slots. Four of these I/O chassis will reside in an I/O expansion cabinet.
A 16-cell Superdome (64-socket) supports up to sixteen I/O chassis for a maximum of 192 PCI-X slots. Eight of these I/O chassis will reside in two I/O expansion cabinets (either six chassis in one I/O expansion cabinet and two chassis in the other, or four chassis in each).
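The chassis placement above follows from two limits stated in this section: at most four I/O chassis per Superdome cabinet and at most six per I/O expansion cabinet. A small sketch (Python; the helper name is illustrative, not part of any HP tool) of how the counts split:

```python
CHASSIS_PER_MAIN_CABINET = 4  # max I/O chassis per Superdome cabinet
SLOTS_PER_CHASSIS = 12        # PCI-X slots per I/O chassis

def chassis_split(total_chassis: int, main_cabinets: int):
    """Return (chassis in main cabinets, chassis overflowing to IOX cabinets)."""
    in_main = min(total_chassis, main_cabinets * CHASSIS_PER_MAIN_CABINET)
    return in_main, total_chassis - in_main

# 16-socket: 4 chassis, 1 cabinet -> all in the main cabinet, 48 slots total.
print(chassis_split(4, 1), 4 * SLOTS_PER_CHASSIS)    # (4, 0) 48
# 32-socket: 8 chassis, 1 cabinet -> 4 overflow to an IOX cabinet, 96 slots.
print(chassis_split(8, 1), 8 * SLOTS_PER_CHASSIS)    # (4, 4) 96
# 64-socket: 16 chassis, 2 cabinets -> 8 overflow to IOX cabinets, 192 slots.
print(chassis_split(16, 2), 16 * SLOTS_PER_CHASSIS)  # (8, 8) 192
```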
Core I/O
The core I/O in Superdome provides the base set of I/O functions required by every Superdome partition. Each partition must have at least one core I/O card in order to boot. Multiple core I/O cards may be present within a partition (one core I/O card is supported per I/O backplane); however, only one may be active at a time. Core I/O uses the standard long-card PCI-X form factor but adds a second card cage connection to the I/O backplane for additional non-PCI-X signals (USB and utilities). This secondary connector does not impede the ability to support standard PCI-X cards in the core slot when a core I/O card is not installed.
Any I/O chassis can support a core I/O card, which is required for each independent partition. A system configured with 16 cells, each with its own I/O chassis and core I/O card, could support up to 16 independent partitions. Note that cells can be configured without I/O chassis attached, but I/O chassis cannot be configured in the system unless attached to a cell.
HP-UX Core I/O (A6865A)
The core I/O card's primary functions are:
Partitions (console support), including USB and RS-232 connections
10/100Base-T LAN (general purpose)
Other common functions, such as Ultra/Ultra2 SCSI, Fibre Channel, and Gigabit Ethernet, are not included on the core I/O card. These functions are, of course, supported as normal PCI-X add-in cards.
The unified 100Base-T Core LAN driver searches to verify whether there is a cable connection on the RJ-45 port or on the AUI port. If no cable connection is found on the RJ-45 port, there is a busy-wait pause of 150 ms while checking for an AUI connection. Installing the loopback connector (described below) in the RJ-45 port makes the driver treat an RJ-45 cable as connected, so it does not continue searching for an AUI connection, eliminating the 150-ms busy-wait state:
Product/Option Number    Description
A7108A                   RJ-45 Loopback Connector
0D1                      Factory integration, RJ-45 Loopback Connector
Windows Core I/O (A6865A and optional VGA/USB A6869A)
Windows Server 2003 does not support the 10/100 LAN on the A6865A core I/O card; a separate Gigabit Ethernet card, such as the A7061A, A7073A, A9899A, or A9900A, is required. The Graphics/USB card (A6869A) is optional and not required.
Linux Core I/O (A6865A)
The core I/O card's primary functions are:
Partitions (console support), including USB and RS-232 connections
10/100Base-T LAN (general purpose)
Other common functions, such as Ultra/Ultra2 SCSI, Fibre Channel, and Gigabit Ethernet, are not included on the core I/O card. These functions are supported as normal PCI-X add-in cards.
I/O Expansion Cabinet
The I/O expansion functionality is physically partitioned into four rack-mounted chassis: the I/O expansion utilities chassis (XUC), the I/O expansion rear display module (RDM), the I/O expansion power chassis (XPC), and the I/O chassis enclosure (ICE). Each ICE supports up to two 12-slot PCI-X chassis.
Field Racking
The only field-rackable I/O expansion components are the ICE and the 12-slot I/O chassis. Either component is field installed when the customer has ordered additional I/O capability for a previously installed I/O expansion cabinet.
No I/O expansion cabinet components will be delivered for field installation in a customer's existing rack other than a previously installed I/O expansion cabinet. The I/O expansion components were not designed to be installed in racks other than Rack System E; in other words, they are not designed for Rosebowl I, pre-merger Compaq, Rittal, or other third-party racks.
The I/O expansion cabinet is based on a modified HP Rack System E and all expansion components mount in the rack. Each component is designed to install independently in the rack. The Rack System E cabinet has been modified to allow I/O interface cables to route between the ICE and cell boards in the Superdome cabinet. I/O expansion components are not designed for installation behind a rack front door. The components are designed for use with the standard Rack System E perforated rear door.
I/O Chassis Enclosure (ICE)
The I/O chassis enclosure (ICE) provides expanded I/O capability for Superdome. Each ICE supports one or two 12-slot Superdome I/O chassis, for up to 24 PCI-X slots. The I/O chassis installation in the ICE puts the PCI-X cards in a horizontal position. The ICE is designed to mount in a Rack System E rack and consumes 9U of vertical rack space.
To provide online addition/replacement/deletion access to PCI or PCI-X cards and hot-swap access for I/O fans, all I/O chassis are mounted on a sliding shelf inside the ICE.
Four (N+1) I/O fans mounted in the rear of the ICE provide cooling for the chassis. Air is pulled through the front as well as the I/O chassis lid (on the side of the ICE) and exhausted out the rear. The I/O fan assembly is hot swappable. An LED on each I/O fan assembly indicates that the fan is operating.
Cabinet Height and Configuration Limitations
Although the individual I/O expansion cabinet components are designed for installation in any Rack System E cabinet, rack size limitations have been agreed upon. IOX cabinets ship in either the 1.6-meter (33U) or 1.96-meter (41U) cabinet. To allay service access concerns, the factory will not install IOX components higher than 1.6 meters from the floor. Open space in an IOX cabinet is available for peripheral installation.
Peripheral Support
All peripherals qualified for use with Superdome and/or for use in a Rack System E are supported in the I/O expansion cabinet as long as there is available space. Peripherals not connected to or associated with the Superdome system to which the I/O expansion cabinet is attached may be installed in the I/O expansion cabinet.
Server Support
No servers except those required for Superdome system management, such as the Superdome Support Management Station or ISEE, may be installed in an I/O expansion cabinet.
Peripherals installed in the I/O expansion cabinet cannot be powered by the XPC. Provisions for peripheral AC power must be provided by a PDU or other means.
Standalone I/O Expansion Cabinet
If an I/O expansion cabinet is ordered alone, its field installation can be ordered via option 750 in the ordering guide (option 950 for Platinum Channel partners).
DVD Solution
The DVD solution for Superdome requires the following components, per partition. External racks A4901A and A4902A must also be ordered with the DVD solution.
NOTE: One DVD and one DAT are required per nPartition.
Superdome DVD Solutions
Description                                                              Part Number       Option Number
PCI Ultra160 SCSI Adapter or PCI-X Dual-channel Ultra160 SCSI Adapter    A6828A or A6829A  0D1
PCI Ultra160 SCSI Adapter or PCI-X Dual-channel Ultra160 SCSI Adapter
(Windows Server 2003, Red Hat RHEL AS 3, SUSE SLES 9)                    A7059A or A7060A  0D1
Surestore Tape Array 5300                                                C7508AZ
HP DVD+RW Array Module (one per partition)                               Q1592A            0D1
  NOTE: The HP DVD-ROM Array Module for the TA5300 (C7499B) is replaced by the HP DVD+RW Array Module (Q1592A) to provide customers with read capabilities for loading software from CD or DVD, DVD write capabilities for small amounts of data (up to 4 GB), and offline hot-swap capabilities. Windows supports using and reading from this device, but Windows does not support DVD write with this device.
DDS-4/DAT40 (one per partition; DDS-5/DAT 72, product number Q1524A,
is also supported)                                                       C7497B            0D1
Jumper SCSI cable for DDS-4 (optional) [1]                               C2978B            0D1
SCSI cable, 1-meter multi-mode VH-HD68                                   C2361B            0D1
SCSI terminator                                                          C2364A            0D1

[1] A 0.5-meter HD HDTS68 cable is required if DDS-4 or DDS-5 is used.
Partitions
Superdome can be configured with hardware partitions (nPars). Because HP-UX 11i version 2, Windows Server 2003, SUSE SLES 9, and Red Hat RHEL AS 3 do not support virtual partitions (vPars), Superdome systems running these operating systems do not support vPars.
A hardware partition (nPar) consists of one or more cells that communicate coherently over a high-bandwidth, low-latency crossbar fabric. Individual processors on a single cell board cannot be separately partitioned. Hardware partitions are logically isolated from each other such that transactions in one partition are not visible to the other hardware partitions within the same complex.
Each nPar runs its own independent operating system. Different nPars may be executing the same or different revisions of an operating system, or they may be executing different operating systems altogether. Superdome supports the HP-UX 11i version 2; Windows Server 2003, Datacenter Edition; SUSE SLES 9; and Red Hat RHEL AS 3 operating systems. The diagram below shows a multi-OS environment within Superdome.
Each nPar has its own independent CPUs, memory, and I/O resources, consisting of the resources of the cells that make up the partition. Resources (cell boards and/or I/O chassis) may be removed from one nPar and added to another without physically manipulating the hardware, by using commands that are part of the System Management interface. The table below shows the maximum size of nPars per operating system:
                         HP-UX 11i version 2    Windows Server 2003    Red Hat RHEL AS 3    SUSE SLES 9
Maximum size of nPar     64 CPUs, 512 GB RAM    64 CPUs, 512 GB RAM    8 CPUs, 128 GB RAM   16 CPUs, 256 GB RAM
Maximum number of nPars  16                     16                     16                   16
For information on the types of I/O cards for networking and mass storage for each operating environment, please refer to the Technical Specifications section of this document. For licensing information for each operating system, please refer to the Ordering Guide.
Superdome supports static partitions: any nPar configuration change requires a reboot of that nPar. In a future HP-UX and Windows release, dynamic nPars will be supported; dynamic nPars allow configuration changes without a reboot of the nPar. Using the related capabilities of dynamic reconfiguration (i.e. online addition and online removal), new resources may be added to an nPar, and failed modules may be removed and replaced, while the nPar continues in operation. Adding new nPars to a Superdome system does not require a reboot of the system.
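On HP-UX, nPar inspection and reconfiguration are performed with the nPartition commands of the System Management interface. A brief sketch (parstatus and parmodify are real HP-UX commands, but the partition and cell numbers here are hypothetical; under static partitioning, a change to an active nPar takes effect only after that nPar reboots):

```
# List the cells, I/O chassis, and partition assignments in the complex
parstatus

# Add cell 4 to nPar 1; the change takes effect at nPar 1's next reboot
parmodify -p 1 -a 4

# Remove cell 6 from nPar 1
parmodify -p 1 -d 6
```

Because these commands reassign resources logically, no physical manipulation of cell boards is required.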
Windows Server 2003, Datacenter Edition for Itanium-based systems - HP Product Structure
Product Number: T2372A - Pre-loaded Windows Server 2003, Datacenter Edition for Itanium 2 systems
Options:
0D1 - factory integration
B01 - on-site installation at customer's location (must contact HP Services for a quote to install on-site)
ABA - English localization only (German, French, and Italian available only as a special order with extra lead time)
002 - 2-processor LTU
004 - 4-processor LTU
008 - 8-processor LTU
016 - 16-processor LTU
032 - 32-processor LTU
064 - 64-processor LTU
Configuration
Single System Reliability/Availability Features
Superdome's high availability offering is as follows:
NOTE: Online addition/replacement for cell boards is not currently supported and will be available in a future HP-UX release. Online addition/replacement of individual CPUs and memory DIMMs will never be supported.
CPU: The features below nearly eliminate the downtime associated with CPU cache errors, which are the majority of CPU errors. If a CPU exhibits excessive cache errors, HP-UX 11i version 2 will activate a spare (Instant Capacity) processor online to take its place. Furthermore, the CPU cache will automatically be repaired on reboot, eliminating the need for a service call.
Dynamic processor resilience with Instant Capacity enhancement
NOTE: Dynamic processor resilience and Instant Capacity are not supported when running Windows Server 2003, SUSE SLES 9, or Red Hat RHEL AS 3 in the partition.
CPU cache ECC protection and automatic de-allocation
CPU bus parity protection
Redundant DC conversion
Memory: The memory subsystem is designed so that a single SDRAM chip contributes no more than 1 bit to each ECC word. Therefore, the only way to get a multiple-bit memory error from SDRAMs is for more than one SDRAM to fail at the same time (a rare event). The system is also resilient to cosmic ray or alpha particle strikes, because these failure modes can only affect multiple bits within a single SDRAM. If a location in memory is "bad", the physical page is deallocated dynamically and replaced with a new page without any OS or application interruption. In addition, a combination of hardware and software scrubbing is used for memory. The software scrubber periodically reads and writes all memory locations, but it does not have access to "locked down" pages; a hardware memory scrubber is therefore provided for full coverage. Finally, data is protected by address/control parity protection.
Memory DRAM fault tolerance, i.e. recovery from a single SDRAM failure
DIMM address/control parity protection
Dynamic memory resilience, i.e. de-allocation of bad memory pages during operation
NOTE: Dynamic memory resilience is not supported when running Windows Server 2003, SUSE SLES 9, or Red Hat RHEL AS 3 in the partition.
Hardware and software memory scrubbing
Redundant DC conversion
Cell COD
NOTE: Cell COD is not supported when Windows Server 2003, SUSE SLES 9, or Red Hat RHEL AS 3 is running in the partition.
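The one-bit-per-SDRAM design works because ECC can correct any single-bit error within a word, so even a whole-chip failure still presents as a correctable single-bit fault. As a toy illustration of that property (a minimal Hamming(7,4) sketch, far narrower than the ECC words actually used on Superdome memory):

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(codeword):
    """Correct any single flipped bit and return the 4 data bits."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit; 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Flipping any single bit of a codeword, as one failed chip would, still decodes to the original data. Because Superdome maps each SDRAM to at most one bit per ECC word, only a simultaneous failure of two chips can produce an uncorrectable error.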
I/O: Partitions configured with dual-path I/O can be configured to have no shared components between them, preventing I/O cards from creating faults on other I/O paths. I/O cards in hardware partitions (nPars) are fully isolated from I/O cards in other hard partitions; it is not possible for an I/O failure to propagate across hard partitions. It is possible to dynamically repair and add I/O cards to a running partition.
Full single-wire error detection and correction on I/O links
I/O cards fully isolated from each other
Hardware for the prevention of silent corruption of data going to I/O
Online addition/replacement (OLAR) for individual I/O cards, some external peripherals, SUB/HUB
NOTE: Online addition/replacement (OLAR) is not supported when running Red Hat RHEL AS 3 or SUSE SLES 9 in the partition.
Parity-protected I/O paths
Dual-path I/O
Crossbar and Cabinet Infrastructure:
Recovery from a single crossbar wire failure
Localization of crossbar failures to the partitions using the link
Automatic de-allocation of a bad crossbar link upon boot
Redundant and hot-swap DC converters for the crossbar backplane
ASIC full burn-in and high-quality production process
Full test-to-failure and accelerated life testing on all critical assemblies
Strong emphasis on eliminating multiple-nPartition single points of failure (SPOFs)
System resilience to Management Processor (MP) failure
Isolation of nPartition failure
Protection of nPartitions against spurious interrupts or memory corruption
Hot-swap redundant fans (main and I/O) and power supplies (main and backplane power bricks)
Dual power source
Phone-Home capability
"HA Cluster-In-A-Box" Configuration: The "HA Cluster-In-A-Box" configuration allows for failover of users' applications between hardware partitions (nPars) on a single Superdome system. All providers of mission-critical solutions agree that failover between clustered systems provides the safest availability: no single points of failure (SPOFs) and no
ability to propagate failures between systems. However, HP supports the configuration of HA cluster software within a single system, to allow the highest possible availability for those users who need the benefits of a non-clustered solution, such as scalability and manageability. Superdome in this configuration provides the greatest single-system availability configurable. Since no single-system solution in the industry provides protection against every SPOF, users who need that kind of safety and HP's highest availability should use HA cluster software in a multiple-system HA configuration. Multiple HA software clusters can be configured within a single Superdome system (e.g. two 4-node clusters configured within a 32-socket Superdome system).
HP-UX: Serviceguard and Serviceguard Extension for RAC
Windows Server 2003: Microsoft Cluster Service (MSCS) - limited configurations supported
Red Hat Enterprise Linux AS 3 and SUSE SLES 9: Serviceguard for Linux
Multi-system High Availability
HP-UX 11i v2: Any Superdome partition that is protected by Serviceguard or Serviceguard Extension for RAC can be configured in a cluster with:
Another Superdome with like processors (i.e. both Superdomes must have Itanium 2 1.5-GHz processors, or both must have mx2 processor modules, in the partitions that are to be clustered together)
One or more standalone non-Superdome systems with like processors
Another partition with like processors within the same single-cabinet Superdome (refer to "HA Cluster-In-A-Box" above for specific requirements)
Separate partitions within the same Superdome system can be configured as part of different Serviceguard clusters.
Geographically Dispersed Cluster Configurations
The following Geographically Dispersed Cluster solutions fully support cluster configurations using Superdome systems. The existing configuration requirements for non-Superdome systems also apply to configurations that include Superdome systems. An additional recommendation, when possible, is to configure the nodes of the cluster in each datacenter within multiple cabinets, to allow for local failover in the case of a single-cabinet failure. Local failover is always preferred over a remote failover to the other datacenter. The importance of this recommendation increases as the geographic distance between datacenters increases.
Extended Campus Clusters (using Serviceguard with MirrorDisk/UX)
MetroCluster with Continuous Access XP
MetroCluster with EMC SRDF
ContinentalClusters
From an HA perspective, it is always better to have the nodes of an HA cluster spread across as many system cabinets (Superdome and non-Superdome systems) as possible. This approach maximizes redundancy to further reduce the chance of a failure causing down time.
Windows Server 2003, Datacenter Edition for Itanium 2 systems
Microsoft Cluster Service (MSCS) comes standard with Windows Server 2003. When a customer orders T2372A, Windows Server 2003, Datacenter Edition for Itanium 2 systems, it includes Microsoft Cluster Service; there is no additional SKU or charge for this Windows Server 2003 functionality. MSCS does not come preconfigured from HP's factories, however, so if your customer is interested in an MSCS configuration with Integrity Superdome, it is recommended that HP Services be engaged for a statement of work to configure MSCS on Integrity Superdome with HP storage.
HP storage is qualified and supported with MSCS clusters. HP storage arrays tested and qualified with MSCS clusters on Superdome are:
EVA 3000 v3.01
EVA 5000 v3.01
XP 48/512
XP 128/1024
XP12000
MSA1000
In addition, the following EMC storage arrays are supported with MSCS:
EMC CLARiiON FC4700
EMC CLARiiON CX200/CX400/CX600
EMC CLARiiON CX300/CX500/CX700
EMC Symmetrix 8000 Family
EMC DMX 800/1000/2000/3000
HP has qualified and supports the following capabilities with Integrity Superdome and MSCS:
Active/Active and Active/Passive MSCS clusters
Partition size: any size from 2 CPUs up to 64 CPUs can be in a cluster
From 2 up to 8 nodes in an MSCS cluster with Superdome
Cluster nodes can be within the same Superdome cabinet or between different Superdome cabinets co-located at the same site
MSCS clusters between partitions of similar CPU capacity (e.g. an 8-CPU partition clustered with an 8-CPU partition, or a 16-CPU partition with a 16-CPU partition)
MSCS clusters between partitions of dissimilar CPU capacity (e.g. a 16-CPU partition clustered with an 8-CPU partition, or a 32-CPU partition with a 16-CPU partition)
Please note, however, that you and the customer should work with HP Support to determine the appropriate configuration based on the availability level the customer needs. For example, if the customer wants a Service Level Agreement based on application availability, an exact mirror of the production partition (i.e. similar CPU capacity) might be set up for failover. In any event, please ensure that the proper amount of hardware resources is available on the target server for failover purposes.
HP Cluster Extension XP is a disaster recovery solution that extends local clusters over metropolitan-wide distances. It now supports MSCS on Windows Integrity with the XP48/XP512, XP128/XP1024, and XP12000.
For high availability purposes with MSCS, it is recommended (but not required) that customers also use HP SecurePath software (v4.0c-SP1) with HP storage for multipathing and load-balancing capabilities in conjunction with the AB232A, AB466A, or AB467A Fibre Channel HBA. Additionally, the NCU (NIC Configuration Utility), which HP provides on the SmartSetup CD that ships with Windows partitions, can be used in conjunction with MSCS clusters and the HP-supported Windows NIC cards.
Additionally, customers can see the completion of our certification for the Microsoft Windows Server Catalog at the following URL:
http://www.microsoft.com/windows/catalog/server/default.aspx?subID=22&xslt=cataloghome&pgn=catalogHome
Microsoft requires hardware vendors to complete this certification, also called "Windows logo-ing."
Below is the ordering information for Windows Server 2003 Datacenter Edition.
Windows Server 2003 Datacenter Edition (for Itanium 2-based HP Integrity Superdome only)
To order a system with Windows Server 2003 Datacenter Edition, you must order the T2372A product number with English or Japanese localization (option ABA or ABJ) and the appropriate license-to-use option code (002 through 064). Windows Server 2003 Datacenter Edition license options should be ordered to accommodate the total number of processors running Windows in the system. Order the fewest option numbers possible for the total license count. For example, if there are a total of 24 processors in the system running Datacenter, order options 016 and 008. Datacenter can be partitioned (nPars only) into any number of instances, but is limited to one OS image per nPar.
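Because the LTU options are the powers of two from 2 through 64 and each may be ordered at most once (see NOTE 2 below), the "fewest options" rule is simply the binary decomposition of the processor count. A hypothetical helper illustrating the breakdown (license_options is not an HP tool, just a sketch of the rule):

```python
def license_options(cpus):
    """Break a processor count into the fewest T2372A LTU option codes."""
    options = []
    for size in (64, 32, 16, 8, 4, 2):   # each option may be ordered at most once
        if cpus >= size:
            options.append(f"{size:03d}")
            cpus -= size
    if cpus:
        raise ValueError("count must be an even number of processors from 2 to 126")
    return options

print(license_options(24))   # ['016', '008'], matching the 24-processor example
```

The greedy pass over descending power-of-two denominations is exact here because each denomination is used at most once, which is precisely the binary representation of the count.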
NOTE 1: Windows Server 2003 Datacenter Edition must be installed by HP. If factory installation is selected, then a qualified Windows storage device must be ordered, along with an A9890A, AB466A, or AB467A card. There must be at least one boot drive for each partition. Two drives are required for RAID 1; one drive is required for RAID 0 or no RAID.
NOTE 2: You cannot order more than one of the same license option.
NOTE 3: Windows only supports a maximum of 64 processors per partition.
Microsoft® Windows® Server 2003 Datacenter Edition for Itanium 2 Systems   T2372A
English Localization                                                       ABA
Japanese Localization                                                      ABJ
Factory Integration                                                        0D1
Include with complete system                                               B01
2-processor license to use                                                 002
4-processor license to use                                                 004
8-processor license to use                                                 008
16-processor license to use                                                016
32-processor license to use                                                032
64-processor license to use                                                064
HP Standalone Operating System for field install                           501
(Windows Server 2003 Datacenter Edition standalone, for use when adding licenses to an existing server or replacing another operating system on a Datacenter-qualified server. 0D1, factory integration, is the default operating system installation method and should be used whenever possible. Must be ordered with the appropriate number of licenses (LTUs); for example, T2372A-016 for 16 Windows Server 2003 Datacenter licenses. There must be a Windows Server 2003 Datacenter Edition processor license for each processor in an Integrity server running Windows. When ordering T2372A-501, the appropriate on-site HP Services installation options will be added to the order.)
Network Adapter Teaming with Windows Server 2003
Windows Server 2003 supports the NCU (NIC Configuration Utility). This is the same NCU that is available to ProLiant customers; it has been ported to 64-bit Windows Server 2003 and is included on every SmartSetup CD that comes with a Windows partition on Integrity Superdome.
All ProLiant Ethernet network adapters support the following three types of teaming:
NFT - Network Fault Tolerance
TLB - Transmit Load Balancing
SLB - Switch-assisted Load Balancing
For Windows Server 2003, Datacenter edition on Superdome, there are four network interface cards that are currently supported (thus, these are the only cards that can be teamed with this NCU):
Windows/Linux PCI 1000Base-T Gigabit Ethernet Adapter (Copper)          A7061A
Windows/Linux PCI 1000Base-SX Gigabit Ethernet Adapter (Fiber)          A7073A
Windows/Linux PCI 2-port 1000Base-T Gigabit Ethernet Adapter (Copper)   A9900A
Windows/Linux PCI 2-port 1000Base-SX Gigabit Ethernet Adapter (Fiber)   A9899A
Also, note that teaming between the ports on a single A9900A or A9899A above is supported by the NCU.
Red Hat RHEL AS 3 and SUSE SLES 9
Support of Serviceguard and Cluster Extension on Red Hat RHEL AS 3 and SUSE SLES 9 should be available in late 2004 or early 2005.