
ibm.com/redbooks
IBM Eserver xSeries 455 Planning and Installation Guide
David Watts
Aubrey Applewhaite
Yonni Meza
Describes the technical details of the new 64-bit server
Covers supported Windows and Linux 64-bit operating systems
Helps you prepare for and perform an installation
IBM Eserver xSeries 455 Planning and Installation Guide
February 2004
International Technical Support Organization
SG24-7056-00
© Copyright International Business Machines Corporation 2004. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
First Edition (February 2004)
This edition applies to the IBM Eserver xSeries 455, machine type 8855.
Note: Before using this information and the product it supports, read the information in “Notices” on page vii.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .ix
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Chapter 1. Technical description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Comparing the x455 with the x450 . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Features not supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 The x455 base models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 Front and rear views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 System assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 IBM XA-64 chipset. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.1 The processor-board assembly. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.2 The memory-board assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.3 PCI-X board assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5 Remote Supervisor Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6 RXE-100 Expansion Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.7 Multinode scalable partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.7.1 RXE-100 connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.7.2 Multinode configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.7.3 Integrated I/O function support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.7.4 Error recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.8 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.9 Light path diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.10 Extensible Firmware Interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.10.1 GUID Partition Table disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.10.2 EFI System Partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.10.3 EFI and the reduced-legacy concept . . . . . . . . . . . . . . . . . . . . . . . 34
1.11 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.12 Enterprise X-Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.12.1 NUMA architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Chapter 2. Positioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.1 Migrating to a 64-bit platform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.2 Scalable system partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.2.1 RXE-100 Expansion Enclosure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.3 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.4 Server consolidation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.5 ServerProven® . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.6 IBM Datacenter Solution Program. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.7 Application solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.7.1 Database applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.7.2 Business logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.7.3 e-Business and security transactions . . . . . . . . . . . . . . . . . . . . . . . . 50
2.7.4 In-house developed compute-intensive applications . . . . . . . . . . . . 51
2.7.5 Science and technology industries . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.8 Why choose x455 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Chapter 3. Planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.1 System hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.1.1 Processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.1.2 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.1.3 PCI-X slot configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.1.4 Broadcom Gigabit Ethernet controller . . . . . . . . . . . . . . . . . . . . . . . . 64
3.2 Cabling and connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.2.1 SMP Expansion connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.2.2 Remote Expansion Enclosure connectivity . . . . . . . . . . . . . . . . . . . . 70
3.2.3 Remote Supervisor Adapter connectivity . . . . . . . . . . . . . . . . . . . . . 76
3.2.4 Serial connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.3 Storage considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.3.1 xSeries storage solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.3.2 Tape backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.4 Rack installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.5 Power considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.6 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.6.1 Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.7 IBM Director support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.8 Solution Assurance Review. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.8.1 Trigger Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.8.2 Electronic Solution Assurance Review (eSAR). . . . . . . . . . . . . . . . . 90
Chapter 4. Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.1 Using The Extensible Firmware Interface . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.1.1 EFI Firmware Boot Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.1.2 The EFI shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.1.3 Driver Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.1.4 Flash update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.1.5 Configuration/Setup utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.1.6 Diagnostic utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.1.7 Boot Option Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.2 Configuring scalable partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.2.1 Creating a scalable partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.2.2 Booting a scalable partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.2.3 Multiple Monitors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.2.4 Deleting a scalable partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.3 Installing Windows Server 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.3.1 Important information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.3.2 Preparing to install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.3.3 Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.3.4 Post-setup phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.4 Installing Linux. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
4.4.1 Linux IA-64 kernel overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4.4.2 Choosing a Linux distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.4.3 Installing SUSE LINUX Enterprise Server 8.0. . . . . . . . . . . . . . . . . 156
4.4.4 Installing Red Hat Enterprise Linux AS . . . . . . . . . . . . . . . . . . . . . . 160
4.4.5 Linux boot process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.4.6 Information about the installed system . . . . . . . . . . . . . . . . . . . . . . 163
4.4.7 Using the serial port for the Linux console . . . . . . . . . . . . . . . . . . . 171
4.4.8 RXE-100 Expansion Enclosure. . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
4.4.9 Upgrading drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Chapter 5. Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
5.1 IBM Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.1.1 Scalable Systems Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5.2 The Remote Supervisor Adapter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5.2.1 Connecting via a Web browser . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
5.2.2 Connecting via the ASM interconnect . . . . . . . . . . . . . . . . . . . . . . . 184
5.2.3 Installing the device driver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.2.4 Configuring the remote control password . . . . . . . . . . . . . . . . . . . . 186
5.3 Management using the Remote Supervisor Adapter . . . . . . . . . . . . . . . 187
5.3.1 Configuring which alerts to monitor . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.3.2 Configuring SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.3.3 Sending alerts directly to IBM Director . . . . . . . . . . . . . . . . . . . . . . 191
5.3.4 Creating a test event action plan in IBM Director . . . . . . . . . . . . . . 193
5.4 Windows System Resource Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.4.1 WSRM description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.4.2 WSRM features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
5.4.3 WSRM in the x455 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
Chipkill™, DB2 Connect™, DB2 Universal Database™, DB2®, DRDA®, Enterprise Storage Server®, ESCON®, Eserver™, eServer™, FlashCopy®, FICON™, IBM®, ibm.com®, iSeries™, LANClient Control Manager™, Notes®, OnForever™, Predictive Failure Analysis®, PS/2®, pSeries®, Redbooks™, Redbooks (logo)™, RETAIN®, ServerGuide™, ServerProven®, ServeRAID™, ThinkPad®, Tivoli®, TotalStorage®, Wake on LAN®, X-Architecture™, xSeries®, zSeries®
The following terms are trademarks of International Business Machines Corporation and Rational Software Corporation, in the United States, other countries or both.
Rational®
The following terms are trademarks of other companies:
Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.
Other company, product, and service names may be trademarks or service marks of others.
Preface
The IBM Eserver xSeries® 455 is the second generation Enterprise X-Architecture™ server using the 64-bit IBM® XA-64 chipset and the Intel® Itanium 2 processor. Unlike the x450, its predecessor, the x455 supports the merging of four server chassis to form a single 16-way image, providing even greater expandability and investment protection.
This IBM Redbook is a comprehensive resource on the technical aspects of the server, and is divided into five key subject areas:
Chapter 1, “Technical description” on page 1, introduces the server and its subsystems and describes the key features and how they work. This includes the Extensible Firmware Interface, which provides a powerful replacement for the BIOS facility found on the IA-32 platform.
Chapter 2, “Positioning” on page 39, examines the types of applications that would be used on a server such as the x455.
Chapter 3, “Planning” on page 55, describes the considerations when planning to purchase and to install the x455. It covers such topics as configuration, operating system specifics, scalability, and physical site planning.
Chapter 4, “Installation” on page 91, covers the process of installing Windows® Server 2003, SUSE LINUX Enterprise Server, and Red Hat Enterprise Linux AS on the x455.
Chapter 5, “Management” on page 175, describes how to use the Remote Supervisor Adapter to send alerts to an IBM Director management environment.
The team that wrote this redbook
This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, Raleigh Center.
David Watts is a Consulting IT Specialist at the International Technical Support Organization in Raleigh. He manages residencies and produces IBM Redbooks™ on hardware and software topics related to IBM xSeries systems and associated client platforms. He has authored more than 30 IBM Redbooks and Redpapers; his most recent books include Implementing Systems Management Solutions Using IBM Director, SG24-6188. He has a Bachelor of Engineering degree from the University of Queensland (Australia) and has worked for IBM for more than 14 years. He is an IBM Eserver™ Certified Specialist for xSeries and an IBM Certified IT Specialist.
Aubrey Applewhaite is a Senior IT Specialist working for the IBM Systems Group in the United Kingdom. He is a member of the Server Implementation Team and specializes in xSeries hardware, Microsoft® Windows, clustering, and VMware. He has worked in the IT industry for over 16 years and has been at IBM for eight years. He currently works in a customer-facing role, providing consultancy and practical assistance to help IBM customers implement new technology, with particular emphasis on xSeries hardware. He holds a Bachelor of Science degree in Sociology and Politics from Aston University and is an MCSE for both Windows NT® and Windows 2000, an IBM eServer™ Certified Systems Expert, a Cisco CNA, and a Compaq ProLiant ASE.
Yonni Meza is an xSeries Specialist in Peru who also works as the country’s PCI Instructor, teaching courses on xSeries and Personal Computer Division (PCD) products. Additionally, he supports the PCD team with pre- and post-sales technical support, conducting demos and presentations on a regular basis. He has four years of experience in personal computing systems as well as Intel servers. Furthermore, he has implemented several xSeries solutions such as clustering in Windows and Linux with SCSI, SAN and ESS. Yonni studied Systems Engineering at the University of Lima.
The redbook team (l-r): David, Aubrey, Yonni
Thanks to the following people for their contributions to this project:
Henry Artner, Service Education Curriculum Manager, Raleigh
Pat Byers, Program Director, Linux xSeries Alliances & Marketing
Alex Candelaria, IBM Center for Microsoft Technologies, Seattle
Greg Clarke, IBM Advanced Technical Support, Dallas
Rufus Credle, International Technical Support Organization, Raleigh
Gary Hade, IBM Linux Technology Center, Beaverton
Jim Hanna, xSeries development, Austin
Cecil Lockett, Senior Engineer, Engineering Software, Raleigh
Gerry McGettigan, Advanced Technical Support, EMEA
Michael L Nelson, IBM Eserver Solutions Engineering, Raleigh
Lubos Nikolini, Systems Engineer, HT Computers
Charles Perkins, Course Developer, Service and Support Education, Raleigh
Steve Powell, Service and Support Education Team, Raleigh
Ken Rauch, Delivery Project Manager, Markham
Jose Rodriquez Ruibal, Advanced Technical Support, EMEA
Steve Russell, EMEA ATS xSeries Product Introduction Center, Hursley
Bob Zuber, x455 World Wide Product Manager, Raleigh
Julie Czubik, Technical Editor, ITSO, Poughkeepsie
Become a published author
Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers.
Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:
Use the online “Contact us” review redbook form found at:
ibm.com/redbooks
Send your comments in an Internet note to:
redbook@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization Dept. HZ8 Building 662 P.O. Box 12195 Research Triangle Park, NC 27709-2195
Chapter 1. Technical description
The IBM Eserver xSeries 455 is the latest IBM top-of-the-line server and is the second implementation of the 64-bit IBM XA-64 chipset, code named “Summit”, which forms part of the Enterprise X-Architecture strategy. The x455 completes the xSeries product family, leveraging the proven Enterprise X-Architecture to deliver robust and reliable 64-bit systems.
This chapter covers the following topics:

“Features” on page 2
“The x455 base models” on page 4
“System assembly” on page 6
“IBM XA-64 chipset” on page 7
“Remote Supervisor Adapter” on page 19
“RXE-100 Expansion Enclosure” on page 21
“Multinode scalable partitions” on page 22
“Redundancy” on page 27
“Light path diagnostics” on page 27
“Extensible Firmware Interface” on page 29
“Operating system support” on page 35
“Enterprise X-Architecture” on page 35
1.1 Features
The following are the key features of the x455:

One-way or two-way Intel Itanium 2 models, upgradable to 4-way in a single node and 16-way in a 4-node partition.
64 MB XceL4 Server Accelerator Cache providing an extra level of cache, upgradeable to 256 MB in a 4-node partition.
1 GB or 2 GB RAM standard, upgradeable to 56 GB in a single node and 224 GB in a 4-node partition. Available options are 512 MB, 1 GB, and 2 GB ECC DDR SDRAM RDIMMs.
Memory enhancements such as memory mirroring, Chipkill™, Memory ProteXion, and hot swap.
Dual channel Ultra320 SCSI/RAID controller.
Six 64-bit Active PCI-X slots (two 133 MHz, two 100 MHz, and two 66 MHz), upgradeable to 24 64-bit PCI-X slots in a four-node partition.
Scalable system partitioning in two-node and four-node configurations via three scalability ports.
Connectivity to an RXE-100 external enclosure for an additional 12 PCI-X slots, upgradeable to 24 additional PCI-X slots in a 4-node partition.
Two hot-swap 1-inch drive bays, upgradeable to eight in a four-node partition.
Support for major storage subsystems, including SCSI and Fibre Channel.
Light path diagnostics for troubleshooting.
Remote Supervisor Adapter (RSA) for systems management and remote diagnostics.
Integrated dual 10/100/1000 Mbps Ethernet controller.
Integrated ATI Rage XL with 8 MB video RAM.
Three USB ports and one serial port.
Two 1050 W hot-swap power supplies.
24x combination DVD/CD-RW drive.
4U x 26-inch rack drawer design.
1.1.1 Comparing the x455 with the x450
The x455 builds on the proven and popular x450 and brings a number of enhancements. Table 1-1 on page 3 summarizes the differences and enhancements between the two servers.
Table 1-1 Comparing the differences between the x455 and x450

Component                                                               x450                       x455
Maximum memory (GB)                                                     40                         56
Active memory with hot-swap support                                     No                         Yes
Multi-chassis support                                                   No                         Yes (one, two or four chassis)
Shared RXE-100 between 2 machines                                       No                         Yes
Redundant cabling to RXE-100 (only from a single independent machine)   No                         Yes
Enterprise X-Architecture                                               First generation chipset   Second generation chipset

1.1.2 Features not supported

Due to its 64-bit architecture, many existing 32-bit tools and operating systems are not supported. These include:

32-bit and 16-bit operating systems
ServerGuide™
Remote Deployment Manager (RDM)
LANClient Control Manager™ (LCCM)
UpdateExpress
Access Support

64-bit versions of some of these tools will be made available in the future.

The following functions are also not supported:

More than one RXE-100 connected to a single node
Hot add/remove of an RXE-100
Physical partitioning within a single node
Partial mirroring of memory
Hot add/swap of the CD-ROM or diskette drive (if installed)
Inter-Process Communications (IPC) over scalability ports
Hot adding memory (hot swap is supported)
PS/2® keyboard and mouse
Parallel port

Important: The x455 does not have PS/2 ports for a keyboard and mouse. Either a USB keyboard and mouse are required, or the appropriate cables to connect to a KVM switch.
1.2 The x455 base models
Powered by XA-64 Enterprise X-Architecture and the 64-bit Itanium 2 “Madison” processors, the x455 server brings the future of 64-bit processing and production-level reliability to your data centers today. Featuring mainframe-inspired advanced mission-critical functions, you can depend on these 16-way-capable enterprise servers to run your complex business applications around the clock.
The initial models of the x455 are listed in Table 1-2.

Table 1-2 Initial x455 base models

Base model                8855-1RX      8855-2RX      8855-3RX
Itanium 2 processors      1 x 1.3 GHz   2 x 1.4 GHz   2 x 1.5 GHz
Max SMP                   4-way         4-way         4-way
Memory                    1 GB          2 GB          2 GB
L1 cache                  32 KB         32 KB         32 KB
L2 cache                  256 KB        256 KB        256 KB
L3 cache                  3 MB          4 MB          6 MB
XceL4 Accelerator Cache   64 MB         64 MB         64 MB

The base models can also be connected together to form two-node (eight CPUs) and four-node (16 CPUs) configurations. See “Multinode scalable partitions” on page 22 for details.

1.2.1 Front and rear views

Figure 1-1 on page 5 shows the front view of the x455 and its system components.
Figure 1-1 Front panel of the xSeries 455 (labeled components: power button, reset button, power-on light, hot-swap fans, USB port, system-error light (amber), information light (amber), SCSI activity light (green), locator light (blue), DVD/CD-RW drive, hot-swap power supplies, blank media bay, Light Path Diagnostics panel (pulls out), and hot-swap drive bays)

Figure 1-2 shows the rear view of the x455 and its system connectors.

Figure 1-2 Rear view of the x455 (labeled connectors: system power connectors 1 and 2, RXE Expansion Port A and B connectors, Remote Supervisor Adapter connectors and LEDs, Ethernet LEDs, Gigabit Ethernet connectors, video connector, USB 1 and USB 2 connectors, RXE Management Port connector, SCSI connector, serial connector, and SMP Expansion Port 1, 2, and 3 connectors)
1.3 System assembly
The x455 has a similar internal design to the x450. The midplane board (viewed from the front) interfaces with three major assemblies:
The processor-board assembly
This is located to the right of the midplane board and under the memory-board assembly. It houses the Itanium 2 processors, the Cache and Scalability Controller, and the 64 MB of XceL4 cache.
The memory-board assembly
This is located to the right of the midplane board and above the processor-board assembly. It houses the memory and memory controller.
The PCI-X board assembly
This is located to the left of the midplane board. It houses all the PCI-X slots and all other I/O components.
Figure 1-3 Memory-board assembly, processor-board assembly, and PCI-X slot locations
1.4 IBM XA-64 chipset
The IBM XA-64 chipset is the product name for the chipset developed under the code name “Summit” and implemented on the IA-64 platform. A product of the IBM Microelectronics Division, the XA-64 chipset leverages the proven Enterprise X-Architecture chipset used initially in the x440 and applies the same technologies to the IA-64 architecture. The XA-64 chipset comprises the following components:
Itanium 2 processor(s)

Cache and Scalability Controller

A single controller, code named “Tornado”, located within the processor-board assembly.
Memory controller
A single memory controller, code named “Cyclone”, located within the memory-board assembly.
Two PCI-X bridges
Two PCI-X bridges, code named “Winnipeg”, one located on the PCI-X board and the other on the I/O board. These control both the PCI-X and Remote I/O.
Figure 1-4 shows the various IBM XA-64 components in an x455 configuration.
Figure 1-4 xSeries 455 system block diagram
Table 1-3 shows how the bandwidths in Figure 1-4 are calculated.
Table 1-3 Bus speeds

From               To                  Bandwidth   Calculation
CPUs               Cache controller    6.4 GBps    400 MHz x 128-bit data path
L4 cache           Cache controller    6.4 GBps    400 MHz x 128-bit data path
SDRAM              Memory controller   3.2 GBps    400 MHz x 64-bit data path
Cache controller   Memory controller   3.2 GBps    400 MHz x 64-bit data path
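Each bandwidth figure in the Calculation column is simply the effective clock rate multiplied by the width of the data path in bytes (8 bits per byte):

\[
400\ \text{MHz} \times \frac{128\ \text{bits}}{8\ \text{bits/byte}} = 6.4\ \text{GBps}
\qquad
400\ \text{MHz} \times \frac{64\ \text{bits}}{8\ \text{bits/byte}} = 3.2\ \text{GBps}
\]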
1.4.1 The processor-board assembly
The processor-board assembly is located below the memory board. It is held in place by retaining levers, an EMC shield, and a retention bracket. For instructions on removing or installing it, refer to the Option Installation Guide.
Figure 1-5 The processor-board assembly (processors 1 and 3 and the power modules for each processor are visible; processors 2 and 4 are on the underside of the circuit board)
The power modules shown in Figure 1-5 supply power to the processors and are equivalent to VRMs in other systems.
Processors should be installed in the order 1, 2, 3, 4. The bootstrap processor (BSP) may not necessarily be the processor located in processor socket 1. The Intel Itanium Architecture processors are initialized and tested in parallel. The first processor to complete initialization becomes the BSP.
The CPUs are connected together with a 200 MHz frontside bus but supply data at an effective rate of 400 MHz using the “dual-pump” design of the Intel Itanium 2 architecture, which is described in “Intel Itanium 2 processors” on page 10.
Warning: Be careful when removing or installing the processor-board assembly or the memory-board assembly. It is possible to damage the midplane if this is not done correctly.
The processor-board assembly is also equipped with LEDs for light path diagnostics for the following components:
Each processor
Each power module (“pod”)
In addition, a “remind” button is located on the upper side of the processor-board assembly. Pressing this button while the processor-board assembly is not attached to AC power illuminates, for 10 seconds, any light path LEDs that were lit while the system was under power.
Intel Itanium 2 processors
The Itanium 2 processor used in the x455 is code named “Madison”. It uses a ZIF socket design, and its small form factor is what allows up to four processors in a dense 4U machine.
Table 1-4 outlines some of the differences between the Itanium and Itanium 2 processors (both the “Madison” and the earlier “McKinley” processor).
Table 1-4 Itanium versus Itanium 2 processors

Feature                   Itanium          Itanium 2 “McKinley”   Itanium 2 “Madison”
Processor core speed      733 or 800 MHz   900 MHz or 1.0 GHz     1.3, 1.4 or 1.5 GHz
L3 cache                  2 or 4 MB        1.5 or 3 MB            3, 4 or 6 MB
Frontside bus             266 MHz          400 MHz, 128-bit       400 MHz, 128-bit
Frontside bus bandwidth   2.1 GBps         6.4 GBps               6.4 GBps
Pipeline stages           10               8                      8
Issue ports               9                11                     11
On-board registers        328              328                    328
Integer units             3                6                      6
Branch units              3                3                      3
Floating point units      2                2                      2
SIMD units                2                1                      1
Load and store units      2 (total)        2 load and 2 store     2 load and 2 store

The Itanium 2 processor has three levels of cache, all of which are on the processor die:

Level 1 cache, 32 KB
This is new and the “closest” cache to the processor; it is used to store micro-operations, that is, decoded executable machine instructions, which it serves to the processor at rated speed. This additional level of cache saves decode time on cache hits.
Level 2 cache, 256 KB
This is equivalent to L1 cache on the Pentium® III Xeon.
Level 3 cache, 3–6 MB

This is equivalent to the L2 cache on the Pentium III Xeon or the L3 cache on the Pentium Xeon MP processor. Unlike the design of the original Itanium processor, this L3 cache is now on the processor die, greatly improving performance, up to two times that of the original Itanium.
The x455 also implements a Level 4 cache as described in “IBM XceL4 Accelerator Cache” on page 12.
Intel has also introduced a number of features associated with its Itanium micro-architecture. These are available in the x455, including:
400 MHz frontside bus
The Pentium III Xeon processor had a 100 MHz frontside bus that equated to a burst throughput of 800 MBps. With protocols such as TCP/IP, this had been shown to be a bottleneck in high-throughput situations. The Itanium 2 processor improves on this by using a single 200 MHz clock but using both edges of each clock cycle to transmit data. This is shown in Figure 1-6.
Figure 1-6 Dual-pumped frontside bus
This increases the performance of the frontside bus. The end result is an effective burst throughput of 6.4 GBps (128-bit wide data path running at 400 MHz), which can have a substantial impact, especially on TCP/IP-based LAN traffic. This is opposed to the Itanium processor, which had a burst throughput of only 2.1 GBps (64-bit wide data path running at 266 MHz).
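The arithmetic behind these figures: transferring data on both edges of the 200 MHz clock gives an effective 400 MHz transfer rate, which is then multiplied by the width of the data path in bytes:

\[
2 \times 200\ \text{MHz} \times 16\ \text{bytes} = 6.4\ \text{GBps}
\qquad \text{versus} \qquad
266\ \text{MHz} \times 8\ \text{bytes} \approx 2.1\ \text{GBps}
\]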
Explicitly Parallel Instruction Computing (EPIC)
EPIC technology, developed by Intel and HP, leads to more efficient, faster processors because it eliminates numerous processing inefficiencies in current processors and attacks the perennial data bottleneck problems by increasing parallelism, rather than simply boosting the raw “clock” speed of the processor.
Specifically, in today's 32-bit processors, much of the instruction scheduling (the order in which computing instructions are executed) is done on the chip itself, leading to a great deal of overhead and slowing down overall processor performance. Moreover, today's processors are plagued by instruction flow problems since the processor often has to stop what it is doing and reconstruct the instruction flow due to inherent inefficiencies in instruction handling.
EPIC makes the instruction scheduling more intelligent and handles much of the scheduling off-chip, in the compiler program, before feeding “parallelized” instructions to the Itanium 2 processor for execution. The parallelized instructions allow the chip to process a number of instructions simultaneously, increasing performance.
The Itanium 2 architecture is based on EPIC technology and has the following features:
– Provides faster online transaction processing
– Has the capability to execute multiple instructions simultaneously
– Enables faster calculations and data analysis
– Allows for faster storage and movement of large models (CAD, CAE)
– Speeds up simulation and rendering times
For more information about the features of the Itanium 2 processor, go to:
http://www.intel.com/design/itanium2
IBM XceL4 Accelerator Cache
Integrated into the processor-board assembly is 64 MB of Level 4 cache, which is shown in Figure 1-4 on page 8. This XceL4 Server Accelerator Cache provides the necessary extra level of cache to maximize CPU throughput by reducing the need for main memory access under demanding workloads, resulting in an overall enhancement to system performance.
Cache memory is two-way interleaved 200 MHz DDR memory and is faster than the main memory because it is directly connected to the Cache and Scalability Controller and does not have additional latency associated with the large fan-out necessary to support the 28 DIMM slots. Since the data interface to the controller is 400 MHz, peak bandwidth for the XceL4 cache is 6.4 GBps.
The XceL4 Accelerator Cache has been designed with commercial workloads in mind, which tend to have high cache hit rates. This effectively boosts performance and compensates in part for the 3.2 GBps bandwidth between the Cache and Scalability Controller and the Memory Controller.
1.4.2 The memory-board assembly
The x455 memory-board assembly is installed in the top of the server and mounts to the side of the midplane using two retaining levers on the top. This location allows for easy access to all memory DIMMs without having to remove any components from the system.
The memory-board assembly houses 28 DIMM slots. All DIMM slots can be used with 512 MB, 1 GB, or 2 GB RDIMMs, for a maximum of 56 GB. Memory can be hot-swapped but not hot-added. This function, however, is as much a part of the operating system as of the hardware; check your operating system Help for details.
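The maximum memory figures quoted for the x455 follow directly from this slot count and the largest supported DIMM:

\[
28\ \text{DIMMs} \times 2\ \text{GB} = 56\ \text{GB per node}
\qquad
4\ \text{nodes} \times 56\ \text{GB} = 224\ \text{GB}
\]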
Figure 1-7 The memory-board assembly, showing the two memory ports
The memory-board assembly is also equipped with LEDs for light path diagnostics for each DIMM. In addition, the assembly is equipped with LEDs for the following:
Power to memory port 1
Power to memory port 2
Hot-plug memory enabled
Memory used in the x455 is standard PC2100 ECC DDR SDRAM RDIMMs. The memory is 2-way interleaved; however, 4-way interleaving is also supported when both ports are engaged. Interleaving requires DIMMs to be installed in matched pairs and in specific DIMM sockets (see “Memory” on page 57).
There are 14 DIMM slots in each of the two ports, for a total of 28 DIMMs.
System memory
DIMMs must be installed in matched pairs, since the DIMMs are two-way interleaved. However, if memory is installed in matched fours (a matched pair in each port), the system automatically detects this and will enable 4-way interleaving. With this, memory access is performed simultaneously from both ports (two separate paths into the memory controller as shown in Figure 1-4 on page 8), leading to improved memory performance.
Figure 1-8 Memory DIMMs are divided into two ports
There are a number of advanced features implemented in the x455 memory subsystem, collectively known as Active Memory:
Memory ProteXion
Memory ProteXion, also known as “redundant bit steering”, is the technology behind using redundant bits in a data packet to provide backup in the event of a DIMM failure.
Currently, other industry-standard servers use 8 bits of the 72-bit data packets for ECC functions and the remaining 64 bits for data. However, the x455 (and several other xSeries servers) use an advanced ECC algorithm that is based not on bits but on memory symbols. Symbols are groups of multiple bits, and in the case of the x455, each symbol is 4 bits wide. With two-way interleaved memory, the algorithm needs only three symbols to perform the same ECC functions, thus leaving one symbol free (2 bits on each DIMM).
In the event that a chip failure on the DIMM is detected by memory scrubbing, the memory controller can re-route data around that failed chip through the spare symbol (similar to the hot-spare drive of a RAID array). It can do this automatically without issuing a Predictive Failure Analysis® (PFA) or light path diagnostics alert to the administrator. After the second DIMM failure, PFA and light path diagnostics alerts would occur on that DIMM as normal.
Memory scrubbing
Memory scrubbing is an automatic daily test of all the system memory that corrects soft errors and reports recoverable errors. An excessive rate of recoverable errors reported triggers Memory ProteXion to replace the failing locations.
Memory mirroring
Memory mirroring is equivalent to RAID-1 in disk arrays, in that memory is divided in two ports and one port is mirrored to the other (see Figure 1-8 on page 14). If 8 GB is installed, then the operating system sees 4 GB once memory mirroring is enabled (it is disabled by default). All mirroring activities are handled by the hardware without any additional support required from the operating system.
When memory mirroring is enabled the data that is written to memory is stored in two locations. One copy is kept in the port 1 DIMMs, while a second copy is kept in the port 2 DIMMs.
During the execution of the read command, the data is read simultaneously from both ports, and error-free data from either port is forwarded. This provides an extra level of error recovery capability.
When an unrecoverable memory error from one of the memory ports is encountered, good data from the non-failing memory port is forwarded to the system. The failing DIMM is reported and indicated with light path. The failing memory port is then disabled.
Certain restrictions exist with respect to placement and size of memory DIMMs when memory mirroring is enabled. These are discussed in “Memory mirroring” on page 58.
Chipkill memory
Chipkill is integrated into the XA-64 chipset and does not require special Chipkill DIMMs. Chipkill corrects multiple single-bit errors to keep a DIMM from failing. When combining Chipkill with Memory ProteXion and Active Memory, the x455 provides very high reliability in the memory subsystem. Chipkill memory is approximately 100 times more effective than ECC technology, providing correction for up to 4 bits per DIMM, whether on a single chip or multiple chips.
If a memory chip error does occur, Chipkill is designed to automatically take the inoperative memory chip offline while the server keeps running. The memory controller provides memory protection similar in concept to disk array striping with parity, writing the memory bits across multiple memory chips on the DIMM. The controller is able to reconstruct the “missing” bit from the failed chip and continue working as usual.
Chipkill support is provided in the memory controller and implemented using standard RDIMMs, so it is transparent to the operating system.
In addition, to maintain the highest levels of system availability, if a memory error is detected during POST or memory configuration, the server can automatically disable the failing DIMM and continue operating with reduced memory capacity. You can manually re-enable the memory bank after the problem is corrected, via the Setup/Configuration option in the EFI Firmware Boot Manager menu. EFI, the Extensible Firmware Interface, is the replacement for the BIOS, as described in “Extensible Firmware Interface” on page 29.
Memory ProteXion, memory mirroring, and Chipkill provide multiple levels of redundancy to the memory subsystem. Combining Memory ProteXion with Chipkill enables up to two memory chip failures per memory port (14 DIMMs). Both memory ports could sustain up to four memory chip failures. Memory mirroring provides additional protection with the ability to continue operations with memory module failures.
1. The first failure detected by the Chipkill algorithm on each port does not generate a light path diagnostics error, since Memory ProteXion recovers from the problem automatically.
2. Each memory port could then sustain a second chip failure without shutting down.
3. Provided that memory mirroring is enabled, the third chip failure on that port would send the alert and take the DIMM offline, but keep the system running out of the redundant memory bank.
The combination of these technologies provides the most reliable memory subsystem available.
1.4.3 PCI-X board assembly
Strictly speaking, the PCI-X board assembly does not exist in the same way that the processor-board and memory-board assemblies do. The term is used here loosely to describe the PCI-X board and I/O subsystem, which together comprise the remaining components of the machine.
The two PCI-X bridges in the XA-64 chipset provide support for 33, 66, 100, and 133 MHz devices using 4 PCI-X buses (labelled A–D in Figure 1-4 on page 8).
The PCI-X bridges also have two 1 GBps bi-directional Remote Expansion I/O (RXE) ports for connectivity to the RXE-100 Expansion Enclosure. The RXE-100 provides up to an additional 12 PCI-X slots and can be connected by a single cable to port A or to both ports A and B to provide redundancy.
The rear panel of the x455, which indicates the location of the RXE Expansion Ports, is shown in Figure 1-2 on page 5.
PCI-X subsystem
There are six available PCI-X slots in four buses. These are shown in Figure 1-9.
Figure 1-9 PCI-X slot information
Only 3.3V cards are supported, but all slots will accept the following:
32-bit or 64-bit cards
33 MHz, 66 MHz, 100 MHz, or 133 MHz cards
(As labeled in Figure 1-9, viewed from the back of the server: PCI-X slots 1 and 2 run at 66 MHz on bus A, slots 3 and 4 at 100 MHz on bus B, and slots 5 and 6 at 133 MHz on buses C and D, respectively.)
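As a rough guide, a 64-bit slot moves 8 bytes per clock cycle, so the peak bandwidth of each slot is its bus clock multiplied by eight:

\[
133\ \text{MHz} \times 8\ \text{bytes} \approx 1.06\ \text{GBps}
\qquad
100\ \text{MHz} \times 8\ \text{bytes} = 0.8\ \text{GBps}
\qquad
66\ \text{MHz} \times 8\ \text{bytes} \approx 0.53\ \text{GBps}
\]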
Bear in mind that the overall throughput of a bus is determined by its lowest common denominator: a slow card drags the whole bus down to its speed. It is therefore important to consider the placement of cards carefully to get the best throughput.

The following guidelines will help you gain the best performance from the PCI-X buses:

Put cards in a slot that will give the best performance, but not in a slot rated higher than the card.
If a 133 MHz card is in bus B, avoid using the other slot in the bus if possible, so that the card will operate at its rated speed.
Use buses B, C, and D in preference to A. Bus A is connected to the PCI-X south bridge with legacy devices and competes with these “slower” devices when sending and receiving data to and from the memory controller.

Important: The rating of bus B at 100 MHz is for a fully populated bus. If a single 133 MHz card is installed, the bus increases to 133 MHz. This is part of the PCI-X specification.

See “PCI-X slot configuration” on page 61 for details on which adapters are supported and in what combinations.
The PCI-X subsystem also supplies these I/O devices:

Dual channel Ultra320 SCSI/RAID controller, with one internal port and one external port, using the LSI 53C1030 chipset. The internal port supports both single disks and RAID-1 mirrored pairs of disks. The external port supports additional RAID configurations, but this has a number of limitations and is not recommended.

Note: The LSI 53C1030 is a PCI-X to Dual Channel Ultra320 SCSI/RAID Multifunction Controller. Internally, however, each SCSI channel is managed separately and controlled by two separate PCI-X configuration spaces. As a result, it may be viewed by software, such as the LSI configuration utility, ServeRAID™, and the operating system, as two separate controllers, each with a single channel.

Dual Gigabit 10/100/1000 Ethernet ports using the Broadcom 5704 chipset. The BCM5704 supports full- and half-duplex operation at all speeds (10/100/1000 Mbps, auto-negotiated) and includes integrated on-chip memory for buffering data transmissions to ensure the highest network performance. It has dual onboard RISC processors for advanced packet parsing and backwards compatibility with today's 10/100 networks. It includes software support for failover, layer-3 load balancing, and comprehensive diagnostics. Category 5 or better Ethernet cabling with RJ-45 connectors is required. If you plan to implement a Gigabit Ethernet connection, ensure that your network infrastructure is capable of the necessary throughput to match the server's I/O capacity.

SVGA with 8 MB video memory (ATI RageXL chipset).

Three USB ports: one on the front panel and two on the rear. All ports are USB 2.0 compliant.

One RS-232 serial port, located on the rear of the machine.

Remote Supervisor Adapter with:
– 10/100 Ethernet management port
– RS-485 ASM interconnect bus
– Serial port
– External power port

There are no PS/2 keyboard or mouse ports on the x455. USB keyboards and mice are supported. If you require KVM support, the 1.5 m USB Conversion Option (UCO), part number 73P5832, enables the x455 to be attached to one of the Advanced Connectivity Technology (ACT) switches for common management within the rack. This smart cable plugs into the USB and video ports on the server and converts KVM signals to CAT5 signals for transmission over a CAT5 cable to either a Remote Console Manager (RCM) or a Local Console Manager (LCM). USB servers can be managed on the same set of switches as legacy PS/2-based or C2T-based KVM servers.
1.5 Remote Supervisor Adapter
The x455 includes a Remote Supervisor Adapter (RSA), which is positioned horizontally in a dedicated PCI-X slot beneath the PCI-X adapter area of the system.
Figure 1-10 Remote Supervisor Adapter connectors (external power supply, error LED (amber), power LED (green), ASM interconnect (RS-485) port, 10/100 Ethernet port, and management COM port)
The Remote Supervisor Adapter allows you to provide remote management both out-of-band and in-band. Out-of-band refers to managing the server without the use of the operating system; this is done via the Ethernet or serial port, through a Web or Telnet session. In-band refers to managing the server through the operating system, typically via IBM Director and/or SNMP.
The features of the Remote Supervisor Adapter include:

In-band and out-of-band remote server access and alerting through IBM Director
Full Web browser support with no other software required
Enhanced security features
Graphics/text console redirection for remote control
Dedicated 10/100 Ethernet access port
ASM interconnect bus for connection to other service processors
Serial dial in/out
E-mail, pager, and SNMP alerting
Event log
Predictive Failure Analysis on memory, power, hard drives, and CPUs
Temperature and voltage monitoring with settable thresholds
Light path diagnostics
Automatic Server Restart (ASR) for operating system and POST
Remote firmware update
LAN access
Alert forwarding
Ability to manage and monitor an RXE-100
See the IBM Redbook Implementing Systems Management Solutions using IBM Director, SG24-6188, for more information on the Remote Supervisor Adapter.
1.6 RXE-100 Expansion Enclosure
The attachment of an RXE-100 Remote Expansion Enclosure is also supported, in a number of configurations, some of which offer redundancy. The RXE-100 is connected to the x455 via one or two remote I/O cables and has a throughput of 1 GBps. It comes standard with six PCI-X slots and is upgradable to 12 PCI-X slots, giving you a total of 12 or 18 PCI-X slots, respectively (the server's six internal slots plus the six or 12 in the enclosure). In a multinode configuration, an RXE-100 can be shared by two machines. This allows a total of one RXE-100 in a 2-node partition and two RXE-100s in a 4-node partition.
For systems configured for RXE failover, one of the RXE Expansion Port connections may fail at either runtime or boot time and the system will continue to operate properly. Failover capability requires the RXE-100 to be configured with 12 slots, to be configured in unified (that is, non-shared) mode, and to have both RXE Expansion Ports connected. Runtime failures are logged in the System Error Log. The broken connection is not re-established at runtime; a reboot is required to recover failover capability.
An RXE-100 connection also requires the connection of the RXE management port(s). There is no failover capability for RXE management port connections.
The following connectivity options are supported:

– Connecting one 6-slot RXE to one x455
– Connecting one 12-slot RXE to one x455 using one or (recommended) two data cables
– Sharing one 12-slot RXE between two independent x455s
– Connecting one RXE to a two-node x455, configured so as to share the slots between the nodes
– Connecting one RXE to a 4-node x455 (one RXE connected to two of the nodes, either nodes 1 and 2, or nodes 3 and 4)
– Connecting two RXEs to a 4-node x455 (one RXE connected to two of the nodes), configured so as to share the slots between nodes 1 and 2, and between nodes 3 and 4
See “Remote Expansion Enclosure connectivity” on page 70 for details.
Important: Although other combinations of attaching RXE-100s in a multinode partition will work, they are not supported configurations.
1.7 Multinode scalable partitions
The x455 can be configured as part of a multinode partition. The partition can comprise either two nodes or four nodes. The partition is a hardware partition, which is invisible to the operating system: from the operating system's point of view, all the hardware appears as a single machine.
All nodes in a scalable partition must have the following:

– Same base model
– Processors that have the same core speed and cache
– Memory mirroring configured the same on all nodes, that is, either all enabled or all disabled

When you have a multinode configuration, all nodes must be fully populated with processors. In other words, only 8-way and 16-way CPU configurations are supported.
Two-node and four-node partitions are depicted conceptually in Figure 1-11 and Figure 1-12 on page 23. These figures also show the cable requirements.
Figure 1-11 Two-node partition
Figure 1-12 Four-node partition
The partition is powered on by turning on the primary node (Node 1). This will power on the other nodes. It is also powered off by turning off the primary node, which will power off the other nodes.

Note: Powering on and off the other nodes will have the same effect, but this should always be done from the primary node.
The three scalability ports on the back of the machine are used to interconnect the machines, and the Configuration/Setup utility is used to define the partition and members of the partition. The scalability cables can be up to 3.5 meters long. This restriction normally means that all members of a partition will be in the same rack.
For properly configured two-node systems, one of the scalability port connections may fail at either runtime or boot time and the system will continue to operate properly. Runtime failures will be logged in the System Error Log.
For properly configured four-node systems, one of the scalability port connections may fail at runtime and the system will continue to operate properly. Failures will be logged in the System Error Log. Boot time scalability connection failure results in each node booting to the EFI shell independently.
Table 1-5 lists the maximum supported configurations for one-node, two-node, and four-node partitions.
Table 1-5 Comparing one-node, two-node and four-node configurations
Component                                   1 node         2 nodes        4 nodes
Itanium 2 processors                        4              8              16
Memory                                      56 GB          112 GB         224 GB
XceL4 cache                                 64 MB          128 MB         256 MB
Internal hard disks                         2              4              8
Active 64-bit PCI-X slots                   2 x 66 MHz     4 x 66 MHz     8 x 66 MHz
                                            2 x 100 MHz    4 x 100 MHz    8 x 100 MHz
                                            2 x 133 MHz    4 x 133 MHz    8 x 133 MHz
RXE-100 support                             1              1              2
Active 64-bit PCI-X slots with RXE          2 x 66 MHz     4 x 66 MHz     8 x 66 MHz
                                            2 x 100 MHz    4 x 100 MHz    8 x 100 MHz
                                            14 x 133 MHz   16 x 133 MHz   32 x 133 MHz
Dual-channel integrated Ultra320 SCSI       1              2              4
Dual-port integrated Ethernet controller    1              2              4
Keyboard and mouse                          1              2              4
USB ports                                   3              6              12
Video ports                                 1              2 (1)          4 (1)
Wake on LAN® Ethernet cards                 2              2              2

1. Supported for Windows Server 2003 Enterprise Edition only.
1.7.1 RXE-100 connectivity
RXE-100s may be attached to x455 systems. They can be connected using either optional 3.5 m scalability port cables or optional 8 m RXE cables. The RXE cable length allows placement of RXE-100s in a rack adjacent to the rack containing the x455 system.
Conceptually, two-node and four-node configurations with RXE-100s are depicted in Figure 1-13 on page 25 and Figure 1-14 on page 25. These also show the cable requirements.
Figure 1-13 Two-node configuration with one RXE-100
Figure 1-14 Four-node configuration with two RXE-100s
1.7.2 Multinode configuration
Multinode system configurations are defined to each x455 using the EFI Configuration/Setup menus. These options are fairly self-explanatory and allow you to configure the following:
– Partition size (two-node or four-node)
– Partition ID
– IP address of member nodes (IP address of the RSA)
– Additional shared resources (video and/or CD-ROM)
All nodes participating in a multinode system must be connected to the network via the service processor (RSA) Ethernet port.
1.7.3 Integrated I/O function support
The following integrated I/O functions are available in multinode x455 systems.
Table 1-6 Integrated I/O function support
Function                   Primary node   Secondary nodes
USB ports                  Yes            Yes (1)
Serial port                Yes            No
Disk drive bays (SCSI)     Yes            Yes
SCSI port                  Yes            Yes
Ethernet ports             Yes            Yes
Video ports                Yes            Yes (2) (3)
Media bays (IDE)           Yes            Yes
Wake on LAN                Yes            No

1. Keyboard or mouse can be used on any machine.
2. Windows Server 2003 Enterprise Edition only.
3. Windows Server 2003 Datacenter Edition does not support multiple monitors as there currently is no certified driver for the ATI RageXL chipset.

1.7.4 Error recovery

With a multinode configuration, in the event there is a major problem with one of the nodes, the remaining nodes will boot to EFI as single-node partitions. This is done to facilitate diagnostics and reconfiguration.
1.8 Redundancy
The x455 has the following redundancy features to maintain high availability:

– Four hot-swap multi-speed fans.

With four hot-swap redundant fans, the x455 has adequate cooling for each of its major component areas. Two fans located at the front of the server direct air through the memory-board assembly and processor-board assembly. These fans are accessible from the top of the server without having to open the system panels. In the event of a fan failure, the other fan speeds up to continue to provide adequate cooling until the failed fan can be hot-swapped.

The other two fans are located just behind the power supplies and provide cooling for the I/O devices. Similar to the front fans, these fans will speed up in the event that one should fail and will compensate for the reduction in airflow. In general, failed fans should be replaced within 24 hours.

Due to airflow requirements, fans should not be removed for longer than two minutes. The fan compartments need to be fully populated even if a fan is defective. Therefore, remove a defective fan only when a new fan is available for immediate replacement.
– Two hot-swap power supplies with separate power cords.
For large configurations, redundancy is achieved only when connected to a 220 V power supply. See 3.5, “Power considerations” on page 87 for details.
To ensure adequate power, an appropriately rated UPS is recommended.
– Two hot-swap hard disk drive bays. Using either the onboard LSI chipset or a ServeRAID adapter, these drives can be configured to form a RAID-1 disk array for the operating system.
– The memory subsystem has a number of redundancy features, including memory mirroring, as described in “System memory” on page 14.
The layout of the front panel of the x455, showing the location of the four fans, two drive bays, and two power supplies, is shown in Figure 1-1 on page 5.
1.9 Light path diagnostics
To limit the need to slide the server out of the rack to diagnose problems, a light path diagnostics panel has been incorporated in the front of the x455, as shown in Figure 1-15. This panel can be ejected from the server to view all of the server subsystems monitored by light path
diagnostics. In the event that maintenance is required, the customer can slide the server out from the rack and, using the LEDs, find the failed or failing component.
As illustrated in Figure 1-15, light path diagnostics are able to monitor and report on the health of CPUs, main memory, hard disk drives, PCI-X slots, fans, power supplies, power modules and the internal system temperature.
Figure 1-15 Light path diagnostics panel on the x455
The light path diagnostics on the x455 have four levels:

– Level 1 is the front-panel fault LED.
– Level 2 is the pop-out panel, as shown in Figure 1-15.
– Level 3 consists of light path diagnostics LEDs visible through the top of the server; this requires the server to be slid out of the rack.
– Level 4 consists of LEDs on major system components that indicate the component causing the error.
As the processor-board assembly is not visible during normal operation, a light path diagnostics button has been incorporated into it to assist with diagnosing errors. Pressing the button lights the LEDs for a maximum of two minutes; after that time, the stored charge that powers them is depleted.
Important: If a light path diagnostics LED has been illuminated and system power is removed, there is no way to redisplay the LEDs on the system tray without re-applying AC power. If the fault has not been rectified when power is restored, the LED will re-light.
(The LEDs on the light path diagnostics panel are: CPU, VRM, MEMORY, DASD, NMI, BOARD, EVENT LOG, FAN, POWER SUPPLY, PCI-X BUS, NON REDUND, OVER SPEC, TEMP, and REMIND.)
The pop-out panel (Figure 1-15 on page 28) also has a remind button. This places the front panel system-error LED into remind mode, which means it flashes briefly every two seconds. By pressing the button, you acknowledge the failure but indicate that you will not take immediate action. If a new failure occurs, the system-error LED will turn on again and no longer blink. The system-error LED remains in the remind mode until one of the following situations occurs:
– All known problems are resolved.
– The system is restarted.
– A new problem occurs, at which time it is illuminated continuously.

Tip: The remind button on the pop-out LPD panel does not function when AC power has been removed from the system. The button is just used to acknowledge a system error as described above.
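The remind-mode rules above amount to a small state machine. The following Python sketch is purely illustrative (the class and method names are ours, not IBM firmware code) and models only the transitions just described:

class SystemErrorLED:
    """Conceptual model of the front-panel system-error LED logic."""

    OFF, SOLID, REMIND = "off", "solid", "remind"

    def __init__(self):
        self.state = self.OFF

    def fault_detected(self):
        # Any new failure turns the LED on continuously, even if an
        # earlier failure was acknowledged (remind mode).
        self.state = self.SOLID

    def remind_pressed(self):
        # Acknowledge the failure: the LED then flashes briefly
        # every two seconds.
        if self.state == self.SOLID:
            self.state = self.REMIND

    def all_problems_resolved(self):
        self.state = self.OFF

    def system_restarted(self):
        self.state = self.OFF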
1.10 Extensible Firmware Interface
The Extensible Firmware Interface (EFI) specification describes an interface between the operating system and platform firmware, as shown in Figure 1-16 on page 30. The interface offers platform-related information to the operating system, as well as boot and runtime service calls that are available to the operating system and OS loader. Together, these provide a well-defined environment for booting the operating system and running pre-boot applications, such as diagnostics, system setup, and driver setup utilities.
Compared with a BIOS-based legacy system, EFI is an additional layer between the operating system and the firmware. In a legacy system, the OS loader calls BIOS functions directly; consequently, to provide a stable boot environment, changes in the OS loader and the platform firmware must go hand in hand. Figure 1-16 on page 30 shows a conceptual view of the EFI.
Figure 1-16 The EFI concept (layers, top to bottom: operating system, EFI operating system loader, EFI boot and runtime services, System Abstraction Layer (SAL), Processor Abstraction Layer (PAL), platform hardware)
The primary goal of this specification is to provide an abstract model for both operating system and hardware developers. With such a model in place, OS loader customizations are not required when the platform hardware or firmware changes, for instance when new boot or input devices are added. The EFI breaks the tight dependency between the operating system and the firmware, thus speeding up the process of releasing new products and introducing new features and functionality to the hardware and/or operating system.
Consider, for example, the situation where a new type of boot device, say a USB key, is to be implemented. First the legacy BIOS would have to offer an option to choose this new device for booting, then new USB key-specific functions would have to be added to the firmware to support booting from a USB device, and finally, the OS loader would have to be modified to use these functions.
The same situation with the EFI would be dramatically simplified. The OS loader calls unified (not vendor-specific) EFI API functions for booting. These functions are not dependent on the boot device used, so when a new boot device type is
added to the platform and the firmware is modified to recognize it, the operating system can immediately boot.
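The decoupling that EFI provides can be pictured as a thin abstraction layer. The following Python sketch is conceptual only (EFI itself is specified in C, and the class names here are invented for illustration); the point is that the loader calls one unified read interface, so adding a new boot device type touches only the driver side:

from abc import ABC, abstractmethod

class BlockDevice(ABC):
    """A boot device as the firmware presents it to the OS loader."""

    @abstractmethod
    def read_blocks(self, lba, count):
        """Return 'count' logical blocks starting at block 'lba'."""

class ScsiDisk(BlockDevice):
    def read_blocks(self, lba, count):
        return b"\x00" * 512 * count  # stand-in for a real SCSI transfer

class UsbKey(BlockDevice):
    # Supporting a new boot device type means adding a driver like this;
    # the loader below does not change at all.
    def read_blocks(self, lba, count):
        return b"\x00" * 512 * count  # stand-in for a real USB transfer

def os_loader(device):
    # The loader depends only on the unified interface, never on the
    # specific device type underneath.
    boot_sector = device.read_blocks(0, 1)
    return len(boot_sector)

print(os_loader(ScsiDisk()))  # 512
print(os_loader(UsbKey()))    # 512, with no change to the loader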
Although EFI was originally introduced with the Itanium architecture, it is not restricted to 64-bit platforms. The EFI architecture is modular, extensible, and offers backward compatibility for older systems by default. This means that it is possible for non-EFI-aware operating systems to communicate directly with the system BIOS/firmware; however, this depends on how a particular manufacturer implements EFI on its products. The x455 has a small portion of legacy BIOS code used only for running the IA-32 video driver, but it supports only EFI-aware operating systems. From a long-term perspective, a gradual transition from BIOS to EFI is expected for IA-32 systems.
EFI does offer an extensive development interface that enables it to be customized to load additional drivers, provide corporate identity, or provide authentication and security features. Tools are available from Intel’s Web site to assist with this.
If you have not used an EFI system before, you will come across a number of new acronyms. These are listed in Table 1-7 with a brief description.
Table 1-7 EFI acronyms and descriptions

ACPI     Advanced Configuration and Power Interface. An industry-standard interface for OS-directed configuration and power management for servers, desktops, and laptops.
EFI      Extensible Firmware Interface. A new interface between the system firmware and operating system.
PAL      Processor Abstraction Layer. Provides various processor functions that vary between different implementations of a particular platform.
SAL      System Abstraction Layer. A firmware layer that provides various system functions that vary between different implementations of a particular platform.
SMBIOS   Systems Management BIOS. A specification that defines how systems management information is presented in a standard format on Intel architecture systems.
1.10.1 GUID Partition Table disk
The GUID Partition Table (GPT) was introduced as part of the EFI initiative. Every disk is assigned a global unique identifier (GUID) to allow self-identification
of the disks. GPT replaces the older Master Boot Record (MBR) partitioning scheme that has been common to PCs.
There are several reasons for introducing a new partitioning scheme:

– MBR disks support only four partition table entries and a volume size of 2 TB (terabytes). If more partitions are required, one of these partitions must be an extended partition; only one extended partition is allowed per disk drive. Extended partitions are then subdivided into one or more logical disks.
– In theory, GPT disks support an unlimited number of partitions. The number of partitions is limited only by the amount of space reserved for partition entries.
– GPT disks can grow to a very large size: up to 2^64 logical blocks in length (logical blocks are typically 512 bytes), which equates to 18 EB (exabytes). In practice, the maximum is less, as there are practical limitations with other parts of the system accessing this amount of disk space efficiently.
– GPT disks use primary and backup partition tables for redundancy, and CRC32 fields for improved partition data structure integrity.
For backward compatibility with legacy MBR disk tools, all GPT disks contain a
protective MBR. The protective MBR, beginning in sector 0, precedes the GUID
Partition Table on the disk and contains only one partition entry that appears to span the entire disk. The legacy tools are not aware of GPT and do not know how to properly access a GPT disk. The benefit of a protective MBR is that these tools will view a GPT disk as having a single encompassing (possibly unrecognized) partition, rather than mistaking the disk for one that is unpartitioned. That is why a GPT-partitioned disk appears to have an MBR.
GPT disks can be converted to MBR disks and vice versa only if all existing partitioning is first deleted, with associated loss of data. To a legacy tool, the structure of any GPT disk appears to look like Figure 1-17. The protective MBR is followed by a theoretically unlimited number of data (or possibly unrecognized) partitions.
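These on-disk structures are straightforward to check for. The following Python sketch (our illustration, assuming a raw disk image with 512-byte logical blocks) reads LBA 0 and LBA 1 and distinguishes a GPT disk, with its protective MBR entry of type 0xEE and "EFI PART" header signature, from a plain MBR disk:

SECTOR = 512  # logical block size assumed to be 512 bytes

def disk_scheme(path):
    """Classify a raw disk image as GPT, MBR, or unpartitioned."""
    with open(path, "rb") as disk:
        mbr = disk.read(SECTOR)         # LBA 0: MBR (protective or legacy)
        gpt_header = disk.read(SECTOR)  # LBA 1: GPT header, if present

    if mbr[510:512] != b"\x55\xaa":
        return "unpartitioned (no MBR boot signature)"

    # Four 16-byte partition entries start at offset 446; the partition
    # type byte is at offset 4 within each entry. Type 0xEE marks the
    # single protective entry that appears to span the entire disk.
    types = [mbr[446 + 16 * i + 4] for i in range(4)]

    if 0xEE in types and gpt_header[:8] == b"EFI PART":
        return "GPT (protective MBR plus EFI PART header at LBA 1)"
    return "MBR"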
Figure 1-17 General GPT disk structure
(Layout: protective MBR, followed by the data partitions.)
Currently, only 64-bit operating systems have the ability to read, write, and boot from GPT disks. 32-bit operating systems do not have built-in support for GPT disks.
The specification for GPT disk partitioning can be found in Chapter 16 of the Extensible Firmware Interface (EFI) specification. This document is available at:
http://developer.intel.com/technology/efi/download.htm
1.10.2 EFI System Partition
A special partition on the GPT disk is the EFI System Partition (ESP). It contains the OS loader files of all installed operating systems. These files are stored in the EFI directory. The ESP may also contain other files necessary to boot the system, such as drivers. The EFI System Partition is shareable among all installed operating systems. To support multiple operating system installations, create multiple data partitions.
An example directory structure for an EFI System Partition present on a hard disk with Linux and Windows Server 2003 installed is as follows:
\EFI
\Microsoft
\WINNT50
\EFIDrivers \SuSE \RedHat
\MSUtils
There can be only one ESP on a single disk. The size of the ESP is determined using the following algorithm:
ESP = max(100 MB, min(1% of physical disk, 1 GB))
In other words, the size of the ESP must be the larger of these two numbers, 100 MB or 1 percent of the physical disk size (up to 1 GB). For example, for an 18 GB disk, the size of the ESP is 184 MB. The value 1 percent of the physical disk is calculated at the time that the ESP is created and does not change if the disk is extended later (for example, via RAID).
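Expressed directly in Python, the sizing rule looks as follows (a small sketch of the algorithm above; sizes are in MB, and the 1 percent is taken of the disk size at the time the ESP is created):

def esp_size_mb(disk_size_mb):
    """EFI System Partition size: max(100 MB, min(1% of disk, 1 GB))."""
    return max(100.0, min(0.01 * disk_size_mb, 1024.0))

# The 18 GB example from the text: 1% of 18 x 1024 MB is about 184 MB.
print(esp_size_mb(18 * 1024))   # -> 184.32
print(esp_size_mb(4 * 1024))    # 1% of 4 GB is under 100 MB -> 100.0
print(esp_size_mb(200 * 1024))  # 1% of 200 GB exceeds 1 GB -> 1024.0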
Warning: Windows Server 2003 Enterprise and Datacenter Editions require a Microsoft Reserved Partition (MSR), which must come after the ESP. If Windows and Linux need to reside on the same machine then Windows must be installed first.
Each bootable GPT disk must contain an EFI System Partition, and it should be the first partition on the disk, right after the protective MBR, as shown in Figure 1-18.
Figure 1-18 Boot GPT disk structure (layout: protective MBR, GUID Partition Table, EFI System Partition (ESP), data partitions)
The EFI specification supports only FAT or FAT32 on the ESP partition. The ESP is not visible to the operating system’s users by default, but can be accessed for read/write operations from within the operating system by special commands. For Windows-specific information, see “Accessing EFI System Partition from Windows” on page 147. For Linux information, see “Partitions on IA-64 Linux” on page 166.
Note that a non-EFI disk partitioning tool does not understand the structure in Figure 1-18 and sees it as the structure in Figure 1-17 on page 32.
1.10.3 EFI and the reduced-legacy concept
The EFI is reduced-legacy, which means that it eliminates certain hardware and firmware elements of the original PC architecture while advancing the PC's stability and usability. These I/O components have been part of the PC architecture for a long time. In the x455, there is no support in EFI for a parallel port, ISA slots, mouse and keyboard ports, or a diskette drive.
EFI replaces this support with the support for new technology such as bootstrap loading from USB devices and the use of USB mouse and keyboard interfaces.
For more information about the EFI specification, see the following:
http://www.intel.com/technology/efi/index.htm http://www.microsoft.com/hwdev/platform/firmware/EFI/default.asp
Tip: A diskette drive can be used if the 64-bit operating system supports the device, but the EFI does not support the use of a diskette drive. We recommend that you use a USB memory key instead.
1.11 Operating system support
The following operating systems are supported:

– Windows Server 2003 Enterprise Edition
– Windows Server 2003 Datacenter Edition
– Red Hat Linux AS 2.1
– Red Hat Enterprise Linux AS 3.0
– SUSE LINUX Enterprise Server 8.0
These are further discussed in “Operating system support” on page 87. This list will increase as vendors port the operating systems to 64-bit architecture. For the latest operating system support information, go to:
http://www.pc.ibm.com/us/compat/nos/matrix.shtml
1.12 Enterprise X-Architecture
IBM’s Enterprise X-Architecture technologies yield revolutionary advances in the I/O, memory, and performance of xSeries servers. This design creates a flexible “pay as you grow” approach to buying high-end 32-bit and 64-bit xSeries systems that can be scaled quickly, easily, and inexpensively. Enterprise X-Architecture technology is designed to achieve unparalleled levels of availability, scalability, and performance in industry-standard enterprise computing.
Enterprise X-Architecture technology enables the following capabilities:
– XpandOnDemand scalability
– System partitioning
– PCI-X I/O subsystem
– Active PCI-X
– Remote I/O
– Active Memory
– High-speed (DDR) memory
– Memory ProteXion
– Chipkill memory
– Memory mirroring
– Hot-add/hot-swap memory
– XceL4 server accelerator cache
These features deliver application flexibility and new tools for managing e-business. They bring to industry-standard servers the kinds of capabilities formerly available only to users of mainframes and other high-end systems. Combined with existing X-Architecture technologies, these innovations result in
unmatched “economies of scale” and new levels of server availability and performance.
Much of the Enterprise X-Architecture offering is delivered through IBM-developed core logic. IBM has more proven product technology and expertise in designing core logic than anyone else in the industry. The IBM XA-32 and XA-64 families of chipsets for 32-bit and 64-bit industry-standard servers contain advanced core logic, which is the heart of a computer system. Core logic determines how the various parts of a system (processors, system cache, main memory, I/O, etc.) interact. These new chipsets bring to the next generation of industry-standard servers key advantages, including modular system nodes, system partitioning, high-performance clustering, and high-speed remote PCI-X I/O support.
The Enterprise X-Architecture paradigm sets IBM xSeries servers apart in the industry while maintaining the advantages of compliance with industry standards for processors, memory, I/O, storage, and software. Enterprise X-Architecture also establishes a revolutionary new economic model for servers through its flexible modular design. The x455 Enterprise X-Architecture-based server gives customers XpandOnDemand: the ability to grow when you need to, using the existing infrastructure.
More information on Enterprise X-Architecture technology can be found at:
http://www.pc.ibm.com/us/eserver/xseries/xarchitecture/enterprise
1.12.1 NUMA architecture
SMP designs are currently being challenged by the demand for additional Intel processors within a single system. In SMP, CPUs have equal access to a single shared memory controller and memory space. Because processor speed has dramatically increased to be much faster than memory, for most CPU intensive workloads the single front-side bus and single memory controller are often a bottleneck, resulting in excessive queuing delays and long memory latencies when under heavy load.
Processor instruction and data caches have been introduced to help reduce the likelihood of a memory controller bottleneck. A performance boost is often obtained from using large caches for two reasons:

– Caches have lower latency and faster access time than a memory access. Because a cache can be accessed more quickly than memory, the processor waits less time before receiving a response to a request.
– Caches also improve performance because they reduce the queuing time of accesses that miss the caches and require a physical memory access.

For most commercial applications, cache hit rates are usually greater than 70 percent. In this case, the cache greatly reduces memory latency because most processor memory requests are serviced by the faster cache. The caches act as filters and reduce the load on the memory controller, which results in lower queuing delays (waiting in line) at the memory controller, thereby speeding up the average memory access time.
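The effect of the hit rate on average memory access time is easy to make concrete. The latency figures in the following Python sketch are hypothetical, chosen only to illustrate the arithmetic; they are not x455 measurements:

def average_access_time(hit_rate, cache_ns, memory_ns):
    """Average memory access time for a single level of cache."""
    return hit_rate * cache_ns + (1.0 - hit_rate) * memory_ns

# Hypothetical latencies: 10 ns for a cache hit, 150 ns for memory.
print(average_access_time(0.70, 10, 150))  # 52.0 ns at a 70% hit rate
print(average_access_time(0.90, 10, 150))  # 24.0 ns at a 90% hit rate

Even at the 70 percent hit rate cited above, the average access time drops to roughly a third of the raw memory latency, which is why the caches so effectively unload the memory controller.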
Another bottleneck in many SMP systems is the front-side bus. The front-side bus connects the processors to the shared memory controller. Processor-to-memory requests travel across the front-side bus, which can become overloaded when three or more high-speed CPUs are added to the same bus. This, in turn, leads to a performance bottleneck and lower system scalability. Large processor caches also help improve performance because they filter many of the requests that must travel over the front-side bus (a processor cache hit does not require a front-side bus memory transaction).
However, even with a large L3 cache, the number of memory transactions that miss the cache is still so great that it often causes the memory controller to bottleneck. This happens when more than three or four processors are installed in the same system.
Non-uniform Memory Access (NUMA) is an architecture designed to improve performance and solve latency problems inherent in large (greater than four processors) SMP systems. The x455 implements a NUMA-based architecture and can scale up to 16 processors using multiple servers.
The servers each contain up to four CPUs and 28 memory DIMMs. Each server also has a dedicated Cache and Scalability Controller, memory controller, and 64 MB of XceL4 Level 4 cache. The additional fourth level of cache greatly improves performance for the four processors in the server because it is able to respond to a majority of processor-to-memory requests, thereby reducing the load on the memory controller and speeding up average memory access times.
As shown in Figure 1-12 on page 23, each server is connected to another server using three independent 3.2 GBps scalability cables. These scalability cables mirror front-side bus operations to all other servers and are key to building large multiprocessing multinode systems.
By mirroring transactions on the front-side bus across the scalability links to other processors, the x455 is able to run standard SMP software. All SMP systems must perform processor-to-processor communication (also known as “snooping”) to ensure that all processors receive the most recent copy of requested data. Since any processor can store data in a local cache and modify that data at any
time, all processor data requests must first be sent to every processor in the system so that each processor can determine if a more recent copy of the requested data is in that processor cache.
Snooping traffic is an important factor affecting performance and scaling for all SMP systems. The overhead of this communication becomes greater with an increase in the number of processors in a system. Also, faster processors result in a greater percentage of time spent performing snooping because the speed of the communications does not improve as the processor clock speed increases, since latency is largely determined by the speed of the front-side bus.
It's easy to see that increasing the number of processors and using faster processors results in greater communication overhead and memory controller bottlenecks. But unlike traditional SMP designs, which send every request from every processor to all other processors, greatly increasing snooping traffic, the x455 has a more optimal design. The XceL4 cache in the x455 improves performance because it filters most snooping operations.
The IBM XceL4 cache improves scalability with more than four processors because it also caches remote data addresses. So, before any processor request is sent across the scalability link to a remote processor, the memory controller and cache controller determine whether the request should be sent at all. To do this, each cache controller keeps a directory of all the addresses of all data stored in all remote processor caches. By checking this directory first, the cache controller can determine if a data request must be sent to a remote processor and only send the request to that specific SMP Expansion Module where the processor caching the requested data is located.
The majority of data requests not found in the XceL4 cache can be sent directly to the memory controller to perform memory look-up without any remote processor-to-processor communication overhead. The directory-based coherency protocol used by the XceL4 cache greatly improves scalability of the x455 because processor-to-processor traffic is greatly reduced.
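The filtering idea can be sketched conceptually in a few lines of Python. This is illustrative only; the real protocol is implemented in the XceL4 cache and scalability controller hardware, not in software:

class DirectoryFilter:
    """Conceptual directory-based snoop filter, one per node."""

    def __init__(self):
        # cache-line address -> set of remote nodes known to hold a copy
        self.directory = {}

    def route_request(self, address):
        holders = self.directory.get(address)
        if holders:
            # Forward the request only to the specific nodes caching the
            # line, instead of broadcasting to every processor.
            return sorted(holders)
        # No remote copy exists: go straight to the local memory controller.
        return "local memory controller"

snoop = DirectoryFilter()
snoop.directory[0x1000] = {2}       # node 2 holds line 0x1000
print(snoop.route_request(0x1000))  # [2]: targeted, not broadcast
print(snoop.route_request(0x2000))  # local memory controller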
By using this cache and providing fast local memory access to all processors, typical bottlenecks that are associated with large SMP systems using a single memory controller design are eliminated.
Chapter 2. Positioning
In this chapter we discuss topics that help you to understand how the x455 can be useful for your business. The topics covered are:
– “Migrating to a 64-bit platform” on page 40
– “Scalable system partitioning” on page 41
– “Operating system support” on page 42
– “Server consolidation” on page 43
– “ServerProven®” on page 44
– “IBM Datacenter Solution Program” on page 45
– “Application solutions” on page 46
– “Why choose x455” on page 52
2.1 Migrating to a 64-bit platform
With the addition of 64-bit computing to the xSeries platform, you can now run large-scale, mission-critical applications with the x455. The Itanium 2 processor is suitable for the next generation of high-end applications that will take advantage of its performance benefits, particularly with regard to its flat addressable memory space, which has a theoretical limit of 16 EB (exabytes).
Migrating existing 32-bit applications can be a time-consuming and complex process particularly for large-scale mission-critical applications. It is also acknowledged that some applications will never need the power that 64-bit processing offers. To assist with this transition and offer backwards compatibility, the Itanium 2 offers support for IA-32 applications.
In addition, Microsoft has introduced an IA-32 Execution Layer software driver to further enhance 32-bit support on Windows Server 2003. The driver works by translating IA-32 code into native Itanium architecture code before it is executed. It is available from:
http://www.microsoft.com/windowsserver2003/64bit/ipf/ia32el.mspx
A similar driver for Linux is also planned.
To fully take advantage of 64-bit computing, it is important that your existing 32-bit applications are re-written and/or recompiled correctly. Using the 32-bit compatibility mode of the Itanium 2 or simply recompiling will not yield the potential benefits. In some cases it will even lead to poorer performance.
32-bit binaries (that is, those that run on Pentium and Xeon processor-based systems) should execute on Itanium 2 without modification. In fact, both 32-bit and 64-bit binaries can run simultaneously on the same processor. This allows a system that is only partially converted to 64 bits to run on the x455. However, unless the applications are recompiled, 32-bit applications will run slower on the Itanium 2 because of the emulation overhead and an inability to take advantage of all the benefits the 64-bit platform offers.
The advantages that 64-bit offers over 32-bit are realized most for applications that can take advantage of its large addressable flat memory, ability to execute instructions in parallel, and raw computational ability particularly for applications that use floating point calculations.
Before migrating, you need to consider many factors. Below are some of these considerations (a simple readiness check is sketched after this list):

– Does your current operating system support 64-bit hardware?
– Do your applications support 64-bit hardware?
– Will your applications migrate to provide true 64-bit performance?
– How long will it take to migrate your applications, and how complex will it be?
– Will your 64-bit platform integrate seamlessly with your existing 32-bit platforms?
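As one small, practical readiness check, the following Python snippet (assuming Python is available on the system under review; it inspects only the interpreter's own process) reports whether it is running as a 32-bit or 64-bit process:

import platform
import struct

# Pointer size in bits: 32 for an IA-32 process, 64 for a native 64-bit
# process (for example, on an Itanium 2 system).
bits = struct.calcsize("P") * 8
print("%d-bit process on %s (%s)" % (bits, platform.machine(), platform.system()))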
2.2 Scalable system partitioning
With the power of Enterprise X-Architecture, the x455 sets the pace in the market for 8-way and 16-way multi-node partitions. It blends XpandOnDemand processor and I/O scalability with OnForever™ availability for mission-critical applications.
The x455 has a modular and flexible design, which allows you to start with a configuration that meets your current needs and expand when your needs increase. This forms part of IBM’s pay-as-you-grow strategy, which allows customers to pay only for what they need. With the x455 you can start with a minimum configuration and scale up through several stages to the maximum configuration, which is designed to meet the most demanding needs. These are listed below.
Minimum configuration:

– 1 x server chassis
– 1 x Itanium 2 processor
– 1 GB RAM
– 6 x PCI-X slots
– 1 x 18.2 GB internal hard disk
Maximum configuration:

– 4 x server chassis
– 16 x Itanium 2 processors
– 224 GB RAM
– 48 x PCI-X slots with the use of two RXE-100s
– 8 x 146.8 GB internal hard disks = 1175 GB (587 GB in RAID-1 configuration)
With the 64-bit architecture, IBM’s X-Architecture, and its scalability options, the x455 has moved into territory that was traditionally the preserve of mini and mainframe systems. This offers the following advantages:
– Customers who have invested in Microsoft technologies can now scale up their applications.
– Investment protection for customers who have skills in xSeries (or other PC hardware).
– Provides a cost-effective alternative for customers who need to invest in large-scale Linux/UNIX® systems and would have traditionally considered a mini or mainframe solution.
Partitioning is invisible to the operating system and therefore does not require special installation techniques or configuration from a software point of view. The operating system is only required on the primary node (Node 1), which means simple but effective management.
At the time of writing, the x455 supported hardware partitioning only. Software partitioning using products such as those from VMware was not supported.
2.2.1 RXE-100 Expansion Enclosure
The RXE-100 forms part of IBM’s XpandOnDemand technology and allows customers to expand their PCI-X I/O capability as demands increase. With the option to purchase a “6 pack” or “12 pack” of PCI-X slots and to share one RXE-100 between two servers, this provides a flexible and cost-effective solution.
2.3 Operating system support
The x455 will only support 64-bit operating systems. Microsoft and most of the Linux vendors have invested a great deal of time and effort in producing 64-bit versions of their operating systems. At the launch of the x455 the following operating systems are expected to be ready immediately or within three months.
Table 2-1 Operating system support at x455 launch

Operating system                          4-way (1 node)   8-way (2 node)   16-way (4 node)
Windows Server 2003, Enterprise Edition   Supported        Supported        No (1)
Windows Server 2003, Datacenter Edition   Supported        Supported        Supported
Red Hat Linux AS 2.1                      Supported        No               No
Red Hat Enterprise Linux 3.0              Supported (2)    Supported (2)    No
SUSE LINUX Enterprise Server 8.0          Supported        Supported (2, 3) No

Notes:
1. Enterprise Edition supports a maximum of eight processors.
2. Support will be after the general availability (GA) of the x455. See:
   http://www.pc.ibm.com/us/compat/nos/matrix.shtml
3. Support approximately 30 days after GA.
2.4 Server consolidation
With its scalability options, the x455 is ideally suited to server consolidation. Enterprise applications such as those used for large databases, enterprise resource planning (ERP), customer relationship management (CRM) and messaging/collaboration have traditionally been distributed across several servers. The x455 now allows you to consolidate these applications onto less hardware.
Consolidation allows businesses to centralize their computing workloads and reduce cost, complexity, network traffic volumes, and management overhead. The x455 protects the investment you have already made by allowing you to expand to two and then four nodes. Resources such as processors, memory, hard disks, network cards, monitors, and even CD-ROMs are accessible to the operating system and applications from all the nodes.
Server consolidation has many interpretations and covers many areas. There is no single strategy or methodology that can be applied that will yield an exact outcome, as each environment is different. However, there are broad categories that may have particular importance where benefits can be realized. Below is a summary of some of the different types of server consolidation and the potential benefits.
Table 2-2 Server consolidation strategies

Type of consolidation   Potential benefit
Centralization          Reduced management
                        Lower operational costs
                        Greater security and monitoring
                        Reduced network traffic
                        Reduced staff costs
Physical                Reduced facilities costs
                        Reduced hardware and software costs
                        Lower operational costs
                        Reduced management
Data                    Reduced storage costs
                        Better storage management
                        Better data integrity
                        Better data access
                        Improved data recovery
                        Improved disaster recovery
Application             Reduced administration
                        Increased reliability and availability
                        Reduced software and licence costs
                        Lower operational costs
The x455 can play a role in all of these areas and help the IT manager to reduce costs while providing a better level of service.
2.5 ServerProven®
Originally announced in 1996, the IBM ServerProven Program enhances the position of xSeries servers as one of the most open server systems on the market today. The program establishes a relationship between IBM and leading vendors of options and applications often selected for installation into xSeries servers.
The program is designed to:
Test leading vendors' products for compatibility with xSeries servers.Improve ease of installation.Reduce total cost-of-ownership.
IBM ServerProven Program participants agree to work closely with IBM engineers during product test cycles and over product life cycles. This means that the IBM ServerProven Program is a commitment to resolve problems before they get to you.
ServerProven participants, in conjunction with IBM, test a set of solution building blocks—specific devices and applications—in one or more operating system environments. This allows customers to build a complete server solution. Compatibility testing by IBM and its program participants will reduce cost-of-ownership by reducing installation and setup time.
When you see the ServerProven logo, you can be assured that the product displaying this emblem has been tested and certified to run reliably on the xSeries hardware.
For additional information on the IBM ServerProven program, refer to the following Web site:
http://www.pc.ibm.com/ww/eserver/xseries/serverproven
For more information on the participants of the IBM ServerProven program, see:
http://www.pc.ibm.com/us/compat/serverproven
2.6 IBM Datacenter Solution Program
Windows Server 2003, Datacenter Edition is only available through an Original Equipment Manufacturer (OEM) partner, such as IBM. When choosing Datacenter Edition, a customer has the ability to choose the best and most appropriate qualified OEM to provide all hardware, services, and support. These components are essential to building and sustaining any high-availability environment. OEMs such as IBM are part of the Windows Datacenter High Availability Program. This program is based on best practices gathered since the inception of Microsoft Datacenter Server in September 2000. To be members of this program, OEMs and service providers must follow a stringent set of reporting and audit requirements that have been defined by Microsoft.
The IBM Datacenter Solution Program provides a comprehensive set of hardware, service, and support offerings for Microsoft Datacenter 2003 Server. IBM has the ability and expertise to provide our customers with all the components required when implementing this high-availability, highly scalable solution.
Information about the IBM Datacenter Solution Program is available from:
http://www.pc.ibm.com/ww/eserver/xseries/windows/datacenter.html
2.7 Application solutions
With the x455 in the IA-64 environment, you are ready to deploy even larger implementations of enterprise solutions.
As companies' performance demands grow, 64-bit technology becomes an increasingly attractive option, due to increased memory addressability and true parallel architecture. There are a number of ways the x455 can be deployed in specific application solution environments. These include:
– Database applications
– Business logic
– e-Business and security transactions
– In-house developed compute-intensive applications
– Science and technology
Figure 2-1 xSeries 455-based solutions (solution areas layered on the operating system and the x455: database applications, BI/ERP/SCM/CRM, e-Business, in-house developed applications, science and technology)
2.7.1 Database applications
Sixteen-way x455 configurations can be used as database servers and application servers, providing a high-performance, reliable platform. These configurations require an external storage enclosure or SAN, depending on the size of the database, which is driven by the number of users.
Database applications with memory-intensive workloads that require working data sets larger than 4 GB to be loaded in memory will benefit from the larger memory support of the 64-bit platform.
The following is an example from the field. Microsoft SQL Server Enterprise Edition uses Advanced Windowing Extensions (AWE) memory only for the buffer pool. The AWE API allows applications to use up to 64 GB of RAM. However, due to the AWE mapping overhead, it is not practical to try to use it for sort areas, procedure cache, or any other type of work area. Many applications do make heavy use of this extra-large buffer pool but will not fully exploit its benefits. The most efficient solution in such cases is to move the applications onto a 64-bit database server, which can access memory above 4 GB as a flat address space without the need to move data in and out of a 4 GB memory area.
Even at the same clock speed a 64-bit processor will move twice as much data as a 32-bit platform. With the improvement Intel has made to the way the data is handled and the additional cache, you should see a noticeable performance increase.
The database server will also benefit from a larger 3 MB third-level and 64 MB XceL4 cache. With such large cache, the need to go to memory or disk for database transaction elements is greatly reduced, and this directly implies a performance increase, faster access to data, and improved throughput. Itanium 2 systems are likely to be able to hold database transaction records in cache during the entire transaction, which enables the I/O portion of the transaction to occur at speeds faster than memory access.
In-memory databases
Architectures with 64-bit addresses can store reasonably large databases in memory and access them with little or no paging overhead. This is often done for databases that are constantly being accessed and for databases that serve as the basis for complex analysis. The theoretical maximum of 16 Exabytes for memory has not yet been tested, but multi-Gigabyte databases are frequently run on 64-bit machines.
A major challenge to providing high-performance access to database information is the time it takes to access disk drives. When disk access is required, disk access times add what can be an intolerable delay to efficient information access and utilization. Access to disk is typically hundreds to thousands of times slower than access to memory.
Today, the disk access time challenge can be overcome. The price of random access memory has come down to affordable levels for many systems. This price reduction means that an entire database can be stored in system memory if the system processors can provide a very large linear address space.
A processor that supports a 64-bit address space may provide access to in-memory databases that range from tens of Gigabytes to thousands of Terabytes. In contrast, traditional 32-bit processors most often only address a maximum of 4 GB.
Multiple databases that required large amounts of memory and previously had to be distributed across several machines can now be consolidated onto fewer machines, taking advantage of the larger addressable linear address space.
Key database software available for Itanium 2 systems includes:

– IBM DB2® Universal Database™ 8.1 (both Linux and Windows versions)
– Microsoft SQL Server 2000
– Oracle 9i Database (Linux/Windows)
IBM DB2 Universal Database
The DB2 Universal Database offers increased performance, reliability, and scalability on Windows and Linux platforms by exploiting the IA-64 servers built around the Itanium Processor Family of chips from Intel. The 64-bit version of DB2 delivers significantly higher levels of performance and reliability by exploiting the capabilities of the new systems.
As the adoption of IA-64 architecture continues to move forward with most major server system OEMs using Windows and Linux operating systems, IBM DB2 Universal Database leads the software industry in providing business database application content to the IA-64 server platform. IBM DB2 Universal Database is currently enabling the most robust and varied database software application performance on the IA-64 architecture.
By investing in IBM DB2 Universal Database application software designed for the IA-64 architecture, starting with the Itanium microprocessor, companies will be assured of having a solid foundation for their electronic business far into the future.
Client/server configurations supported in this release are:
– 64-bit client to 64-bit DB2 UDB engine
– 64-bit client to 64-bit DB2 Connect™ gateway to DRDA® host
– 64-bit DB2 Connect Personal Edition to DRDA host
2.7.2 Business logic
More and more enterprise applications such as ERP, SCM, CRM, and BI are released or announced to be released on a 64-bit platform. Such applications process large amounts of data and the large flat memory model means that this
processing will be more efficient. That, combined with up to four Itanium 2 processors per node and a highly efficient cache system, makes the x455 an ideal choice.
Market leaders offer 64-bit optimized versions of their enterprise applications for use on the x455 now, including SAS Release 9.0 (Windows version) and SAP R/3 4.6C (Windows/Linux), among others.
Independent software vendors such as JD Edwards, Baan/Invensys, i2 Technologies, PeopleSoft, Veritas, Computer Associates, BMC Software, and many others already offer a variety of products available in 64-bit version.
Business intelligence
Business intelligence (BI) is a broad category of applications and technologies for gathering, storing, analyzing, and providing access to data to help enterprise users make better business decisions. BI applications include the activities of decision-support systems, query and reporting, online analytical processing (OLAP), statistical analysis, forecasting, and data mining.
The x455 brings high I/O bandwidth and performance to handle compute-intensive BI applications. A world-class floating-point engine and the ability to address much larger amounts of memory speed up the data-intensive BI applications that help companies increase employee productivity.
Enterprise Resource Planning
ERP is an industry term for the broad set of activities supported by multi-module application software that helps manufacturers manage the important parts of their businesses: Product planning, procurement, inventory maintenance, supplier interaction, customer service, and order tracking. ERP can also include application modules for the finance and human resources aspects of a business. Typically, an ERP system uses or is integrated with a relational database system.
Key server attributes for ERP applications are availability, scalability, and performance. The x455, with its flat memory model, Itanium 2 processors, and Enterprise X-Architecture technology such as Active Memory and XceL4 Server Accelerator Cache, provides a robust base on which to build and implement successful ERP solutions.
Supply chain management
Supply chain management (SCM) is the oversight of materials, information, and finances as they move in a process from supplier to manufacturer to wholesaler to retailer to consumer. Supply chain management involves coordinating and integrating these flows both within and among companies.
The x455 is a preferred platform for 64-bit SCM applications. The x455 offers a range of leading technologies that will help to deliver the uptime required for business-critical applications at the lowest price/performance ratio. The x455 provides the high-availability features that customers look for in servers to power their SCM solutions.
Customer relationship management
Customer relationship management (CRM) is an information-industry term for methodologies, software, and usually Internet capabilities that help an enterprise manage customer relationships in an organized way.
The x455 provides a performance-based foundation upon which customers can build and deploy CRM solutions in which the x455 will most likely be implemented as an application server and/or a database server.
2.7.3 e-Business and security transactions
e-Business is the use of Internet technologies to improve and transform key business processes.
This includes Web-enabling core processes to strengthen customer service operations, streamlining supply chains, and reaching existing and new customers. In order to achieve these goals, e-business requires a highly scalable, reliable, and secure server platform.
The x455 is a strong candidate for an application integration server that integrates the back-end data with the servers containing end-user or client programs. This involves data transformation, process flow, and other capabilities, thus allowing companies to integrate applications and other data sources. These types of servers benefit from the processing power offered by the x455. The performance of the server shows promise in Web servers that perform secure e-commerce transactions.
When using x455 as a Web server for your e-commerce solutions, you will benefit from its highly parallel computation that can handle higher volumes of secure data transmissions using complex encryption/decryption techniques.
The Itanium 2 micro-architecture is a perfect match for the significant compute power requirements of Secure Sockets Layer (SSL), providing protection without incurring delay. SSL uses advanced public key encryption technology to safely move sensitive information across the Internet.
In today’s complex environments, encryption can occur at two levels simultaneously. For example, the IPSec standard performs encryption on every packet sent over the network. These packets frequently contain data that is
encrypted itself. As a result, enterprises need platforms that can encrypt and decrypt data very quickly. The x455 is suited to this task.
To compare the performance in integer and floating-point operations with other processors on the market, see SPECint_base2000 and SPECfp_base2000 benchmarks at:
http://www.spec.org
2.7.4 In-house developed compute-intensive applications
For developers, the 64-bit architecture allows creation of applications using a familiar programming model that encourages the development of a wide-ranging set of enterprise solutions. The experiences gained on the IA-32 platform can be reused when creating applications for Itanium 2-based systems. Thus developers are not required to start from scratch to make their transition to the 64-bit world. In most cases, existing 32-bit code will not require a complete rewrite, but only recompilation. This, however, should not be used as a long-term and easy option if re-writing will yield significant performance benefits.
IA-64 parallelism is managed by the compiler itself. Application development does not require special techniques before it is compiled. Currently available compilers optimized for Itanium 2 include:
– Intel C++ Compiler
– Intel Fortran Compiler
– Microsoft SDK/C/C++ Compiler
2.7.5 Science and technology industries
Science and technology industries (S&TC) require the processing of large and complex calculations to solve challenging problems.
While S&TC industries are characterized by compute-intensive workloads that require special server characteristics to meet their performance needs, each industry, such as aerospace, automotive, petroleum, research, or weather, also has its own set of S&TC applications, each demanding different computing solutions.
The Itanium 2-based systems are an ideal platform for compute-intensive applications. It is for this reason that the Itanium and Itanium 2 processors were chosen for the largest American-built supercomputer, the TeraGrid project sponsored by the National Center for Supercomputing Applications. When finally deployed with more than 3,000 Itanium 2 chips, TeraGrid will be capable of more than 14 trillion floating-point operations per second (14 teraflops).
Itanium 2 features, such as increased system memory bandwidth, total number of 328 onboard registers (including 128 floating-point registers), and thus high floating-point performance, speed up calculations and data analysis in S&TC applications.
2.8 Why choose x455
Modern enterprise applications require servers with significant processing power that are able to process large quantities of data and store numerous transactions in cache, and do it all with the highest possible availability and reliability. Systems that cannot deliver at this level are destined to serve as solutions in non-critical settings.
IBM, as a market leader, has been delivering top-of-the-line solutions for the enterprise environment for years, and the xSeries 455 is evidence that this leadership continues with IA-64 technology. With close cooperation between IBM and Intel, the xSeries 455 will continue to be an excellent platform for the most demanding 64-bit enterprise applications.
The x455 combines and builds on a number of technologies developed in previous xSeries models and other IBM developed technology:
- The mainframe-inspired x445, which provides reliable and scalable partitioning.
- The x380, the first 64-bit xSeries server targeted, tested, and proven by software and hardware developers.
- The x450, the first xSeries server aimed at large, demanding, mission-critical applications.
- The IBM pSeries®, iSeries™, and zSeries® servers. Unlike most of our competitors, IBM has many years and a vast amount of experience with 64-bit technology.
All of these advantages make the x455 an enterprise-level server and a viable alternative to RISC-based architectures, while protecting many years of investments and knowledge gained from IA-32 platform development.
The Itanium 2 processor used in the x455 is based on the Explicitly Parallel Instruction Computing (EPIC) architecture, which incorporates a number of new technologies, features, and capabilities that make it ideal for the high-end server and workstation markets. EPIC allows users to take advantage of its large memory addressability and parallel execution capabilities. The chip also supports intelligent prediction and speculation of events to reduce redundancy and improve performance. The Itanium 2's floating-point engine enhances performance for complex computations that are required for data-mining, scientific, and technical applications.
The Itanium 2 processor is the second in a family of Intel 64-bit enterprise-class processors. For more information about the Intel Itanium 2 processor, see:
http://www.intel.com/products/server/processors/server/itanium2/
Intel Itanium Architecture-based microprocessors have the following features:

Advanced parallelism
High performance requires parallel execution, which is either very limited or hard to achieve in today's architectures. Traditional PC systems are not designed for parallelism, which is critical for current demanding applications (for example, databases and application servers).
Today’s processors using limited parallelism are often 60 percent idle. When source code is compiled on today’s systems, the result of the compilation is sequential machine code. A regular (non-Itanium processor family) compiler takes sequential code, examines and optimizes it for parallelism, but then has to regenerate sequential code in such a way that the processor can re-extract the parallelization from it. The processor is then required to read this implied parallelism from the machine code, rebuild it, and execute it. The parallelism is there, but it is not as obvious to the processor, and more work has to be done by the hardware before it can be utilized.
Itanium 2 supports parallelism on multiple levels. Instruction-level parallelism (ILP) is the ability to execute multiple bundles (three instructions in a bundle) at the same time. The Itanium 2 micro-architecture can deliver faster performance by executing multiple bundles per clock cycle. Parallelism, both at the instruction level and at the SMP system level, permits more efficient use of virtually all system resources to enable improved scalability.
The Itanium processor's instruction-level parallelism helps ensure the scalability necessary to manage large data warehouses.
Large memory addressability
Another key advantage is that 64-bit operating systems can support far more physical memory than a 32-bit operating system. The limit for directly addressable memory on a 32-bit architecture is 4 GB (2^32 bytes); on a 64-bit architecture it is 16 exabytes (EB), or 2^64 bytes. This genuinely takes the processor to a level where other hardware and software will take years to match its raw ability to address and handle data.
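To make these limits concrete, here is the arithmetic as a small Python snippet (binary units):

# Directly addressable memory: 32-bit vs. 64-bit architectures.
GB = 2**30
EB = 2**60

print(2**32 // GB, "GB")   # 4 GB  - the 32-bit limit
print(2**64 // EB, "EB")   # 16 EB - the 64-bit limit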
Note: Performance should no longer be measured by just the speed in MHz, but also by the degree of parallelism that the processor achieves.
The Itanium 2 processor cache subsystem has three levels, all of which are on-die. In addition, IBM provides a fourth level of cache to manage the data and logic between the processors.
The increased physical memory provides the following benefits for applications:

– Each application can support more users. For a comparison of the maximum number of connected users in SAP for various hardware platforms (including Itanium 2), visit the following link:

http://www.sap.com/benchmark/

– Each application has better performance, and more applications can run simultaneously and remain completely resident in the system's main memory. This reduces or eliminates the performance penalty of swapping pages to and from disk.

– Each application has more memory for data storage and manipulation. Databases can store more of their data in the physical memory of the system. Data access is faster because disk reads are not necessary.

– Applications can manipulate large amounts of data easily and more reliably. Video composition and modeling for scientific and financial applications benefit greatly from memory-resident data structures that are not possible on 32-bit operating systems.
For the enterprise customer, a larger physical memory subsystem provides access to more data more quickly from system memory, since much more data can be held near the processor for faster calculations and data analysis. Large memory addressability allows larger file system caches for read-ahead and write-behind I/O operations, and also allows retention of large amounts of data in memory instead of repeatedly reading the data from disk. Combined with high-memory bandwidth and a variety of performance optimization techniques, this solution provides the performance the enterprise market needs.
Chapter 3. Planning
In this chapter we discuss topics that you need to consider before you finalize the configuration of your x455 system and before you begin implementing the system. The topics covered are:
- “System hardware” on page 56
- “Cabling and connectivity” on page 65
- “Storage considerations” on page 79
- “Rack installation” on page 85
- “Power considerations” on page 87
- “Operating system support” on page 87
- “IBM Director support” on page 89
- “Solution Assurance Review” on page 90
3.1 System hardware
The x455 provides a scalable and flexible hardware platform. There are a number of important aspects of the system hardware to consider when planning your configuration. These are discussed in this section.
- “Processors” on page 56
- “Memory” on page 57
- “PCI-X slot configuration” on page 61
- “Broadcom Gigabit Ethernet controller” on page 64
3.1.1 Processors
xSeries 455 servers combine copper-based, XA-64 Enterprise X-Architecture technologies with 64-bit, Intel Itanium 2 processors.
Table 3-1 lists the processors standard in each x455 model and the part number of additional processors.
Table 3-1 Processors in each x455 model

Model     Standard CPUs  L2 cache  L3 cache  Max SMP   Extra CPUs
8855-1RX  One 1.3 GHz    256 KB    3 MB      Four-way  73P7076
8855-2RX  Two 1.4 GHz    256 KB    4 MB      Four-way  73P7077
8855-3RX  Two 1.5 GHz    256 KB    6 MB      Four-way  73P7078
The L2 and L3 cache run at the full speed of the processor.
One, two, three, or four processors can be installed in the x455, and all must be the same speed and cache size. Each processor option includes the processor with heatsink (pre-assembled) and its associated power module.
When you have a multi-node configuration, all nodes must have four processors installed. That is, only 8-way and 16-way are supported.
Processors must be installed in a specific order, as shown in Figure 3-1 on page 57. Special tools are required to install the processors and these are included in the option. We recommend that you update the system abstraction layer/extensible firmware interface (SAL/EFI) code after the installation. To download the most current level of SAL/EFI code for the server, go to the x455 device driver page:
http://www.ibm.com/pc/support/site.wss/MIGR-53575.html
Figure 3-1 Processor installation order
3.1.2 Memory
A maximum of 28 DIMMs may be installed in the memory-board assembly of the x455, depending on the size of the DIMMs used. Supported DIMMs are listed in Table 3-2.

Table 3-2 Supported DIMMs

Size    Description                 Part number  Max installable
512 MB  PC2100 CL2.5 ECC DDR SDRAM  33L5038      28
1 GB    PC2100 CL2.5 ECC DDR SDRAM  33L5039      28
2 GB    PC2100 CL2.5 ECC DDR SDRAM  33L5040      28

Go to the ServerProven site for the latest information on supported memory modules:

http://www.pc.ibm.com/us/compat/machines/x455.html

Memory DIMMs must be installed in matched pairs (size and technology), in the order shown in Table 3-3.

Table 3-3 DIMM order

Pair  Port 1 DIMM slots    Pair  Port 2 DIMM slots
1     1 and 14             2     15 and 28
3     2 and 13             4     16 and 27
5     3 and 12             6     17 and 26
7     4 and 11             8     18 and 25
9     5 and 10             10    19 and 24
11    6 and 9              12    20 and 23
13    7 and 8              14    21 and 22
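As an illustration of these rules, the following sketch encodes Table 3-3 and checks that a pair is matched before reporting its slots. The DIMM descriptions in the example are hypothetical, and the sketch is illustrative, not a configuration tool.

# Table 3-3 as a map: pair number -> (slot, slot).
# Odd pairs belong to port 1, even pairs to port 2.
PAIR_SLOTS = {
    1: (1, 14),   2: (15, 28),
    3: (2, 13),   4: (16, 27),
    5: (3, 12),   6: (17, 26),
    7: (4, 11),   8: (18, 25),
    9: (5, 10),  10: (19, 24),
    11: (6, 9),  12: (20, 23),
    13: (7, 8),  14: (21, 22),
}

def slots_for_pair(pair, dimm_a, dimm_b):
    """DIMMs must be installed in matched pairs (size and technology)."""
    if dimm_a != dimm_b:
        raise ValueError(f"pair {pair}: {dimm_a!r} and {dimm_b!r} do not match")
    return PAIR_SLOTS[pair]

print(slots_for_pair(1, "1GB PC2100", "1GB PC2100"))  # (1, 14)
print("max memory:", 28 * 2, "GB")                    # 28 slots of 2 GB DIMMs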
Figure 3-2 shows the DIMM locations and the way the DIMMs are divided into the two memory ports.
Figure 3-2 DIMM locations on the memory-board assembly
Memory mirroring
Memory mirroring is supported by the x455 for increased fault tolerance and high levels of availability.
Key configuration rules relating to memory mirroring are:
- Memory mirroring must be enabled in the System Setup in the EFI (it is disabled by default). See “Enabling memory mirroring” on page 118.
- Enabling memory mirroring halves the amount of memory available to the operating system.
- Both ports in a memory-board assembly must have the same total amount of memory. Partial mirroring is not supported. If the same total amount of memory is not detected at boot, memory mirroring is automatically disabled by the system (a sketch of this rule follows Table 3-4).
- The DIMMs in partner banks across the two ports must be identical.
- You must install two pairs of DIMMs at a time. These four DIMMs (known as a bank) must be identical. Table 3-4 shows the pairs that are in each bank.
Table 3-4 DIMMs that form a bank

Bank  DIMM pairs (see Table 3-3)
1     1 and 2 (DIMMs 1, 14, 15, 28)
2     3 and 4 (DIMMs 2, 13, 16, 27)
3     5 and 6 (DIMMs 3, 12, 17, 26)
4     7 and 8 (DIMMs 4, 11, 18, 25)
5     9 and 10 (DIMMs 5, 10, 19, 24)
6     11 and 12 (DIMMs 6, 9, 20, 23)
7     13 and 14 (DIMMs 7, 8, 21, 22)
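A minimal sketch of the port-total rule mentioned above; it checks only the equal-totals condition, and the identical-partner-bank rule still applies separately:

def mirrored_usable_mb(port1_mb, port2_mb):
    """Return the memory (MB) the OS sees with mirroring enabled, or None
    if the port totals differ, in which case the system disables mirroring
    at boot (partial mirroring is not supported)."""
    if sum(port1_mb) != sum(port2_mb):
        return None
    # Mirroring halves the memory available to the operating system.
    return sum(port1_mb)

print(mirrored_usable_mb([512] * 4, [1024] * 2))  # 2048 - balanced ports
print(mirrored_usable_mb([512] * 4, [1024] * 4))  # None - mirroring disabled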
Hot-swap memory
The hot-swap memory feature allows you to replace failed DIMMs without turning off the server. Hot-swap memory is operating system independent.
The rules for hot-swap are as follows:
- Memory mirroring must be enabled in the Configuration/Setup utility.
- The configuration rules for memory mirroring apply here, specifically that the DIMMs in partner banks across the two ports must be identical. See “Memory mirroring” on page 58.
- When replacing a failed DIMM, you do not have to replace the other DIMM within the bank. Ensure that the new DIMM is the same type and size as the other DIMM within the bank.
If a problem with a DIMM is detected, light path diagnostics will light the system-error LED on the front of the server, indicating that there is a problem and guiding you to the defective DIMM. When this occurs, first identify the defective DIMM and then remove and replace the DIMM.
Complete the following steps to replace a DIMM in your server with the server turned on:
1. Open the cover and verify that the memory hot-plug enabled LED on the DIMM access door is lit before removing and replacing the DIMM.
2. Open the DIMM access door and verify that the memory port LED is off before replacing a DIMM. Then open the retaining clip on each end of the DIMM connector and remove the DIMM from the server.
3. Install the new DIMM. Take the usual anti-static precautions.
4. Close the DIMM access door.

Note: The memory hot-plug enabled LED flashes to indicate that data is being mirrored on the replacement DIMMs. Wait until the LED stops flashing before you hot-replace DIMMs again.
Memory performance considerations
As shown in the server block diagram in Figure 1-4 on page 8, there are two memory ports to the memory controller, each with a throughput of up to 3.2 GBps. These ports correspond to the ports as shown in Figure 3-2 on page 58. The front-side bus of the processors is 6.4 GBps, so maximum performance is achieved when both memory ports are used to access memory simultaneously.
Consequently, for maximum performance, you should install four DIMMs of the same size at a time into a bank (see Table 3-4 on page 59). In this configuration, all memory addresses are spread across all four DIMMs in the bank and, when accessed, both memory ports are used.
Maximum performance can also be achieved with DIMMs of different sizes, as long as the total memory in port 1 matches the total memory in port 2. For example, if you have six DIMMs (four 512 MB DIMMs and two 1 GB DIMMs for a total of 4 GB), install all four 512 MB DIMMs (2 GB) in port 1 and the two 1 GB DIMMs (also 2 GB) in port 2.
If there is a mismatch between the total memory in port 1 and the total memory in port 2, then there will be a delay when accessing the upper memory, for example, if you have two 512 MB DIMMs and two 1 GB DIMMs.
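The following sketch illustrates the balancing idea with the six-DIMM example above, treating each matched pair as a unit. The greedy split is only an illustration under these assumptions, not an official placement tool.

def balance_ports(pair_sizes_mb):
    """Greedily assign matched DIMM pairs to the two memory ports so the
    port totals end up equal where possible, letting both 3.2 GBps ports
    be used simultaneously."""
    port1, port2 = [], []
    for size in sorted(pair_sizes_mb, reverse=True):
        (port1 if sum(port1) <= sum(port2) else port2).append(size)
    return port1, port2

# Four 512 MB DIMMs (two pairs of 1024 MB combined) and two 1 GB DIMMs
# (one pair of 2048 MB combined), 4 GB in total:
p1, p2 = balance_ports([1024, 1024, 2048])
print(p1, "->", sum(p1), "MB")   # [2048] -> 2048 MB
print(p2, "->", sum(p2), "MB")   # [1024, 1024] -> 2048 MB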
Important: The XceL4 Accelerator Cache has been designed with commercial workloads in mind that tend to have high cache hit rates. This effectively boosts performance and compensates in part for the 3.2 GBps bandwidth between the Cache and Scalability Controller and Memory Controller.
3.1.3 PCI-X slot configuration
As shown in Figure 1-4 on page 8, there are six PCI-X slots internal to the x455. These six slots are implemented using four PCI buses:
- Bus A (slot 1 and slot 2): Supports two 64-bit adapters at up to 66 MHz.
- Bus B (slot 3 and slot 4): Supports two 64-bit adapters at up to 100 MHz, or one adapter at up to 133 MHz, provided the other slot is vacant.
- Bus C (slot 5): Supports one 64-bit adapter at up to 133 MHz.
- Bus D (slot 6): Supports one 64-bit adapter at up to 133 MHz.
These slots can accept adapters rated at speeds ranging from 33 MHz to 133 MHz.
You should also consider the following:
- Each adapter has a maximum rated speed. Each bus also has a maximum rated speed.
- Installed adapters in a single bus will operate at the slowest of three speeds:
  – The rated speed of adapter 1
  – The rated speed of adapter 2 (if the bus the adapter is installed in has two slots)
  – The maximum speed of the bus
- Bus B supports one adapter at up to 133 MHz or two adapters at up to 100 MHz.
- 32-bit adapters can be installed in any of the slots and will run in 32-bit mode.
- 32-bit and 64-bit adapters can coexist in 64-bit slots in the same bus. The 32-bit adapters will run in 32-bit mode, and the 64-bit adapters will run in 64-bit mode.
As extreme configuration examples, you could configure either of the following:
- Six 33 MHz PCI adapters, all operating at 33 MHz
- Six 133 MHz PCI-X adapters, with two operating at 133 MHz (buses C and D), two at 100 MHz (bus B), and two at 66 MHz (bus A)
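These rules reduce to a simple minimum calculation, sketched below; the bus table merely restates the slot layout described above.

BUS_MAX_MHZ = {"A": 66, "B": 133, "C": 133, "D": 133}

def effective_speed(bus, adapter_speeds_mhz):
    """All adapters on a bus run at the slowest of the adapters' rated
    speeds and the bus limit; bus B falls back to 100 MHz when both of
    its slots are populated."""
    limit = BUS_MAX_MHZ[bus]
    if bus == "B" and len(adapter_speeds_mhz) == 2:
        limit = 100
    return min([limit, *adapter_speeds_mhz])

print(effective_speed("A", [133]))        # 66  - bus A tops out at 66 MHz
print(effective_speed("B", [133]))        # 133 - slot 3 or 4 left vacant
print(effective_speed("B", [133, 133]))   # 100 - both bus B slots in use
print(effective_speed("C", [33]))         # 33  - adapter rating governs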
Tip: Take the time to understand these rules and to select the best slots for your adapters. Incorrect choices can result in a loss of PCI adapter performance.
Table 3-5 summarizes the supported adapter speeds. Take into account the speed reductions when there are two adapters installed in a bus, as described above.
Table 3-5 Supported adapter speeds in each slot

Slot  Bus  Width (bits)  Supported adapter speed (MHz)
1     A    32 or 64      33 or 66
2     A    32 or 64      33 or 66
3     B    32 or 64      33, 66, or 100 (133 as long as no adapter is in slot 4)
4     B    32 or 64      33, 66, or 100 (133 as long as no adapter is in slot 3)
5     C    32 or 64      33, 66, 100, or 133
6     D    32 or 64      33, 66, 100, or 133
The physical location of these slots in the server is shown in Figure 3-3 on page 63.
Important: A PCI-X and a PCI adapter can be installed in slots on the same bus. However, those two adapters will both operate in PCI mode.
In addition, if you have a PCI-X adapter installed, you cannot hot-add a PCI adapter to the same bus. This is because with just the PCI-X adapter installed, the bus is running in PCI-X mode, and you cannot hot-add a PCI adapter into a bus that is in PCI-X mode.
Figure 3-3 PCI-X slots in the x455 (slots 1 and 2: bus A, 66 MHz; slots 3 and 4: bus B, 100 MHz; slot 5: bus C, 133 MHz; slot 6: bus D, 133 MHz)
Other configuration information:
- The x455 server supports connection to an RXE-100.
- Video adapters are not supported.
- The PCI slots support 3.3 V adapters only.
- The on-board LSI SCSI/RAID controller and the ServeRAID adapter are supported for connection to and booting from the internal drive bays.
- Some long adapters have extension handles or brackets installed. Before installing the adapter, you must remove the extension handle or bracket.
- The system scans PCI-X slots to assign system resources. The system attempts to start the first device found. The search order is:
  a. CD-ROM
  b. Disk drives
  c. Integrated SCSI devices
  d. Integrated Ethernet controllers
  e. x455 PCI-X slots (in the order 1, 2, 3, 4, 5, 6)
  If an RXE-100 is attached, the order is:
  a. CD-ROM
  b. Disk drives
  c. Integrated SCSI devices
  d. Integrated Ethernet controller
  e. x455 PCI-X slots (in the order 1, 2, 3, 4, 5, 6)
  f. RXE-100 slots (A1, A2, A3, A4, A5, A6, B1, B2, B3, B4, B5, B6)

Important: 5.0 V adapters are not supported.

Tip: This scan order is different from that of the x450.
3.1.4 Broadcom Gigabit Ethernet controller
The x455 offers a dual Gigabit Ethernet controller integrated standard in the system: a single Broadcom BCM5704 10/100/1000 BASE-T controller, providing two ports, on a PCI 64-bit 66 MHz bus.
The BCM5704 supports full- and half-duplex performance at all speeds (10/100/1000 Mbps, auto-negotiated) and includes integrated on-chip memory for buffering data transmissions, and dual onboard RISC processors for advanced packet parsing and backwards compatibility with 10/100 devices. The Broadcom controller also includes software support for Wake on LAN, failover, layer-3 load balancing, and comprehensive diagnostics.
Category 5 or better Ethernet cabling is required with RJ-45 connectors. If you plan to implement a Gigabit Ethernet connection, ensure that your network infrastructure is capable of the necessary throughput to match the server’s I/O capacity.
Adapter teaming
The Broadcom controller is capable of participating in an adapter team for the purposes of failover, load balancing, and port trunking. The choice of adapters to team with the onboard controller depends on whether you have a copper-only network or a mixed copper/fiber network. Our recommendations are:
- If you have a copper Gigabit environment, use the Broadcom-based NetXtreme 1000T Ethernet adapter, part 31P6301. Alternatively, use the Intel PRO/1000 XT Server adapter, part 22P6801.
- If you have a mixed fiber/copper Gigabit server switch network, use the Broadcom-based NetXtreme 1000 SX Fiber Ethernet adapter, part 22P7801.

You can also team any of the onboard Gigabit ports with 10/100 cards such as 06P3601 and 22P4901, but this is not a recommended configuration. You can also team with the older Gigabit fiber card, 06P3701.
Adapter teaming and failover work by using software additional to the adapter driver to provide the failover functionality. This software is operating system dependent. Detailed instructions for installing the individual driver and failover packages are available with the driver software.
For the latest network adapter drivers and software for the x455 server, go to the xSeries support page:
http://www.pc.ibm.com/support
For details about compatibility, see the ServerProven LAN adapter page:
http://www.pc.ibm.com/us/compat/lan/matrix.html
3.2 Cabling and connectivity
There are a number of unique factors to consider when cabling the x455 server. These are discussed in this section:
- “SMP Expansion connectivity” on page 67
- “Remote Expansion Enclosure connectivity” on page 70
- “Remote Supervisor Adapter connectivity” on page 76
- “Serial connectivity” on page 78
The rear panel of the x455 showing the locations of cable connectors is illustrated in Figure 3-4 on page 66. For more details about ports on the Remote Supervisor Adapter, refer to “Remote Supervisor Adapter connectivity” on page 76.
Figure 3-4 x455 rear view (callouts: system power connectors 1 and 2, RXE Expansion Port A and B connectors, RXE Management Port connector, Remote Supervisor Adapter connectors and LEDs, Ethernet LEDs, Gigabit Ethernet connectors, video connector, USB 1 and USB 2 connectors, SCSI connector, serial connector, and SMP Expansion Port 1, 2, and 3 connectors)
Of note are the following items:
- There are no PS/2 keyboard or mouse ports on the x455. Only USB devices are supported.
- For attachment to an Advanced Connectivity Technology (ACT) KVM switch, a new USB Conversion Option (part number 73P5832) can be used. This smart cable is plugged into the USB and video ports on the server and it converts KVM signals to CAT5 signals for transmission over a CAT5 cable to either a Remote Console Manager (RCM) or Local Console Manager (LCM). The x455 can then be managed on the same set of switches as PS2- or C2T-based KVM servers.
- The x455 includes a dedicated external serial port. During the boot process, the port acts as an auxiliary console for the EFI. After the operating system has booted, the port is dedicated to the operating system as COM1.
- The RXE Expansion Ports provide connectivity to an RXE-100. Either a single cable in port A or two cables to both ports are supported. Using two cables provides redundancy as well as additional throughput.
- There are two RJ-45 ports providing Gigabit Ethernet connectivity, as shown in Figure 3-4. A third RJ-45 Ethernet connector is also on the Remote Supervisor Adapter. This connector is only used to connect to the Remote Supervisor Adapter for out-of-band management, as described in “The Remote Supervisor Adapter” on page 177.
3.2.1 SMP Expansion connectivity
This section describes multi-node cabling. A node is one of multiple servers in a configuration interconnected through the SMP Expansion Ports to share system resources. The multi-node configuration is described in “Configuring scalable partitions” on page 131.
There are two optional SMP Expansion cable kits available to connect the SMP Expansion Port connectors:
- Four-way to 8-way Scalability Kit, which comes with two 2.5 m SMP Expansion cables (part number 73P9911)
- Eight-way to 16-way Supplemental Kit, which comes with three 2.5 m SMP Expansion cables and one 3.5 m SMP Expansion cable (part number 73P9715)
Follow these recommendations in each server before cabling:
- Update the system abstraction layer/extensible firmware interface (SAL/EFI) firmware.
- Update the service-processor firmware.
These are available from the x455 driver matrix:
http://www.ibm.com/pc/support/site.wss/MIGR-53575.html
Multinode event timing
Multinode configurations have different clock sources driving processors in each node. Therefore, the processor time stamp counter (TSC) should not be used by operating systems or applications for global system event timing. Applications should use an OS API to perform event timing.
Operating systems should use the Cyclone performance counter in node 0 to implement system event timing functions.
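As a hedged illustration of that guidance in user space, the sketch below times an interval with the portable OS clock rather than a raw cycle counter; Python's time.monotonic() stands in for whatever OS API your application actually uses.

import time

# In a multinode x455, each node's processors run from a different clock
# source, so per-processor time stamp counters are not comparable across
# nodes. Time events with an OS-provided clock instead.
start = time.monotonic()
time.sleep(0.1)                 # stand-in for the work being timed
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.6f} s")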
Note: Only two-node and four-node configurations are supported.
Two-node configuration
A two-node configuration requires the two cables from the 4-way to 8-way Scalability Kit option.
1. Connect the cables according to Table 3-6.

Table 3-6 Connecting two nodes

From             To
Node A, Port 1   Node B, Port 1
Node A, Port 3   Node B, Port 3
The cabling will look like Figure 3-5.
Figure 3-5 Connecting two-node configuration
2. Connect the Remote Supervisor Adapter Ethernet connector on each server to a network or to each other with an Ethernet crossover cable. This connection is needed so that the Remote Supervisor Adapters can communicate and manage scalable partitions.
Four-node configuration
A four-node configuration requires one 4-way to 8-way Scalability Kit option and one 8-way to 16-way Supplemental Kit option.
Complete the following steps to cable a four-node configuration:
1. Name the servers, as shown in Figure 3-6 on page 69. We used Nodes A–D, from top to bottom. We also named the SMP Expansion ports 1–3, from left to right.
Figure 3-6 Connecting four-node configuration
2. Label each end of the SMP Expansion cables according to where they will be connected to each server.
3. Connect the cables according to Table 3-7 (a consistency check of this cable map appears after these steps).

Table 3-7 Connecting four nodes

From             To
Node A, Port 1   Node C, Port 1
Node A, Port 2   Node D, Port 2
Node A, Port 3   Node B, Port 3
Node B, Port 1   Node D, Port 1
Node B, Port 2   Node C, Port 2
Node C, Port 3   Node D, Port 3
4. Verify that the final cabling matches Figure 3-6 on page 69.
5. Connect the Remote Supervisor Adapter Ethernet connector on each server to your out-of-band management network (so the Remote Supervisor Adapters can also be accessed for remote management). This connection is needed so that the Remote Supervisor Adapters can communicate and manage scalable partitions.
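As mentioned in step 3, here is a small consistency check of the Table 3-7 cable map: in a valid four-node mesh, every SMP Expansion port is used exactly once.

# Table 3-7 as (node, port) -> (node, port) cable ends.
CABLES = [
    (("A", 1), ("C", 1)), (("A", 2), ("D", 2)), (("A", 3), ("B", 3)),
    (("B", 1), ("D", 1)), (("B", 2), ("C", 2)), (("C", 3), ("D", 3)),
]

ends = [end for cable in CABLES for end in cable]
assert len(set(ends)) == len(ends) == 12, "every port must be used exactly once"
assert set(ends) == {(n, p) for n in "ABCD" for p in (1, 2, 3)}
print("cable map is consistent: 6 cables, 12 ports, each used once")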
3.2.2 Remote Expansion Enclosure connectivity
The RXE-100 can be connected to the x455 to provide an additional six or 12 PCI-X slots to the server. Only one RXE-100 can be connected to the x455, although this can be with either one or two data cables.
The RXE-100 has six 133 MHz 64-bit PCI-X slots as standard and can accept adapters with speeds ranging from 33 MHz to 133 MHz. With the optional six-slot expansion kit (part number 31P5998) installed, the RXE-100 has 12 slots. Each set of six adapter slots is divided into three buses of two slots each, as shown in Figure 3-7.
Figure 3-7 RXE-100 PCI-X expansion board (six slots)
Note: As described in “Connecting the RXE-100” on page 71, when connecting the RXE-100 to an x455 configuration using only one cable, the RXE-100 can have six or 12 PCI-X slots.
For each of the three buses (A, B, C), one of the following can be installed:
- One 64-bit 3.3 V PCI-X 133 MHz adapter (in the odd-numbered slot), running at up to 133 MHz
- Two 64-bit 3.3 V PCI-X 133 MHz adapters, running at up to 100 MHz
- Two 64-bit 3.3 V PCI or PCI-X 33 or 66 MHz adapters
These slots can accept adapters rated at speeds ranging from 33 MHz to 133 MHz. When deciding which adapters to put in which slots, consider the following:
- Each adapter has a maximum rated speed. Each bus also has a maximum rated speed.
- Installed adapters will operate at the slowest of three speeds:
  – The rated speed of adapter 1 in the bus
  – The rated speed of adapter 2 in the bus
  – The rated speed of the bus
- 32-bit adapters can be installed in any of the slots and will run in 32-bit mode.
- 32-bit and 64-bit adapters can coexist in 64-bit slots in the same bus. The 32-bit adapters will run in 32-bit mode, and the 64-bit adapters will run in 64-bit mode.
- When installing a 133 MHz PCI-X adapter, it must be installed in the odd-numbered slot in the bus (that is, in slot 1, 3, or 5).
- A PCI-X and a PCI adapter can be installed in slots on the same bus in the RXE-100. However, these two adapters will both operate in PCI mode. In addition, if you have a PCI-X adapter installed, you cannot hot-add a PCI adapter to the same bus. This is because with just the PCI-X adapter installed, the bus is running in PCI-X mode, and you cannot hot-add a PCI adapter into a bus that is in PCI-X mode.
Connecting the RXE-100
There are two types of cables used to connect the RXE-100 to the x455:
- Remote I/O cable, for data. Two lengths are available:
  – 3.5 m Remote I/O cable kit (part number 31P6102), like the one shipped with the RXE-100
  – 8 m Remote I/O cable kit (part number 31P6103)
- Interconnect management cable, for remote I/O management. The RXE-100 ships with a 3.5 m cable. Two lengths are available:
  – 3.5 m interconnect management cable kit (part number 31P6087)
  – 8 m interconnect management cable kit (part number 31P6088)

Note: The PCI slots support 3.3 V adapters only. 5.0 V adapters are not supported.
Use the 8 m versions of each of these cables if the distances between the two devices warrant the extra length (for example, in separate racks).
Power to the RXE-100 enclosures is controlled by the servers via the interconnect management cable and under the control of the Remote Supervisor Adapter.
The following connections to the RXE-100 are supported:
- One server connecting to a six-slot RXE-100
- One server connecting to a 12-slot RXE-100
- A two-node server connecting to one shared RXE-100
- Two standalone servers connecting to one shared RXE-100
- Four nodes connecting to one or two RXE-100s
One server connecting to a six-slot RXE-100
When the RXE-100 has only six PCI-X slots installed, connect a single RXE management cable and a single RXE data cable, as shown in Figure 3-8 on page 73.
Important: This interconnect management cable has standard RJ-45 connectors, but it is not wired the same as an Ethernet or crossover cable. Ensure that the proper cable is used to connect the server to the RXE-100.

The management cable has two twisted wire pairs: pins 2 and 3 connect to one pair, and pins 7 and 8 connect to the other. With an Ethernet cable, the wire pairs are connected to the following pins: 1 and 2, 3 and 6, 4 and 5, and 7 and 8.
Figure 3-8 Connecting the RXE-100 to the x455 (six slots in the RXE-100)
One server connecting to a 12-slot RXE-100
If the RXE-100 has 12 PCI-X slots installed, we recommend that you use two data cables to connect the RXE-100 to the server, as shown in Figure 3-9 on page 74.
The short interconnect management cable to connect Management A (out) Port to Management B (in) Port is supplied with the second set of PCI slots.
Tip: The second Remote I/O data cable is optional in this configuration. However, it is recommended because it provides redundancy if the other Remote I/O cable fails. The second cable also improves performance.
Tip: The short interconnect management cable should be removed if you will be managing the configuration using Scalable Systems Manager (once it becomes available; see “Scalable Systems Manager” on page 177).
If you are not using Scalable Systems Manager, leave the cable connected as shown.
Figure 3-9 Connecting the RXE-100 to the x455 (12 slots in the RXE-100)
A two-node server connecting to one shared RXE-100
In a two-node (that is, eight-way) x455 configuration, a single RXE-100 is supported in a shared configuration, where six RXE slots are connected to each node. The cabling is shown in Figure 3-10.
Figure 3-10 8-way configuration with a shared RXE-100 (6 PCI-X slots to each node)
In addition, an 8-way configuration can also be connected to a single RXE-100 that only has six PCI-X slots installed, not 12. In this configuration, the data and management cable from the secondary node are not connected.
Two standalone servers connecting to one shared RXE-100
This configuration is very similar to the two-node configuration shown in Figure 3-10 on page 74, except that the cable between the x455s is not present. Each independent server “sees” six slots in the RXE-100.
Figure 3-11 Single RXE-100 shared between two x455s
Four nodes connecting to one or two RXE-100s

In a four-node configuration, one or two RXE-100s can be connected. If you have one RXE-100, you can connect it to either nodes 1 and 2, or nodes 3 and 4.
Connectivity of two RXE-100 enclosures is shown in Figure 3-12 on page 76. One RXE-100 is shared by nodes 1 and 2, and the second RXE-100 is shared by nodes 3 and 4.
Figure 3-12 16-way x455 with two shared RXE-100s (6 PCI-X slots to each node)
3.2.3 Remote Supervisor Adapter connectivity
The x455 features an integrated Remote Supervisor Adapter, one of the products in the Advanced System Management (ASM) family. It provides around-the-clock remote access and system management of your server and supports the following features:
- Remote management regardless of the status of the server
- Remote control of hardware and operating systems
- Web-based management with standard Web browsers (no other software is required)
- Text-based user interface terminal access
The configuration and use of the Remote Supervisor Adapter is discussed in Chapter 5, “Management” on page 175.
Figure 3-13 Remote Supervisor Adapter connectors (callouts: external power supply connector, error LED (amber), power LED (green), ASM interconnect (RS-485) port, 10/100 Ethernet port, and management COM port)
The following RSA connections need to be considered when cabling the x455 (see Figure 3-13):
- External power supply connector. This connector allows the RSA to be connected to its own independent power source. This external power supply is not included with the x455 and will need to be ordered as an option (order a ThinkPad® 56W or 72W AC Adapter with a suitable power cord for your country/region). If this power supply is not used, the RSA will draw power from the server as long as the server is connected to a functioning power source.
- 9-pin serial port, which supports systems management functions through null modem or modem connections. This port is dedicated and can only be used for RSA purposes.
- Ethernet port, which provides system management functions over the LAN.
- Advanced Systems Management (ASM) RS-485 interconnect port, to facilitate advanced systems management connections to other servers. For detailed instructions on cabling ASM interconnect networks, refer to the IBM Redbook Implementing Systems Management Solutions using IBM Director, SG24-6188.
Note: The x455 does not include the necessary dongle to connect the Remote Supervisor Adapter to an ASM interconnect bus using the RS-485 port on the adapter. Consequently, you will need the Advanced System Management Interconnect Cable Kit (part number 03K9309) for connection to an ASM interconnect network.
3.2.4 Serial connectivity
The x455 has an integrated serial port, as shown in Figure 3-4 on page 66. This port has two purposes:
- During the boot process (before the OS loader starts), the Extensible Firmware Interface (EFI) has control of the port and uses it as an auxiliary console where POST messages are transmitted, many even before the server’s video port is enabled. This is especially useful in performing problem determination on the system.
- Once the operating system loads, the port is made available to the operating system as a standard (and dedicated) COM port.
To use the serial port as an auxiliary console, you will need the following:
- A null modem cable
- A system running a terminal emulation program, such as HyperTerminal in Windows or minicom in Linux
First, connect the RS-232 cable between the two systems. The cable should be connected to the serial port of the server, not the serial port of the Remote Supervisor Adapter.
Once the cable is connected, start HyperTerminal or your emulation program. HyperTerminal should be set to the following settings:
- Speed: 115200 bps
- Data bits: 8
- Parity: None
- Stop bits: 1
- Flow control: None
For HyperTerminal, you can leave emulation set to auto detect. If you are using another program that does not have auto detect, you may need to set emulation to ANSIW.
Once your session is configured, click Connect and, assuming there is no operating system currently running, you should see POST messages or the EFI Boot Manager menu.
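If you prefer a scripted console over HyperTerminal or minicom, the sketch below uses the third-party pyserial package with the same settings. The device name /dev/ttyS0 is an assumption for a Linux workstation at the other end of the null modem cable.

import serial  # third-party pyserial package

console = serial.Serial(
    port="/dev/ttyS0",             # assumed device on the attached system
    baudrate=115200,               # 115200 bps, 8-N-1, no flow control,
    bytesize=serial.EIGHTBITS,     # matching the settings listed above
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False,
    rtscts=False,
    timeout=1,
)

# Print POST and EFI Boot Manager output as it arrives.
while True:
    data = console.read(256)
    if data:
        print(data.decode("ascii", errors="replace"), end="")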
Tip: If you are running Linux, you can redirect the console messages to the serial port. For more information about this feature refer to “Using the serial port for the Linux console” on page 171.
3.3 Storage considerations
When you are planning the storage configuration to accompany the x455, there are important performance and sizing issues that need to be considered.
The two internal hot-swap 1” drive bays will typically be used for operating system installation. We recommend that these drives be configured as a two-drive RAID-1 array to provide a higher degree of system availability. Drives up to 15,000 RPM and the converged tray design are supported. To configure RAID-1, you can use the internal LSI controller that is provided onboard or buy an optional ServeRAID adapter.
Typically the x455 will be attached to an external disk enclosure for data storage requirements. Some of the supported IBM storage options are:
- SCSI RAID adapters and storage enclosures
- Fibre Channel adapters and Storage Area Networks (SANs)
- Network Attached Storage (NAS)
- SCSI over IP (iSCSI)
- IBM Enterprise Storage Server® (ESS)
- ESCON® connectivity to a zSeries server
3.3.1 xSeries storage solutions
This section discusses some of the available xSeries storage solutions and related technologies, as well as tape backup and performance considerations.
All the IBM Storage Solutions are supported under Linux, as well as under Windows. For more information see:
http://www.storage.ibm.com/linux
ServeRAID with external storage enclosures
The x455 currently supports only the ServeRAID-4Mx and ServeRAID-6M:
- The ServeRAID-4Mx is a 66 MHz PCI adapter and features two Ultra160 SCSI channels, 64 MB of battery-backed ECC SDRAM cache memory, and an Intel 100 MHz i80303 processor. Up to 28 Ultra160 and Ultra2 SCSI devices are supported.
Note: You can also add as an option an LS-120 diskette drive that can be used to upgrade the firmware or the EFI version. Please note that no diskette drive is provided with the default configuration, and that this solution is not supported.
- The ServeRAID-6M is a 133 MHz PCI-X adapter and features two Ultra320 SCSI channels, 128 MB or 256 MB of battery-backed ECC SDRAM cache memory depending on the model, and an Intel 600 MHz xScale processor. Up to 28 Ultra320, Ultra160, and Ultra2 SCSI devices are supported.
Each ServeRAID adapter supports up to 14 drives per channel (the 4Mx has 160 MBps throughput per channel and the 6M has 320 MBps). Multiple adapters can be installed as needs and available slots dictate.
- The EXP300 and the new EXP400 storage expansion units each hold a maximum of 2 TB of disk storage (14 × 146.8 GB drives) in a 3U package, allowing up to 14 expansion units to be used in a standard 42U rack (meaning that a full rack of EXP300 or EXP400 units can hold an amazing 28 TB). EXP300 and EXP400 provide Predictive Failure Analysis (PFA) on key components, including hot-swap fans, hard drives, and redundant power supplies. The EXP300 is optimized for Ultra160 SCSI, with a sustained data transfer rate of 160 MBps, and the new EXP400 is optimized for Ultra320 SCSI, with a sustained data transfer rate of 320 MBps.
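The capacity figures above can be sanity checked with a little arithmetic (decimal units, as used in drive marketing):

drives_per_unit = 14
drive_gb = 146.8
units_per_rack = 42 // 3            # 14 x 3U enclosures in a 42U rack

unit_tb = drives_per_unit * drive_gb / 1000
print(f"per enclosure: {unit_tb:.2f} TB")                   # ~2.06 TB ("2 TB")
print(f"per full rack: {unit_tb * units_per_rack:.1f} TB")  # ~28.8 TB ("28 TB")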
For more information on IBM SCSI RAID storage solutions, go to:
http://ibm.com/pc/ww/eserver/xseries/scsi_raid.html
For the latest list of SCSI hardware supported by the x455, see ServerProven:
http://www.pc.ibm.com/us/compat/machines/x455.html
IBM TotalStorage® FAStT
The IBM TotalStorage FAStT (Fibre Array Storage Technology) family of Fibre Channel storage solutions is designed for high-availability, high-capacity requirements. FAStT solutions can support transfers over distances up to 10 km (6.2 miles) at rates of up to 200 MBps.
The FAStT Storage Server is a RAID controller device that contains Fibre Channel (FC) interfaces to connect the host systems and the disk drive enclosures. The Storage Server provides high system availability through the use of hot-swappable and redundant components. We briefly discuss the following products:
- The IBM TotalStorage FAStT200 Storage Server
- The IBM TotalStorage FAStT500 Storage Server
- The IBM TotalStorage FAStT600 Storage Server
- The IBM TotalStorage FAStT700 Storage Server
- The IBM TotalStorage FAStT900 Storage Server
Important: Check the tips in “Notes on the use of RAID” on page 108 before doing an installation.
For a complete list of supported Fibre Channel hardware, see:
http://www.pc.ibm.com/us/compat/machines/x455.html#Storage
The IBM TotalStorage FAStT200 Storage Server
The FAStT200 Storage Server is a 3U rack-mountable Fibre Channel RAID controller and disk drive enclosure. It targets the entry and midrange segment of the FC storage market. A typical use of the FAStT200 would be in a two-node cluster environment with up to 60 Fibre Channel disk drives attached to the Storage Server.
Two models are available:
- The FAStT200 Storage Server, with a single RAID controller
- The FAStT200 High Availability (HA) Storage Server, which contains two RAID controllers and can therefore provide higher availability
Both models feature hot-swap and redundant power supplies and fans, and you can install up to 10 slim-line or half-high FC disk drives. If you need to connect more than 10 disks, you can use the EXP500 FC storage expansion enclosures.
Each EXP500 can accommodate 10 additional disk drives, and up to five EXP500s are supported on the FAStT200. This means that the maximum supported number of disk drives is 60.
Hot-swappable and redundant components provide high availability for the FAStT200 Storage Server. A fan or a power supply failure will not cause downtime, and such faults can be fixed while the system remains operational. The same is true for a disk drive failure if fault-tolerant RAID levels are used. With two RAID controller units and proper cabling, a RAID controller or path failure will not cause loss of access to data.
Each RAID controller has one host and one drive FC connection. The FAStT200 HA model can use the two host and drive connections to provide redundant connection to the host adapters and to EXP500 enclosures. Each RAID controller unit also contains 128 MB of battery-backed cache.
The IBM TotalStorage FAStT600 Storage Server
The FAStT600 Storage Server is a 3U rack-mountable Fibre Channel RAID controller and disk drive enclosure. It targets the entry and midrange segment of the Fibre Channel storage market. A typical usage of the FAStT600 is a two-node cluster directly attached to it. It can support up to 14 internal Fibre Channel disk drives.

Note: Check the Microsoft Hardware Compatibility List (HCL) for the currently certified storage solution with FAStT for Microsoft clustered configurations and for updates to certified solutions:

http://www.microsoft.com/whdc/hcl/search.mspx
The FAStT600 features dual 2 Gbps hot-swappable RAID controllers, supports RAID levels such as 1, 3, 5, and 10, and supports global hot spares. The FAStT600 has four host ports, two for each controller; this enables a cluster solution without the use of a switch.
There are two models, FAStT600 Base and FAStT600 Turbo. Table 3-8 shows the differences.
Table 3-8 Comparison between FAStT600 Base and FAStT600 Turbo

Feature                    FAStT600 Base  FAStT600 Turbo
Cache size per controller  256 MB         1 GB
Host interface             2 Gb           Auto senses to connect to 1 Gb or 2 Gb
Maximum EXP700 supported   3              7
Maximum drives supported   56             112
Storage capacity           Up to 8.2 TB   Up to 16.4 TB
Storage partitions         Up to 16       Up to 64
FAStT Storage Manager      FSM v8.3       FSM v8.4
The IBM TotalStorage FAStT700 Storage Server
The FAStT700 Storage Server would typically be implemented in high-end cluster and server consolidation environments, such as where multiple servers are being consolidated onto a multi-node configuration of x455 systems.
The FAStT700 is a 4U rack-mountable Fibre Channel RAID controller device with higher performance controllers. These controllers are 2 Gbps and connect via mini-hubs to the new FAStT FC-2 Host Bus Adapter (HBA) and the 2109 F16 Fibre Channel switch to give full 2 Gbps fabric.
The FAStT700 attaches to up to 220 FC disks via 22 EXP500 expansion units or up to 224 FC disks via 16 EXP700 expansion units to provide scalability for easy growth (18 GB up to 32.6 TB using 146.8 GB drives). To avoid single points of failure, it also features dual hot-swappable RAID controllers, dual redundant FC disk loops, write cache mirroring, redundant hot-swappable power supplies, fans, and dual AC line cords.
FAStT Storage Manager Version 8.21 supports FlashCopy®, Dynamic Volume Expansion, and Remote Mirroring with controller-based support for up to 64 storage partitions. RAID levels 0, 1, 3, 5, and 10 are supported, and for performance it includes a total of 2 GB of battery-backed cache (1 GB per controller).
The IBM TotalStorage FAStT900 Storage Server
The IBM TotalStorage FAStT900 Storage Server expands the FAStT family’s highly scalable offerings with improved performance capabilities. Built upon a new fourth-generation Fibre Channel RAID controller, the FAStT900 brings high availability, advanced functionality, scalable capacity, and connectivity to a wide range of storage area network (SAN) applications in mission-critical enterprise networks.
The FAStT900 is a 4U rack-mountable, high-performance, highly available Fibre Channel-based storage product for demanding applications. The FAStT900 features a new shared memory bus controller/PCI bridge/RAID assist engine (1.6 GBps bandwidth), dual 2 Gbps hot-swappable RAID controllers with 2 GB of cache memory (1 GB per RAID controller), redundant hot-swap power supplies and fans, and support for RAID levels such as 0 (implemented as RAID level 10), 1, 3, and 5, as well as global hot spares. It also supports four host-side mini-hubs for the two controllers (each mini-hub has two ports); the four host ports (two on each controller) provide a cluster solution without using a switch.
The FAStT900 attaches up to 224 FC disks via 16 EXP700 expansion units to provide scalability for easy growth (32.6 TB using 146.8 GB drives).
The IBM TotalStorage FAStT EXP700 Expansion Unit
The IBM TotalStorage FAStT EXP700 storage expansion unit expands the highly scalable, high-performance FAStT family with a 14-bay, 2 Gbps, rack-mountable Fibre Channel hard disk drive enclosure. Used with the FAStT900, FAStT700, FAStT600, FAStT500, and FAStT200 storage servers, the EXP700 is designed to provide incremental support for over one terabyte of disk storage per unit. When used with the FAStT900, FAStT700, and FAStT600 Turbo storage servers, it provides a full end-to-end 2 Gbps Fibre Channel storage solution, extending the FAStT family with the latest Fibre Channel technology.
Note: The new version of FAStT Storage Manager 8.4 supports Windows 2003 Server 64-bit Edition and is only supported on the FAStT600, FAStT700, and FAStT900.
Additional information on the entire range of FAStT storage solutions can be found at:
http://www.storage.ibm.com/hardsoft/disk/fastt/index.html
Enterprise Storage Server (ESS)
ESS provides integrated caching and RAID support for the attached disk devices. ESS can be configured in a variety of ways to provide scalability in capacity and performance. One ESS can support in excess of 28 TB and can utilize 2 Gbps Fibre Channel connectivity.
Redundancy within ESS provides continuous availability. It is packaged in one or more enclosures, each with dual line cords and redundant power. The redundant power system allows ESS to continue normal operation when one of the line cords is deactivated.
ESS provides an image of a set of logical disk devices to attached servers. The logical devices are configured to emulate disk device types that are compatible with the attached servers. The logical devices access a logical volume that is implemented using multiple disk drives. This allows ESS to connect to all IBM servers, from zSeries to iSeries, pSeries and xSeries, directly or through a SAN, thus helping the x455 fit into a heterogeneous environment containing a variety of server architectures. ESS offers several choices of host I/O interface attachment methods, including SCSI and Fibre Channel for xSeries.
For more information on the ESS, go to:
http://www.storage.ibm.com/hardsoft/products/ess/index.html
3.3.2 Tape backup
As with your disk subsystem, you need to carefully analyze backup requirements before a tape solution is selected. Considerations when selecting a backup solution should include:
Currently implemented backup solutions
If you are consolidating a number of servers onto a single x455 solution, for example, you may want to take the opportunity to move away from differing and distributed tape technologies (such as DDS and DLT) and consolidate those into a single, high-performance, automated solution. An example is the IBM Ultrium Autoloader.
Current and projected capacity requirements
Select a solution that has the ability to scale as capacity requirements increase.
Performance requirements
You need to consider the backup window available, as well as the amount of data being backed up when determining what your backup performance requirements will be. It is also important to consider the need for quick access to data committed to tape when selecting a solution.
Connection requirements
Will the tape solution be connected to an existing SAN fabric and, if so, will this require additional fabric hardware?
Hardware and software compatibility
If you implement a new tape solution, you need to ensure that current backup and management software is still suitable. IBM Tivoli® Storage Manager has 64-bit Windows and Linux versions of its client, and other software vendors have similar products or plans for 64-bit versions.
Disaster recovery procedures may also need to be revised.
IBM offers a full range of high-performance, high-capacity and automated tape solutions for xSeries servers. For detailed information on these products, go to:
http://ibm.com/pc/ww/eserver/xseries/tape.html
The following IBM Redbooks discuss IBM tape solutions in greater detail:
- IBM Tape Solutions for Storage Area Networks and FICON, SG24-5474
- Netfinity Tape Solutions, SG24-5218
- The IBM LTO Ultrium Tape Libraries Guide, SG24-5946

Note: The x455 and RXE-100 support 3.3 Volt PCI adapters only. Make sure that any SCSI adapters you use to connect your tape subsystem are 3.3 V or dual-voltage adapters.
3.4 Rack installation
The x455 is 4U high and is intended for use as a rack-drawer server. Due to power distribution considerations, it is recommended that no more than eight 4U x455 chassis be installed in a single 42U rack, leaving 10U available for RXE-100 Remote I/O enclosures, disk or tape storage, or other devices.
The x455 is 27.5 inches deep and is designed to be installed in a 19-inch rack cabinet designed for 28-inch-deep devices, such as the NetBAY42 ER, NetBAY42 SR, or NetBAY25 SR. Although the x455 system is rack optimized, it may be converted into a tower by installing it in a NetBAY11 SR Standard Rack Cabinet. The NetBAY11 rack supports shipment of fully configured xSeries 455 and other rack-optimized xSeries servers.
Installation considerations include the following:
- The system is not designed to run vertically, and therefore must always be run in a horizontal position.
- For thermal considerations, the x455 must be installed with perforated doors on both front and back. Do not install the x455 in a rack with a glass front door.
- Although installation is supported in non-enterprise racks, it is not recommended, since cable management then becomes an issue.
- The maximum weight of the system, depending on your configuration, is 50 kg (110 lb). Therefore, this system requires two people to install it in a rack.
If you use a non-IBM rack, the cabinet must meet the EIA-310-D standards with a depth of at least 28 inches. Also, adequate space (approximately two inches for the front bezel and one inch for air flow) must be maintained from the slide assembly to the front door of the rack cabinet to allow sufficient space for the door to close and provide adequate air flow.
Make sure all the cables attached to the x455 are long enough to permit the server to be slid out of the rack. This would include the normal cables such as power, network, and fiber cables, but also includes the Remote I/O cable for connecting to the RXE-100. See “Remote Expansion Enclosure connectivity” on page 70 for RXE-100 cabling information.
Since the x455 is rack optimized, the IBM xSeries rack configurator should be used to ensure correct placement. The configurator can be downloaded from:
For EMEA:
http://www.pc.ibm.com/europe/configurators/
For USA:
http://www.pc.ibm.com/us/eserver/xseries/library/configtools.html
For other countries or regions:
a. Go to:
http://www.ibm.com
b. Click Select a country.
c. Select your country.
d. Click Products and Services.
e. Click Intel-based servers.
f. Click Tools.
g. Scroll down to find the Rack Configurator section.