This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except
as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform,
publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation,
delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the
hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous
applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all
appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of
SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are
not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement
between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Access to Oracle Support
Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Contents
Using This Documentation ................................................................................ 13
Understanding the System ................................................................................ 15
Index ................................................................................................................ 353
Using This Documentation
This document provides an overview of Oracle SuperCluster T5-8, and describes configuration
options, site preparation specifications, installation information, and administration tools.
■ Overview – Describes how to configure, install, tune, and monitor the system.
■ Audience – Technicians, system administrators, and authorized service providers.
■ Required knowledge – Advanced experience in system installation and administration.
Product Documentation Library
Documentation and resources for this product and related products are available on the system.
Access the documentation by using a browser to view this directory on the first compute server
installed in SuperCluster T5-8:
/opt/oracle/node/doc/E40166_01/index.html
Feedback
Provide feedback about this documentation at:
http://www.oracle.com/goto/docfeedback
Understanding the System
These topics describe the features and hardware components of Oracle SuperCluster T5-8.
These topics also describe the different software configurations that are available.
■ “Understanding Oracle SuperCluster T5-8” on page 15
■ “Identifying Hardware Components” on page 19
■ “Understanding the Hardware Components and Connections” on page 23
■ “Understanding the Software Configurations” on page 46
■ “Understanding Clustering Software” on page 84
■ “Understanding the Network Requirements” on page 86
Understanding Oracle SuperCluster T5-8
Oracle SuperCluster T5-8 is an integrated hardware and software system designed to provide
a complete platform for a wide range of application types and widely varied workloads.
Oracle SuperCluster T5-8 is intended for large-scale, performance-sensitive, mission-critical
application deployments. Oracle SuperCluster T5-8 combines industry-standard hardware and
clustering software, such as optional Oracle Database 11g Real Application Clusters (Oracle
RAC) and optional Oracle Solaris Cluster software. This combination enables a high degree of
isolation between concurrently deployed applications, which have varied security, reliability,
and performance requirements. Oracle SuperCluster T5-8 enables customers to develop a single
environment that can support end-to-end consolidation of their entire applications portfolio.
Oracle SuperCluster T5-8 provides an optimal solution for all database workloads, ranging from
scan-intensive data warehouse applications to highly concurrent online transaction processing
(OLTP) applications. With its combination of smart Oracle Exadata Storage Server Software,
complete and intelligent Oracle Database software, and the latest industry-standard hardware
components, Oracle SuperCluster T5-8 delivers extreme performance in a highly available, highly secure environment. Oracle provides unique clustering and workload management capabilities, so Oracle SuperCluster T5-8 is well suited for consolidating multiple databases into a single grid. Delivered as a complete, pre-optimized, and pre-configured package of software, servers, and storage, Oracle SuperCluster T5-8 is fast to implement, and it is ready to tackle your large-scale business applications.
Oracle SuperCluster T5-8 does not include any Oracle software licenses. Appropriate licensing
of the following software is required when Oracle SuperCluster T5-8 is used as a database
server:
■ Oracle Database Software
■ Oracle Exadata Storage Server Software
In addition, Oracle recommends that the following software be licensed:
■ Oracle Real Application Clusters
■ Oracle Partitioning
Oracle SuperCluster T5-8 is designed to fully leverage an internal InfiniBand fabric that
connects all of the processing, storage, memory, and external network interfaces within Oracle
SuperCluster T5-8 to form a single, large computing device. Each Oracle SuperCluster T5-8
is connected to data center networks through 10-GbE (traffic) and 1-GbE (management)
interfaces.
You can integrate Oracle SuperCluster T5-8 systems with Exadata or Exalogic machines
by using the available InfiniBand expansion ports and optional data center switches. The
InfiniBand technology used by Oracle SuperCluster T5-8 offers high bandwidth,
low latency, hardware-level reliability, and security. If you are using applications that follow
Oracle's best practices for highly scalable, fault-tolerant systems, you do not need to make any
application architecture or design changes to benefit from Oracle SuperCluster T5-8. You can
connect many Oracle SuperCluster T5-8 systems, or a combination of Oracle SuperCluster T5-8
systems and Oracle Exadata Database Machines, to develop a single, large-scale environment.
You can integrate Oracle SuperCluster T5-8 systems with your current data center infrastructure
using the available 10-GbE ports in each SPARC T5-8 server.
Spares Kit Components
Oracle SuperCluster T5-8 includes a spares kit that contains the following components:
■ One of the following disks as a spare for the Exadata Storage Servers, depending on the type of Exadata Storage Server:
  ■ X3-2 Exadata Storage Server:
    - 600 GB 10K RPM High Performance SAS disk
    - 3 TB 7.2K RPM High Capacity SAS disk
  ■ X4-2 Exadata Storage Server:
    - 1.2 TB 10K RPM High Performance SAS disk
    - 4 TB 7.2K RPM High Capacity SAS disk
  ■ X5-2L Exadata Storage Server:
    - 1.6 TB Extreme Flash drive
    - 8 TB High Capacity SAS disk
  ■ X6-2L Exadata Storage Server:
    - 3.2 TB Extreme Flash drive
    - 8 TB High Capacity SAS disk
■ One High Capacity SAS disk as a spare for the ZFS storage appliance:
  ■ One 3 TB High Capacity SAS disk as a spare for the Sun ZFS Storage 7320 storage appliance, or
  ■ One 4 TB High Capacity SAS disk as a spare for the Oracle ZFS Storage ZS3-ES storage appliance
■ Exadata Smart Flash Cache card
■ InfiniBand cables, used to connect multiple racks together
Oracle SuperCluster T5-8 Restrictions
The following restrictions apply to hardware and software modifications to Oracle SuperCluster
T5-8. Violating these restrictions can result in loss of warranty and support.
■ Oracle SuperCluster T5-8 hardware cannot be modified or customized. There is one exception to this. The only allowed hardware modification to Oracle SuperCluster T5-8 is to the administrative 48-port Cisco 4948 Gigabit Ethernet switch included with Oracle SuperCluster T5-8. Customers may choose to do the following:
  ■ Replace the Gigabit Ethernet switch, at customer expense, with an equivalent 1U 48-port Gigabit Ethernet switch that conforms to their internal data center network standards. This replacement must be performed by the customer, at their expense and labor, after delivery of Oracle SuperCluster T5-8. If the customer chooses to make this change, then Oracle cannot make or assist with this change given the numerous possible scenarios involved, and it is not included as part of the standard installation. The customer must supply the replacement hardware, and make or arrange for this change through other means.
  ■ Remove the CAT5 cables connected to the Cisco 4948 Ethernet switch, and connect them to the customer's network through an external switch or patch panel. The customer must perform these changes at their expense and labor. In this case, the Cisco 4948 Ethernet switch in the rack can be turned off and disconnected from the data center network.
■ The Oracle Exadata Storage Expansion Rack can only be connected to Oracle SuperCluster T5-8 or an Oracle Exadata Database Machine, and only supports databases running on the
Oracle Database (DB) Domains in Oracle SuperCluster T5-8 or on the database servers in
the Oracle Exadata Database Machine.
■ Standalone Exadata Storage Servers can only be connected to Oracle SuperCluster T5-8 or
an Oracle Exadata Database Machine, and only support databases running on the Database
Domains in Oracle SuperCluster T5-8 or on the database servers in the Oracle Exadata
Database Machine. The standalone Exadata Storage Servers must be installed in a separate
rack.
■ Earlier Oracle Database releases can be run in Application Domains running Oracle Solaris
10. Non-Oracle databases can be run in either Application Domains running Oracle Solaris
10 or Oracle Solaris 11, depending on the Oracle Solaris version they support.
■ Oracle Exadata Storage Server Software and the operating systems cannot be modified, and
customers cannot install any additional software or agents on the Exadata Storage Servers.
■ Customers cannot update the firmware directly on the Exadata Storage Servers. The
firmware is updated as part of an Exadata Storage Server patch.
■ Customers may load additional software on the Database Domains on the SPARC T5-8
servers. However, to ensure best performance, Oracle discourages adding software except
for agents, such as backup agents and security monitoring agents, on the Database Domains.
Loading non-standard kernel modules to the operating system of the Database Domains is
allowed but discouraged. Oracle will not support questions or issues with the non-standard
modules. If a server crashes, and Oracle suspects the crash may have been caused by a non-standard module, then Oracle support may refer the customer to the vendor of the non-standard module or ask that the issue be reproduced without the non-standard module.
Modifying the Database Domain operating system other than by applying official patches
and upgrades is not supported. InfiniBand-related packages should always be maintained at
the officially supported release.
■ Oracle SuperCluster T5-8 supports separate domains dedicated to applications, with high throughput/low latency access to the database domains through InfiniBand. Since Oracle Database is by nature client-server, applications running in the Application Domains can
connect to database instances running in the Database Domain. Applications can be run in
the Database Domain, although it is discouraged.
■ Customers cannot connect USB devices to the Exadata Storage Servers except as
documented in Oracle Exadata Storage Server Software User's Guide and this guide. In
those documented situations, the USB device should not draw more than 100 mA of power.
■ The network ports on the SPARC T5-8 servers can be used to connect to external non-Exadata Storage Servers using iSCSI or NFS. However, the Fibre Channel Over Ethernet
(FCoE) protocol is not supported.
■ Only switches specified for use in Oracle SuperCluster T5-8, Oracle Exadata Rack
and Oracle Exalogic Elastic Cloud may be connected to the InfiniBand network. It is
not supported to connect third-party switches and other switches not used in Oracle
SuperCluster T5-8, Oracle Exadata Rack and Oracle Exalogic Elastic Cloud.
Identifying Hardware Components
Oracle SuperCluster T5-8 consists of SPARC T5-8 servers, Exadata Storage Servers, and the
ZFS storage appliance (Sun ZFS Storage 7320 appliance or Oracle ZFS Storage ZS3-ES storage
appliance), as well as required InfiniBand and Ethernet networking components.
This section contains the following topics:
■ “Full Rack Components” on page 20
■ “Half Rack Components” on page 22
Full Rack Components
FIGURE 1   Oracle SuperCluster T5-8 Full Rack Layout, Front View
Figure Legend
1  Exadata Storage Servers (8)
2  ZFS storage controllers (2)
3  Sun Datacenter InfiniBand Switch 36 leaf switches (2)
4  Sun Disk Shelf
5  Cisco Catalyst 4948 Ethernet management switch
6  SPARC T5-8 servers (2, with four processor modules apiece)
7  Sun Datacenter InfiniBand Switch 36 spine switch
You can expand the amount of disk storage for your system using the Oracle Exadata Storage
Expansion Rack. See “Oracle Exadata Storage Expansion Rack Components” on page 287
for more information.
You can connect up to eight Oracle SuperCluster T5-8 systems together, or a combination
of Oracle SuperCluster T5-8 systems and Oracle Exadata or Exalogic machines on the same
InfiniBand fabric, without the need for any external switches. See “Connecting Multiple Oracle
SuperCluster T5-8 Systems” on page 261 for more information.
Half Rack Components
FIGURE 2   Oracle SuperCluster T5-8 Half Rack Layout, Front View
Figure Legend
1  ZFS storage controllers (2)
2  Sun Datacenter InfiniBand Switch 36 leaf switches (2)
3  Sun Disk Shelf
4  Cisco Catalyst 4948 Ethernet management switch
5  SPARC T5-8 servers (2, with two processor modules apiece)
6  Exadata Storage Servers (4)
7  Sun Datacenter InfiniBand Switch 36 spine switch
You can expand the amount of disk storage for your system using the Oracle Exadata Storage
Expansion Rack. See “Oracle Exadata Storage Expansion Rack Components” on page 287
for more information.
You can connect up to eight Oracle SuperCluster T5-8 systems together, or a combination
of Oracle SuperCluster T5-8 systems and Oracle Exadata or Exalogic machines on the same
InfiniBand fabric, without the need for any external switches. See “Connecting Multiple Oracle
SuperCluster T5-8 Systems” on page 261 for more information.
Understanding the Hardware Components and Connections
These topics describe how the hardware components and connections are configured to provide
full redundancy for high performance or high availability in Oracle SuperCluster T5-8, as well
as connections to the various networks:
■ “Understanding the Hardware Components” on page 23
■ “Understanding the Physical Connections” on page 26
Understanding the Hardware Components
The following Oracle SuperCluster T5-8 hardware components provide full redundancy, either through physical connections between components within the system or through redundant components within each device:
■ “SPARC T5-8 Servers” on page 24
■ “Exadata Storage Servers” on page 24
■ “ZFS Storage Appliance” on page 25
■ “Sun Datacenter InfiniBand Switch 36 Switches” on page 25
■ “Cisco Catalyst 4948 Ethernet Management Switch” on page 26
■ “Power Distribution Units” on page 26
SPARC T5-8 Servers
The Full Rack version of Oracle SuperCluster T5-8 contains two SPARC T5-8 servers, each
with four processor modules. The Half Rack version of Oracle SuperCluster T5-8 also contains
two SPARC T5-8 servers, but each with two processor modules.
Redundancy in the SPARC T5-8 servers is achieved two ways:
■ Through connections between the servers (described in “Understanding the SPARC T5-8 Server Physical Connections” on page 26)
■ Through components within the SPARC T5-8 servers:
  ■ Fan modules – Each SPARC T5-8 server contains 10 fan modules. The SPARC T5-8 server will continue to operate at full capacity if one of the fan modules fails.
  ■ Disk drives – Each SPARC T5-8 server contains eight hard drives. Oracle SuperCluster T5-8 software provides redundancy between the eight disk drives.
For the Full Rack version of Oracle SuperCluster T5-8, each SPARC T5-8 server
contains four processor modules. There are two sockets or PCIe root complex pairs on
each processor module, where 16 cores are associated with each socket, for a total of
eight sockets or PCIe root complex pairs (128 cores) for each SPARC T5-8 server.
For the Half Rack version of Oracle SuperCluster T5-8, each SPARC T5-8 server
contains two processor modules. There are two sockets or PCIe root complex pairs on
each processor module, where 16 cores are associated with each socket, for a total of four
sockets or PCIe root complex pairs (64 cores) for each SPARC T5-8 server.
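The core counts above follow directly from the module and socket counts given in this section. The following Python sketch simply restates that arithmetic; the constant names and helper function are illustrative only, not part of any Oracle tool.

```python
# Core-count arithmetic for the SPARC T5-8 servers, restating the figures above:
# each processor module has two sockets (PCIe root complex pairs), and each
# socket has 16 cores.

SOCKETS_PER_MODULE = 2
CORES_PER_SOCKET = 16

def cores_per_server(processor_modules: int) -> int:
    """Total cores in one SPARC T5-8 server."""
    return processor_modules * SOCKETS_PER_MODULE * CORES_PER_SOCKET

print(cores_per_server(4))   # Full Rack server: 128 cores (8 sockets)
print(cores_per_server(2))   # Half Rack server: 64 cores (4 sockets)
```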
Exadata Storage Servers
The Full Rack version of Oracle SuperCluster T5-8 contains eight Exadata Storage Servers.
The Half Rack version of Oracle SuperCluster T5-8 contains four Exadata Storage Servers.
Redundancy in the Exadata Storage Servers is achieved two ways:
■ Through connections between the Exadata Storage Servers. For more information, see “Understanding the Exadata Storage Server Physical Connections” on page 36.
■ Through components within the Exadata Storage Servers:
  ■ Power supplies – Each Exadata Storage Server contains two power supplies. The Exadata Storage Server can continue to operate normally if one of the power supplies fails, or if one of the power distribution units fails.
  ■ Disk drives – Each Exadata Storage Server contains 12 disk drives, where you can choose between disk drives designed for either high capacity or high performance when you first order Oracle SuperCluster T5-8. Oracle SuperCluster T5-8 software provides redundancy between the 12 disk drives within each Exadata Storage Server. For more information, see “Cluster Software for the Database Domain” on page 85.
ZFS Storage Appliance
Each Oracle SuperCluster T5-8 contains one ZFS storage appliance, in either the Full Rack or
Half Rack version. The ZFS storage appliance consists of the following:
■ Two ZFS storage controllers
■ One Sun Disk Shelf
Redundancy in the ZFS storage appliance is achieved two ways:
■ Through connections from the two ZFS storage controllers to the Sun Disk Shelf. For more information, see “Understanding the ZFS Storage Appliance Physical Connections” on page 39.
■ Through components within the ZFS storage appliance itself:
  ■ Power supplies – Each ZFS storage controller and Sun Disk Shelf contains two power supplies. Each ZFS storage controller and the Sun Disk Shelf can continue to operate normally if one of those power supplies fails.
  ■ Disk drives – Each ZFS storage controller contains two mirrored boot drives, so the controller can still boot up and operate normally if one boot drive fails. The Sun Disk Shelf contains 20 hard disk drives that are used for storage in Oracle SuperCluster T5-8, and 4 solid-state drives that are used as write-optimized cache devices, also known as logzillas. Oracle SuperCluster T5-8 software provides redundancy between the disk drives. For more information, see “Understanding the Software Configurations” on page 46.
Sun Datacenter InfiniBand Switch 36 Switches
Each Oracle SuperCluster T5-8 contains three Sun Datacenter InfiniBand Switch 36 switches,
in either the Full Rack or Half Rack version, two of which are leaf switches (the third is used as
a spine switch to connect two racks together). The two leaf switches are connected to each other
to provide redundancy should one of the two leaf switches fail. In addition, each SPARC T5-8
server, Exadata Storage Server, and ZFS storage controller has connections to both leaf switches
to provide redundancy in the InfiniBand connections should one of the two leaf switches fail.
For more information, see “Understanding the Physical Connections” on page 26.
Cisco Catalyst 4948 Ethernet Management Switch
The Cisco Catalyst 4948 Ethernet management switch contains two power supplies. The Cisco
Catalyst 4948 Ethernet management switch can continue to operate normally if one of those
power supplies fails.
Power Distribution Units
Each Oracle SuperCluster T5-8 contains two power distribution units, in either the Full Rack
or Half Rack version. The components within Oracle SuperCluster T5-8 connect to both power
distribution units, so that power continues to be supplied to those components should one of the
two power distribution units fail. For more information, see “Power Distribution Units Physical
Connections” on page 45.
Understanding the Physical Connections
The following topics describe the physical connections between the components within Oracle
SuperCluster T5-8:
■ “Understanding the SPARC T5-8 Server Physical Connections” on page 26
■ “Understanding the Exadata Storage Server Physical Connections” on page 36
■ “Understanding the ZFS Storage Appliance Physical Connections” on page 39
■ “Power Distribution Units Physical Connections” on page 45
Understanding the SPARC T5-8 Server Physical Connections
These topics provide information on the location of the cards and ports that are used for the
physical connections for the SPARC T5-8 server, as well as information specific to the four sets
of physical connections for the server:
■ “PCIe Slots (SPARC T5-8 Servers)” on page 27
■ “Card Locations (SPARC T5-8 Servers)” on page 29
■ “NET MGT and NET0-3 Port Locations (SPARC T5-8 Servers)” on page 31
Each Oracle SuperCluster T5-8 contains two SPARC T5-8 servers, regardless of whether it is a Full Rack or a Half Rack. The distinguishing factor between the Full Rack and Half Rack versions of Oracle SuperCluster T5-8 is not the number of SPARC T5-8 servers, but the number of processor modules in each SPARC T5-8 server: the Full Rack has four processor modules and the Half Rack has two. See "SPARC T5-8 Servers" on page 24 for more information.
Each SPARC T5-8 server has sixteen PCIe slots:
■ Full Rack – All 16 PCIe slots are accessible, and all 16 PCIe slots are occupied with either InfiniBand HCAs or 10-GbE NICs.
■ Half Rack – All 16 PCIe slots are accessible, but only eight of the 16 PCIe slots are occupied with either InfiniBand HCAs or 10-GbE NICs. The remaining eight PCIe slots are available for optional Fibre Channel PCIe cards.
The following figures show the topology for the Full Rack and Half Rack versions of Oracle
SuperCluster T5-8. Also see “Card Locations (SPARC T5-8 Servers)” on page 29 for more
information.
FIGURE 3   Topology for the Full Rack Version of Oracle SuperCluster T5-8
FIGURE 4   Topology for the Half Rack Version of Oracle SuperCluster T5-8
Card Locations (SPARC T5-8 Servers)
The following figures show the cards that will be used for the physical connections for the
SPARC T5-8 server in the Full Rack and Half Rack versions of Oracle SuperCluster T5-8.
Note - For the Half Rack version of Oracle SuperCluster T5-8, eight of the 16 PCIe slots
are occupied with either InfiniBand HCAs or 10-GbE NICs. However, all 16 PCIe slots are
accessible, so the remaining eight PCIe slots are available for optional Fibre Channel PCIe
cards. See “Using an Optional Fibre Channel PCIe Card” on page 139 for more information.
FIGURE 5   Card Locations (Full Rack)
Figure Legend
1  Dual-port 10-GbE network interface cards, for connection to the 10-GbE client access network (see "10-GbE
Each SPARC T5-8 server contains several dual-ported Sun QDR InfiniBand PCIe Low Profile
host channel adapters (HCAs). The number of InfiniBand HCAs and their locations in the
SPARC T5-8 servers varies, depending on the configuration of Oracle SuperCluster T5-8:
■ Full Rack: Eight InfiniBand HCAs, installed in these PCIe slots:
  ■ PCIe slot 3
  ■ PCIe slot 4
  ■ PCIe slot 7
  ■ PCIe slot 8
  ■ PCIe slot 11
  ■ PCIe slot 12
  ■ PCIe slot 15
  ■ PCIe slot 16
■ Half Rack: Four InfiniBand HCAs, installed in these PCIe slots:
  ■ PCIe slot 3
  ■ PCIe slot 8
  ■ PCIe slot 11
  ■ PCIe slot 16
See “Card Locations (SPARC T5-8 Servers)” on page 29 for more information on the
location of the InfiniBand HCAs.
The two ports in each InfiniBand HCA (ports 1 and 2) connect to a different leaf switch to
provide redundancy between the SPARC T5-8 servers and the leaf switches. The following
figures show how redundancy is achieved with the InfiniBand connections between the SPARC
T5-8 servers and the leaf switches in the Full Rack and Half Rack configurations.
FIGURE 8   InfiniBand Connections for SPARC T5-8 Servers, Full Rack
FIGURE 9   InfiniBand Connections for SPARC T5-8 Servers, Half Rack
Note that only the physical connections for the InfiniBand private network are described in
this section. Once the logical domains are created for each SPARC T5-8 server, the InfiniBand
private network will be configured differently depending on the type of domain created on
the SPARC T5-8 servers. The number of IP addresses needed for the InfiniBand network will
also vary, depending on the type of domains created on each SPARC T5-8 server. For more
information, see “Understanding the Software Configurations” on page 46.
Each SPARC T5-8 server connects to the Oracle Integrated Lights Out Manager (ILOM)
management network through a single Oracle ILOM network port (NET MGT port) at the rear
of each SPARC T5-8 server. One IP address is required for Oracle ILOM management for each
SPARC T5-8 server.
See “NET MGT and NET0-3 Port Locations (SPARC T5-8 Servers)” on page 31 for more
information on the location of the NET MGT port.
Each SPARC T5-8 server connects to the 1-GbE host management network through the four
1-GbE host management ports at the rear of each SPARC T5-8 server (NET0 - NET3 ports).
However, the way the 1-GbE host management connections are used differs from the physical
connections due to logical domains. For more information, see “Understanding the Software
Configurations” on page 46.
See “NET MGT and NET0-3 Port Locations (SPARC T5-8 Servers)” on page 31 for more
information on the location of the 1-GbE host management ports.
Each SPARC T5-8 server contains several dual-ported Sun Dual 10-GbE SFP+ PCIe 2.0 Low
Profile network interface cards (NICs). The number of 10-GbE NICs and their locations in the
SPARC T5-8 servers varies, depending on the configuration of Oracle SuperCluster T5-8:
■ Full Rack: Eight 10-GbE NICs, installed in these PCIe slots:
  ■ PCIe slot 1
  ■ PCIe slot 2
  ■ PCIe slot 5
  ■ PCIe slot 6
  ■ PCIe slot 9
  ■ PCIe slot 10
  ■ PCIe slot 13
  ■ PCIe slot 14
■ Half Rack: Four 10-GbE NICs, installed in these PCIe slots:
  ■ PCIe slot 1
  ■ PCIe slot 6
  ■ PCIe slot 9
  ■ PCIe slot 14
See “Card Locations (SPARC T5-8 Servers)” on page 29 for the location of the 10-GbE
NICs.
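The slot assignments listed above for the InfiniBand HCAs and the 10-GbE NICs can be collected into a single view. The following Python sketch only restates the slot numbers given in this section as a lookup table; the dictionary layout and helper function are illustrative, and the Half Rack's free slots are the ones noted earlier as available for optional Fibre Channel PCIe cards.

```python
# PCIe slot population for each SPARC T5-8 server, restating the slot numbers
# listed in this section.

FULL_RACK_SLOTS = {
    "IB_HCA":    [3, 4, 7, 8, 11, 12, 15, 16],   # eight InfiniBand HCAs
    "10GbE_NIC": [1, 2, 5, 6, 9, 10, 13, 14],    # eight 10-GbE NICs
}

HALF_RACK_SLOTS = {
    "IB_HCA":    [3, 8, 11, 16],                 # four InfiniBand HCAs
    "10GbE_NIC": [1, 6, 9, 14],                  # four 10-GbE NICs
}

def free_slots(slot_map: dict, total_slots: int = 16) -> list:
    """Slots not occupied by IB HCAs or 10-GbE NICs (in the Half Rack these
    are the slots available for optional Fibre Channel PCIe cards)."""
    used = {slot for slots in slot_map.values() for slot in slots}
    return sorted(set(range(1, total_slots + 1)) - used)

print(free_slots(FULL_RACK_SLOTS))   # [] - all 16 slots occupied
print(free_slots(HALF_RACK_SLOTS))   # [2, 4, 5, 7, 10, 12, 13, 15]
```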
Depending on the configuration, one or two of the ports on the 10-GbE NICs (ports 0 and 1)
will be connected to the client access network. In some configurations, both ports on the same
10-GbE NIC will be part of an IPMP group to provide redundancy and increased bandwidth. In
other configurations, one port from two separate 10-GbE NICs will be part of an IPMP group.
The number of physical connections to the 10-GbE client access network varies, depending
on the type of domains created on each SPARC T5-8 server. For more information, see
“Understanding the Software Configurations” on page 46.
Understanding the Exadata Storage Server Physical Connections
Each Exadata Storage Server contains three sets of physical connections:
Each Exadata Storage Server contains one dual-ported Sun QDR InfiniBand PCIe Low Profile
host channel adapter (HCA). The two ports in the InfiniBand HCA are bonded together to
increase available bandwidth. When bonded, the two ports appear as a single port, with a single
IP address assigned to the two bonded ports, resulting in one IP address for InfiniBand private
network connections for each Exadata Storage Server.
The two ports in the InfiniBand HCA connect to a different leaf switch to provide redundancy
between the Exadata Storage Servers and the leaf switches. The following figures show how
redundancy is achieved with the InfiniBand connections between the Exadata Storage Servers
and the leaf switches in the Full Rack and Half Rack configurations.
FIGURE 10   InfiniBand Connections for Exadata Storage Servers, Full Rack
FIGURE 11   InfiniBand Connections for Exadata Storage Servers, Half Rack
Each Exadata Storage Server connects to the Oracle ILOM management network through a
single Oracle ILOM network port (NET MGT port) at the rear of each Exadata Storage Server.
One IP address is required for Oracle ILOM management for each Exadata Storage Server.
Each Exadata Storage Server connects to the 1-GbE host management network through the 1-GbE host management port (NET0 port) at the rear of each Exadata Storage Server. One IP address is required for 1-GbE host management for each Exadata Storage Server.
Understanding the ZFS Storage Appliance Physical Connections
The ZFS storage appliance has five sets of physical connections:
The ZFS storage appliance connects to the InfiniBand private network through one of the
two ZFS storage controllers. The ZFS storage controller contains one Sun Dual Port 40Gb
InfiniBand QDR HCA. The two ports in each InfiniBand HCA are bonded together to increase
available bandwidth. When bonded, the two ports appear as a single port, with a single IP
address assigned to the two bonded ports, resulting in one IP address for InfiniBand private
network connections for the ZFS storage controller.
The two ports in the InfiniBand HCA connect to a different leaf switch to provide redundancy
between the ZFS storage controller and the leaf switches. The following figures show how
redundancy is achieved with the InfiniBand connections between the ZFS storage controller and
the leaf switches in the Full Rack and Half Rack configurations.
FIGURE 12   InfiniBand Connections for ZFS Storage Controllers, Full Rack
FIGURE 13   InfiniBand Connections for ZFS Storage Controllers, Half Rack
The ZFS storage appliance connects to the Oracle ILOM management network through
the two ZFS storage controllers. Each storage controller connects to the Oracle ILOM
management network through the NET0 port at the rear of each storage controller using
sideband management. One IP address is required for Oracle ILOM management for each
storage controller.
The ZFS storage appliance connects to the 1-GbE host management network through the
two ZFS storage controllers. The storage controllers connect to the 1-GbE host management
network through the following ports at the rear of each storage controller:
■ NET0 on the first storage controller (installed in slot 25 in the rack)
■ NET1 on the second storage controller (installed in slot 26 in the rack)
One IP address is required for 1-GbE host management for each storage controller.
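The management addresses called out in the preceding sections can be tallied per rack. The Python sketch below is only a bookkeeping illustration of the counts stated in this chapter (one Oracle ILOM IP address per SPARC T5-8 server, Exadata Storage Server, and ZFS storage controller; one 1-GbE host management IP address per Exadata Storage Server and ZFS storage controller). It intentionally omits the SPARC T5-8 host management and client access addresses, because those depend on the logical domain configuration, and the function name is hypothetical.

```python
# Management IP bookkeeping for one rack, based on the per-component
# requirements stated in this chapter.

def management_ip_counts(exadata_cells: int) -> dict:
    sparc_servers = 2      # both the Full Rack and the Half Rack have two
    zfs_controllers = 2
    return {
        # One Oracle ILOM IP per SPARC T5-8 server, Exadata Storage Server,
        # and ZFS storage controller.
        "ilom": sparc_servers + exadata_cells + zfs_controllers,
        # One 1-GbE host management IP per Exadata Storage Server and
        # ZFS storage controller (SPARC T5-8 host management excluded).
        "host_mgmt_1gbe": exadata_cells + zfs_controllers,
    }

print(management_ip_counts(exadata_cells=8))  # Full Rack: {'ilom': 12, 'host_mgmt_1gbe': 10}
print(management_ip_counts(exadata_cells=4))  # Half Rack: {'ilom': 8, 'host_mgmt_1gbe': 6}
```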
SAS Physical Connections (ZFS Storage Appliance)
Each ZFS storage controller is populated with a dual-port SAS-2 HBA card. The Sun Disk
Shelf also has two SIM Link In and two SIM Link Out ports. The two storage controllers
connect to the Sun Disk Shelf in the following manner:
■ Storage controller 1 – Both ports from the SAS-2 HBA card connect to the SIM Link Out ports on the Sun Disk Shelf.
■ Storage controller 2 – Both ports from the SAS-2 HBA card connect to the SIM Link In ports on the Sun Disk Shelf.
The following figures show the SAS connections between the two storage controllers and the
Sun Disk Shelf.
FIGURE 14   SAS Connections for the Sun ZFS Storage 7320 Storage Appliance
Figure Legend
1  Storage controller 1
2  Storage controller 2
3  Sun Disk Shelf
FIGURE 15   SAS Connections for the Oracle ZFS Storage ZS3-ES Storage Appliance
Each ZFS storage controller contains a single cluster card. The cluster cards in the storage
controllers are cabled together as shown in the following figure. This allows a heartbeat signal
to pass between the storage controllers to determine if both storage controllers are up and
running.
Power Distribution Units Physical Connections
Oracle SuperCluster T5-8 contains two power distribution units. Each component in Oracle
SuperCluster T5-8 has redundant connections to the two power distribution units:
■ SPARC T5-8 servers – Each SPARC T5-8 server has four AC power connectors. Two AC power connectors connect to one power distribution unit, and the other two AC power connectors connect to the other power distribution unit.
■ Exadata Storage Servers – Each Exadata Storage Server has two AC power connectors. One AC power connector connects to one power distribution unit, and the other AC power connector connects to the other power distribution unit.
■ ZFS storage controllers – Each ZFS storage controller has two AC power connectors. One AC power connector connects to one power distribution unit, and the other AC power connector connects to the other power distribution unit.
■ Sun Disk Shelf – The Sun Disk Shelf has two AC power connectors. One AC power connector connects to one power distribution unit, and the other AC power connector connects to the other power distribution unit.
■ Sun Datacenter InfiniBand Switch 36 switches – Each Sun Datacenter InfiniBand Switch 36 switch has two AC power connectors. One AC power connector connects to one power distribution unit, and the other AC power connector connects to the other power distribution unit.
■ Cisco Catalyst 4948 Ethernet management switch – The Cisco Catalyst 4948 Ethernet management switch has two AC power connectors. One AC power connector connects to one power distribution unit, and the other AC power connector connects to the other power distribution unit.
Understanding the Software Configurations
Oracle SuperCluster T5-8 is set up with logical domains (LDoms), which provide users with the
flexibility to create different specialized virtual systems within a single hardware platform.
The following topics provide more information on the configurations available to you:
■ “Understanding Domains” on page 46
■ “Understanding General Configuration Information” on page 58
■ “Understanding Half Rack Configurations” on page 61
■ “Understanding Full Rack Configurations” on page 69
Understanding Domains
The number of domains supported on each SPARC T5-8 server depends on the type of Oracle
SuperCluster T5-8:
■ Half Rack version of Oracle SuperCluster T5-8: One to four domains
■ Full Rack version of Oracle SuperCluster T5-8: One to eight domains
These topics describe the domain types:
■ “Dedicated Domains” on page 46
■ “Understanding SR-IOV Domain Types” on page 48
Dedicated Domains
The following SuperCluster-specific domain types have always been available:
■ Application Domain running Oracle Solaris 10
■ Application Domain running Oracle Solaris 11
■ Database Domain
These SuperCluster-specific domain types have been available in software version 1.x and are
now known as dedicated domains.
Note - The Database Domains can also be in two states: with zones or without zones.
When a SuperCluster is set up as part of the initial installation, each domain is assigned one
of these three SuperCluster-specific dedicated domain types. With these dedicated domains,
every domain in a SuperCluster has direct access to the 10GbE NICs and IB HCAs (and Fibre
Channel cards, if those are installed in the card slots). The following graphic shows this concept
on a SuperCluster with four domains.
With dedicated domains, connections to the 10GbE client access network go through the
physical ports on each 10GbE NIC, and connections to the IB network go through the physical
ports on each IB HCA, as shown in the following graphic.
With dedicated domains, the domain configuration for a SuperCluster (the number of domains and the SuperCluster-specific types assigned to each) is set at the time of the initial installation, and can only be changed by an Oracle representative.
Understanding SR-IOV Domain Types
In addition to the dedicated domain types (Database Domains and Application Domains running either Oracle Solaris 10 or Oracle Solaris 11), the following version 2.x SR-IOV (Single-Root I/O Virtualization) domain types are now also available:
■ “Root Domains” on page 48
■ “I/O Domains” on page 52
Root Domains
A Root Domain is an SR-IOV domain that hosts the physical I/O devices, or physical functions
(PFs), such as the IB HCAs and 10GbE NICs installed in the PCIe slots. Almost all of its CPU
and memory resources are parked for later use by I/O Domains. Logical devices, or virtual
functions (VFs), are created from each PF, with each PF hosting 32 VFs.
Because Root Domains host the physical I/O devices, just as dedicated domains currently do,
Root Domains essentially exist at the same level as dedicated domains.
With the introduction of Root Domains, the following parts of the domain configuration for a
SuperCluster are set at the time of the initial installation and can only be changed by an Oracle
representative:
■ Type of domain:
  ■ Root Domain
  ■ Application Domain running Oracle Solaris 10 (dedicated domain)
  ■ Application Domain running Oracle Solaris 11 (dedicated domain)
  ■ Database Domain (dedicated domain)
■ Number of Root Domains and dedicated domains on the server
A domain can only be a Root Domain if it has either one or two IB HCAs associated with it.
A domain cannot be a Root Domain if it has more than two IB HCAs associated with it. If
you have a domain that has more than two IB HCAs associated with it (for example, the H1-1
domain in an Oracle SuperCluster T5-8), then that domain must be a dedicated domain.
When deciding which domains will be Root Domains, the last domain must always be the first Root Domain, and you then work inward from the last domain in your configuration for each additional Root Domain. For example, assume you have four domains in your configuration, and you want two Root Domains and two dedicated domains. In this case, the first two domains would be dedicated domains and the last two domains would be Root Domains.
Note - Even though a domain with two IB HCAs is valid for a Root Domain, domains with
only one IB HCA should be used as Root Domains. When a Root Domain has a single IB
HCA, fewer I/O Domains have dependencies on the I/O devices provided by that Root Domain.
Flexibility around high availability also increases with Root Domains with one IB HCA.
The following domains have only one or two IB HCAs associated with them and can therefore
be used as a Root Domain:
■ Small Domains (one IB HCA)
■ Medium Domains (two IB HCAs)
In addition, the first domain in the system (the Control Domain) will always be a dedicated
domain. The Control Domain cannot be a Root Domain. Therefore, you cannot have all of the
domains on your server as Root Domains, but you can have a mixture of Root Domains and
dedicated domains on your server or all of the domains as dedicated domains.
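The placement rules described above (the first domain is always a dedicated Control Domain, Root Domains are assigned starting from the last domain and working inward, and a Root Domain may have at most two IB HCAs) can be expressed as a simple check. The Python sketch below is only an illustration of those rules; the function name and the example layouts are hypothetical.

```python
# Illustration of the Root Domain placement rules described above.
# Each domain is given as (domain_type, ib_hca_count); index 0 is the
# Control Domain.

def valid_root_domain_layout(domains) -> bool:
    """Return True if the layout follows the rules in this section."""
    types = [d[0] for d in domains]
    # The first domain (the Control Domain) must be a dedicated domain.
    if types[0] == "root":
        return False
    # A Root Domain may have only one or two IB HCAs.
    if any(t == "root" and hcas > 2 for t, hcas in domains):
        return False
    # Root Domains start from the last domain and work inward, so they
    # must form a contiguous block at the end of the configuration.
    if "root" in types:
        first_root = types.index("root")
        if any(t != "root" for t in types[first_root:]):
            return False
    return True

# Two dedicated domains followed by two Root Domains, each Root Domain
# with one IB HCA (a hypothetical four-domain configuration).
print(valid_root_domain_layout(
    [("dedicated", 2), ("dedicated", 2), ("root", 1), ("root", 1)]))   # True
# A Root Domain in the Control Domain position is not allowed.
print(valid_root_domain_layout(
    [("root", 1), ("dedicated", 2), ("dedicated", 2), ("root", 1)]))   # False
```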
A certain amount of CPU core and memory is always reserved for each Root Domain,
depending on which domain is being used as a Root Domain in the domain configuration and
the number of IB HCAs and 10GbE NICs that are associated with that Root Domain:
■ The last domain in a domain configuration:
  ■ Two cores and 32 GB of memory reserved for a Root Domain with one IB HCA and 10GbE NIC
  ■ Four cores and 64 GB of memory reserved for a Root Domain with two IB HCAs and 10GbE NICs
■ Any other domain in a domain configuration:
  ■ One core and 16 GB of memory reserved for a Root Domain with one IB HCA and 10GbE NIC
  ■ Two cores and 32 GB of memory reserved for a Root Domain with two IB HCAs and 10GbE NICs
Note - The amount of CPU core and memory reserved for Root Domains is sufficient to support
only the PFs in each Root Domain. There is insufficient CPU core or memory resources to
support zones or applications in Root Domains, so zones and applications are supported only in
the I/O Domains.
The remaining CPU core and memory resources associated with each Root Domain are parked
in CPU and memory repositories, as shown in the following graphic.
CPU and memory repositories contain resources not only from the Root Domains, but also
any parked resources from the dedicated domains. Whether CPU core and memory resources
originated from dedicated domains or from Root Domains, once those resources have been
parked in the CPU and memory repositories, those resources are no longer associated with their
originating domain. These resources become equally available to I/O Domains.
In addition, CPU and memory repositories contain parked resources only from the compute
server that contains the domains providing those parked resources. In other words, if you have
two compute servers and both compute servers have Root Domains, there would be two sets
of CPU and memory repositories, where each compute server would have its own CPU and
memory repositories with parked resources.
For example, assume you have four domains on your compute server, with three of the four
domains as Root Domains, as shown in the previous graphic. Assume each domain has the
following IB HCAs and 10GbE NICs, and the following CPU core and memory resources:
■ One IB HCA and one 10GbE NIC
■ 16 cores
■ 256 GB of memory
In this situation, the following CPU core and memory resources are reserved for each Root
Domain, with the remaining resources available for the CPU and memory repositories:
■ Two cores and 32 GB of memory reserved for the last Root Domain in this configuration. 14 cores and 224 GB of memory available from this Root Domain for the CPU and memory repositories.
■ One core and 16 GB of memory reserved for the second and third Root Domains in this configuration.
  ■ 15 cores and 240 GB of memory available from each of these Root Domains for the CPU and memory repositories.
  ■ A total of 30 cores (15 x 2) and 480 GB of memory (240 GB x 2) available for the CPU and memory repositories from these two Root Domains.
A total of 44 cores (14 + 30 cores) are therefore parked in the CPU repository, and 704 GB of
memory (224 + 480 GB of memory) are parked in the memory repository and are available for
the I/O Domains.
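The parked totals in this example follow mechanically from the reservation rules listed earlier. The following Python sketch encodes the one-IB-HCA reservation rule and reproduces the example's 44 cores and 704 GB; it is only a restatement of the worked example above, not a sizing tool.

```python
# Reproduce the worked example above: four domains, each with 16 cores and
# 256 GB of memory; three of them are Root Domains with one IB HCA and one
# 10GbE NIC each.  Reservation rule for that case:
#   - last domain in the configuration: 2 cores and 32 GB reserved
#   - any other domain:                 1 core and 16 GB reserved

CORES_PER_DOMAIN = 16
MEMORY_PER_DOMAIN_GB = 256

def parked_resources(num_domains: int, root_domain_indexes) -> tuple:
    parked_cores = parked_memory_gb = 0
    for i in root_domain_indexes:
        is_last = (i == num_domains - 1)
        reserved_cores = 2 if is_last else 1
        reserved_memory_gb = 32 if is_last else 16
        parked_cores += CORES_PER_DOMAIN - reserved_cores
        parked_memory_gb += MEMORY_PER_DOMAIN_GB - reserved_memory_gb
    return parked_cores, parked_memory_gb

# Domain 0 is a dedicated domain; domains 1, 2, and 3 are Root Domains.
print(parked_resources(4, [1, 2, 3]))   # (44, 704): 44 cores and 704 GB parked
```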
With Root Domains, connections to the 10GbE client access network go through the physical
ports on each 10GbE NIC, and connections to the IB network go through the physical ports
on each IB HCA, just as they did with dedicated domains. However, cards used with Root
Domains must also be SR-IOV compliant. SR-IOV compliant cards enable VFs to be created on
each card, where the virtualization occurs in the card itself.
The VFs from each Root Domain are parked in the IB VF and 10GbE VF repositories, similar
to the CPU and memory repositories, as shown in the following graphic.
Even though the VFs from each Root Domain are parked in the VF repositories, the VFs are
created on each 10GbE NIC and IB HCA, so those VFs are associated with the Root Domain
that contains those specific 10GbE NIC and IB HCA cards. For example, looking at the
example configuration in the previous graphic, the VFs created on the last (right most) 10GbE
NIC and IB HCA will be associated with the last Root Domain.
I/O Domains
An I/O Domain is an SR-IOV domain that owns its own VFs, each of which is a virtual device
based on a PF in one of the Root Domains. Root Domains function solely as providers of VFs
to the I/O Domains, based on the physical I/O devices associated with each Root Domain.
Applications and zones are supported only in I/O Domains, not in Root Domains.
You can create multiple I/O Domains using the I/O Domain Creation tool. As part of the
domain creation process, you also associate one of the following SuperCluster-specific domain
types to each I/O Domain:
■ Application Domain running Oracle Solaris 11
■ Database Domain
Note that only Database Domains that are dedicated domains can host database zones. Database
I/O Domains cannot host database zones.
The CPU cores and memory resources owned by an I/O Domain are assigned from the CPU and
memory repositories (the cores and memory released from Root Domains on the system) when
an I/O Domain is created, as shown in the following graphic.
You use the I/O Domain Creation tool to assign the CPU core and memory resources to the I/O
Domains, based on the amount of CPU core and memory resources that you want to assign to
each I/O Domain and the total amount of CPU core and memory resources available in the CPU
and memory repositories.
Similarly, the IB VFs and 10GbE VFs owned by the I/O Domains come from the IB VF and
10GbE VF repositories (the IB VFs and 10GbE VFs released from Root Domains on the
system), as shown in the following graphic.
Again, you use the I/O Domain Creation tool to assign the IB VFs and 10GbE VFs to the I/O Domains using the resources available in the IB VF and 10GbE VF repositories. However,
because VFs are created on each 10GbE NIC and IB HCA, the VFs assigned to an I/O Domain
will always come from the specific Root Domain that is associated with the 10GbE NIC and IB
HCA cards that contain those VFs.
The number and size of the I/O Domains that you can create depends on several factors,
including the amount of CPU core and memory resources that are available in the CPU and
memory repositories and the amount of CPU core and memory resources that you want to
assign to each I/O Domain. However, while it is useful to know the total amount of resources that are parked in the repositories, that total does not necessarily translate into the maximum number of I/O Domains that you can create for your system. In addition, you should not create
an I/O Domain that uses more than one socket's worth of resources.
For example, assume that you have 44 cores parked in the CPU repository and 704 GB of memory parked in the memory repository. You could therefore create I/O Domains in any of the following ways (see the sketch after this list):
■ One or more large I/O Domains, with each large I/O Domain using one socket's worth of resources (for example, 16 cores and 256 GB of memory)
■ One or more medium I/O Domains, with each medium I/O Domain using four cores and 64 GB of memory
■ One or more small I/O Domains, with each small I/O Domain using one core and 16 GB of memory
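Continuing the same example, the parked totals place a simple upper bound on how many I/O Domains of each size the repositories could supply; as the next paragraph notes, other limits (such as per-user limits or the system-wide I/O Domain limit) may be reached first. The following Python sketch computes only that resource-pool bound, using the three sizes listed above.

```python
# Upper bound on I/O Domain counts imposed purely by the parked resources
# in the example above (44 cores, 704 GB).  Other limits may apply first.

PARKED_CORES = 44
PARKED_MEMORY_GB = 704

IO_DOMAIN_SIZES = {
    "large":  (16, 256),   # one socket's worth of resources
    "medium": (4, 64),
    "small":  (1, 16),
}

for name, (cores, memory_gb) in IO_DOMAIN_SIZES.items():
    bound = min(PARKED_CORES // cores, PARKED_MEMORY_GB // memory_gb)
    print(f"{name}: at most {bound} I/O Domains from the parked pool")
# large: at most 2, medium: at most 11, small: at most 44
```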
When you go through the process of creating I/O Domains, at some point, the I/O Domain
Creation tool will inform you that you cannot create additional I/O Domains. This could be due
to several factors, such as reaching the limit of total CPU core and memory resources in the
CPU and memory repositories, reaching the limit of resources available specifically to you as a
user, or reaching the limit on the number of I/O Domains allowable for this system.
Note - The following examples describe how resources might be divided up between domains using percentages to make the conceptual information easier to understand. However, you actually divide CPU core and memory resources between domains at a socket granularity or core granularity level. See "Configuring CPU and Memory Resources (osc-setcoremem)" on page 170 for more information.
As an example configuration showing how you might assign CPU and memory resources to
each domain, assume that you have a domain configuration where one of the domains is a Root
Domain, and the other three domains are dedicated domains, as shown in the following figure.
Even though dedicated domains and Root Domains are all shown as equal-sized domains
in the preceding figure, that does not mean that CPU core and memory resources must be
split evenly across all four domains (where each domain would get 25% of the CPU core and
memory resources). Using information that you provide in the configuration worksheets, you
can request different sizes of CPU core and memory resources for each domain when your
Oracle SuperCluster T5-8 is initially installed.
For example, you could request that each dedicated domain have 30% of the CPU core and
memory resources (for a total of 90% of the CPU cores and memory resources allocated to
the three dedicated domains), and the remaining 10% allocated to the single Root Domain.
Having this configuration would mean that only 10% of the CPU core and memory resources
are available for I/O Domains to pull from the CPU and memory repositories. However, you
could also request that some of the resources from the dedicated domains be parked at the time
of the initial installation of your system, which would further increase the amount of CPU core
and memory resources available for I/O Domains to pull from the repositories.
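As a rough illustration of how such a percentage request maps to actual resources, the following Python sketch assumes a four-socket server and the one-socket figures used earlier (16 cores and 256 GB per socket); the domain names are hypothetical, and the actual allocation is made at socket or core granularity by the installation tools.

# Illustrative conversion of the percentage example above into core and
# memory figures, assuming a four-socket server with 16 cores and 256 GB
# per socket (the one-socket figures used earlier in this section).
CORES_PER_SOCKET, MEM_PER_SOCKET_GB, SOCKETS = 16, 256, 4
TOTAL_CORES = CORES_PER_SOCKET * SOCKETS        # 64 cores
TOTAL_MEM_GB = MEM_PER_SOCKET_GB * SOCKETS      # 1024 GB

# 30% to each dedicated domain, 10% to the Root Domain.
split = {"dedicated1": 0.30, "dedicated2": 0.30,
         "dedicated3": 0.30, "root": 0.10}

for domain, share in split.items():
    cores = round(share * TOTAL_CORES)          # real allocations are made at
    mem_gb = round(share * TOTAL_MEM_GB)        # socket or core granularity
    print(f"{domain}: about {cores} cores, {mem_gb} GB")
# Only the 10% associated with the Root Domain (about 6 cores and 102 GB
# here) is available to I/O Domains through the repositories, unless
# additional resources are parked from the dedicated domains.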
You could also use the CPU/Memory tool after the initial installation to resize the amount of
CPU core and memory resources used by the existing domains, depending on the configuration
that you chose at the time of your initial installation:
■
If all of the domains on your compute server are dedicated domains, you can use the CPU/
Memory tool to resize the amount of CPU core and memory resources used by those
domains. However, you must reboot those resized dedicated domains if you change the
amount of resources using the CPU/Memory tool.
■
If you have a mixture of dedicated domains and Root Domains on your compute server:
■
For the dedicated domains, you can use the CPU/Memory tool to resize the amount of
CPU core and memory resources used by those dedicated domains. You can also use the
tool to park some of the CPU core and memory resources from the dedicated domains,
which would park those resources in the CPU and Memory repositories, making them
available for the I/O Domains. However, you must reboot those resized dedicated
domains if you change the amount of resources using the CPU/Memory tool.
■
For the Root Domains, you cannot resize the amount of CPU core and memory
resources for any of the Root Domains after the initial installation. Whatever resources
that you asked to have assigned to the Root Domains at the time of initial installation are
set and cannot be changed unless you have the Oracle installer come back out to your
site to reconfigure your system.
See “Configuring CPU and Memory Resources (osc-setcoremem)” on page 170 for more
information.
Assume you have a mixture of dedicated domains and Root Domains as mentioned earlier,
where each dedicated domain has 30% of the CPU core and memory resources (total of 90%
resources allocated to dedicated domains), and the remaining 10% allocated to the Root
Domain. You could then make the following changes to the resource allocation, depending on
your situation:
■
If you are satisfied with the amount of CPU core and memory resources allocated to the
Root Domain, but you find that one dedicated domain needs more resources while another
needs less, you could reallocate the resources between the three dedicated domains (for
example, having 40% for the first dedicated domain, 30% for the second, and 20% for the
third), as long as the total amount of resources add up to the total amount available for all
the dedicated domains (in this case, 90% of the resources).
■
If you find that the amount of CPU core and memory resources allocated to the Root
Domain is insufficient, you could park resources from the dedicated domains, which would
park those resources in the CPU and Memory repositories, making them available for the I/
O Domains. For example, if you find that you need 20% of the resources for I/O Domains
created through the Root Domain, you could park 10% of the resources from one or more
of the dedicated domains, which would increase the amount of resources in the CPU and
Memory repositories by that amount for the I/O Domains.
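The constraints described in these two scenarios can be restated in a short sketch. It is illustrative only; the function and share values are hypothetical, and it simply encodes the rule that the shares must account for all of the server's resources, with parked resources joining the Root Domain's share in the repositories.

# Illustrative check of the reallocation rules described above: dedicated
# domains can trade shares among themselves as long as their combined total
# is unchanged, and parking moves a dedicated domain's share into the
# repositories that I/O Domains draw on.
def check_reallocation(dedicated, root=0.10, parked=0.0):
    """dedicated: list of shares requested for the dedicated domains."""
    total = sum(dedicated) + root + parked
    assert abs(total - 1.0) < 1e-9, "shares must account for all resources"
    return {"dedicated": dedicated, "root": root,
            "repository_share": root + parked}

# Trade shares between dedicated domains (still 90% in total).
print(check_reallocation([0.40, 0.30, 0.20]))
# Park 10% from the dedicated domains so I/O Domains can draw on 20%.
print(check_reallocation([0.30, 0.30, 0.20], parked=0.10))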
Understanding General Configuration Information
In order to fully understand the different configuration options that are available for Oracle
SuperCluster T5-8, you must first understand the basic concepts for the PCIe slots and the
different networks that are used for the system.
■
“Logical Domains and the PCIe Slots Overview” on page 58
■
“Management Network Overview” on page 59
■
“10-GbE Client Access Network Overview” on page 59
■
“InfiniBand Network Overview” on page 59
Logical Domains and the PCIe Slots Overview
Each SPARC T5-8 server in Oracle SuperCluster T5-8 has sixteen PCIe slots. The following
cards are installed in certain PCIe slots and are used to connect to these networks:
■
10-GbE network interface cards (NICs) – Used to connect to the 10-GbE client access
network
■
InfiniBand host channel adapters (HCAs) – Used to connect to the private InfiniBand
network
See “PCIe Slots (SPARC T5-8 Servers)” on page 27 and “Card Locations (SPARC T5-8
Servers)” on page 29 for more information.
Optional Fibre Channel PCIe cards are also available to facilitate migration of data from legacy
storage subsystems to the Exadata Storage Servers integrated with Oracle SuperCluster T5-8
for Database Domains, or to access SAN-based storage for the Application Domains. The PCIe
slots that are available for those optional Fibre Channel PCIe cards will vary, depending on
your configuration. See “Using an Optional Fibre Channel PCIe Card” on page 139 for more
information.
Note - If you have the Full Rack version of Oracle SuperCluster T5-8, you cannot install a Fibre
Channel PCIe card in a slot that is associated with a Small Domain. See “Understanding Small
Domains (Full Rack)” on page 80 for more information.
The PCIe slots used for each configuration vary, depending on the type and number of logical
domains that are used for that configuration.
Management Network Overview
The management network connects to your existing management network, and is used for
administrative work. Each SPARC T5-8 server provides access to the following management
networks:
■
Oracle Integrated Lights Out Manager (ILOM) management network – Connected through
the Oracle ILOM Ethernet interface on each SPARC T5-8 server. Connections to this
network are the same, regardless of the type of configuration that is set up on the SPARC
T5-8 server.
■
1-GbE host management network – Connected through the four 1-GbE host management
interfaces (NET0 - NET3) on each SPARC T5-8 server. Connections to this network will
vary, depending on the type of configuration that is set up on the system. In most cases, the
four 1-GbE host management ports at the rear of the SPARC T5-8 servers use IP network
multipathing (IPMP) to provide redundancy for the management network interfaces to the
logical domains. However, the ports that are grouped together, and whether IPMP is used,
varies depending on the type of configuration that is set up on the SPARC T5-8 server.
10-GbE Client Access Network Overview
This required 10-GbE network connects the SPARC T5-8 servers to your existing client
network and is used for client access to the servers. 10-GbE NICs installed in the PCIe slots are
used for connection to this network. The number of 10-GbE NICs and the PCIe slots that they
are installed in varies depending on the type of configuration that is set up on the SPARC T5-8
server.
InfiniBand Network Overview
The InfiniBand network connects the SPARC T5-8 servers, ZFS storage appliance, and Exadata
Storage Servers using the InfiniBand switches on the rack. This non-routable network is fully
contained in Oracle SuperCluster T5-8, and does not connect to your existing network.
When Oracle SuperCluster T5-8 is configured with the appropriate types of domains, the
InfiniBand network is partitioned to define the data paths between the SPARC T5-8 servers, and
between the SPARC T5-8 servers and the storage appliances.
The defined InfiniBand data path coming out of the SPARC T5-8 servers varies, depending on
the type of domain created on each SPARC T5-8 server:
■
“InfiniBand Network Data Paths for a Database Domain” on page 60
■
“InfiniBand Network Data Paths for an Application Domain” on page 60
InfiniBand Network Data Paths for a Database Domain
Note - The information in this section applies to a Database Domain that is either a dedicated
domain or a Database I/O Domain.
When a Database Domain is created on a SPARC T5-8 server, the Database Domain has the
following InfiniBand paths:
■
SPARC T5-8 server to both Sun Datacenter InfiniBand Switch 36 leaf switches
■
SPARC T5-8 server to each Exadata Storage Server, through the Sun Datacenter InfiniBand
Switch 36 leaf switches
■
SPARC T5-8 server to the ZFS storage appliance, through the Sun Datacenter InfiniBand
Switch 36 leaf switches
The number of InfiniBand HCAs that are assigned to the Database Domain varies, depending
on the type of configuration that is set up on the SPARC T5-8 server.
For the InfiniBand HCAs assigned to a Database Domain, the following InfiniBand private
networks are used:
■
Storage private network: One InfiniBand private network for the Database Domains to
communicate with each other and with the Application Domains running Oracle Solaris 10,
and with the ZFS storage appliance
■
Exadata private network: One InfiniBand private network for the Oracle Database 11g
Real Application Clusters (Oracle RAC) interconnects, and for communication between the
Database Domains and the Exadata Storage Servers
The two ports on each InfiniBand HCA connect to different Sun Datacenter InfiniBand
Switch 36 leaf switches to provide redundancy between the SPARC T5-8 servers and the
leaf switches. For more information on the physical connections for the SPARC T5-8 servers
to the leaf switches, see “InfiniBand Private Network Physical Connections (SPARC T5-8
Servers)” on page 32.
InfiniBand Network Data Paths for an Application Domain
Note - The information in this section applies to an Application Domain that is either a
dedicated domain or an Application I/O Domain.
When an Application Domain is created on a SPARC T5-8 server (an Application Domain
running either Oracle Solaris 10 or Oracle Solaris 11), the Application Domain has the
following InfiniBand paths:
■
SPARC T5-8 server to both Sun Datacenter InfiniBand Switch 36 leaf switches
■
SPARC T5-8 server to the ZFS storage appliance, through the Sun Datacenter InfiniBand
Switch 36 leaf switches
Note that the Application Domain would not access the Exadata Storage Servers, which are
used only for the Database Domain.
The number of InfiniBand HCAs that are assigned to the Application Domain varies, depending
on the type of configuration that is set up on the SPARC T5-8 server.
For the InfiniBand HCAs assigned to an Application Domain, the following InfiniBand private
networks are used:
■
Storage private network: One InfiniBand private network for Application Domains to
communicate with each other and with the Database Domains, and with the ZFS storage
appliance
■
Oracle Solaris Cluster private network: Two InfiniBand private networks for the optional
Oracle Solaris Cluster interconnects
The two ports on each InfiniBand HCA will connect to different Sun Datacenter InfiniBand
Switch 36 leaf switches to provide redundancy between the SPARC T5-8 servers and the
leaf switches. For more information on the physical connections for the SPARC T5-8 servers
to the leaf switches, see “InfiniBand Private Network Physical Connections (SPARC T5-8
Servers)” on page 32.
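As a compact summary of the two preceding sections, the following sketch records which InfiniBand private networks each type of domain uses. The dictionary and names are illustrative only and are not part of any SuperCluster tool.

# InfiniBand private networks used by each domain type, as described in
# the two preceding sections (names are illustrative).
IB_PRIVATE_NETWORKS = {
    "database_domain": [
        "storage",            # other domains and the ZFS storage appliance
        "exadata",            # Oracle RAC interconnect and Exadata Storage Servers
    ],
    "application_domain": [
        "storage",            # other domains and the ZFS storage appliance
        "solaris_cluster_1",  # optional Oracle Solaris Cluster interconnect
        "solaris_cluster_2",  # optional Oracle Solaris Cluster interconnect
    ],
}

def networks_for(domain_type):
    return IB_PRIVATE_NETWORKS[domain_type]

print(networks_for("application_domain"))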
Understanding Half Rack Configurations
In the Half Rack version of Oracle SuperCluster T5-8, each SPARC T5-8 server includes two
processor modules (PM0 and PM3), with two sockets or PCIe root complex pairs on each
processor module, for a total of four sockets or PCIe root complex pairs for each SPARC T5-8
server. You can therefore have from one to four logical domains on each SPARC T5-8 server in
a Half Rack.
These topics provide information on the domain configurations available for the Half Rack:
■
“Logical Domain Configurations and PCIe Slot Mapping (Half Rack)” on page 62
■
“Understanding Large Domains (Half Rack)” on page 63
■
“Understanding Medium Domains (Half Rack)” on page 65
■
“Understanding Small Domains (Half Rack)” on page 67
Logical Domain Configurations and PCIe Slot Mapping (Half Rack)
The following figure provides information on the available configurations for the Half Rack.
It also provides information on the PCIe slots and the InfiniBand (IB) HCAs or 10-GbE NICs
installed in each PCIe slot, and which logical domain those cards would be mapped to, for the
Half Rack.
FIGURE 16
Logical Domain Configurations and PCIe Slot Mapping (Half Rack)
Related Information
■
“Understanding Large Domains (Half Rack)” on page 63
■
“Understanding Medium Domains (Half Rack)” on page 65
■
“Understanding Small Domains (Half Rack)” on page 67
Understanding Large Domains (Half Rack)
These topics provide information on the Large Domain configuration for the Half Rack:
■
“Percentage of CPU and Memory Resource Allocation” on page 63
■
“Management Network” on page 63
■
“10-GbE Client Access Network” on page 63
■
“InfiniBand Network” on page 64
Percentage of CPU and Memory Resource Allocation
One domain is set up on each SPARC T5-8 server in this configuration, taking up 100% of the
server. Therefore, 100% of the CPU and memory resources are allocated to this single domain
on each server (all four sockets).
Note - You can use the CPU/Memory tool (setcoremem) to change this default allocation
after the initial installation of your system, if you want to have some CPU or memory
resources parked (unused). See “Configuring CPU and Memory Resources (osc-
setcoremem)” on page 170 for more information.
Management Network
Two out of the four 1-GbE host management ports are part of one IPMP group for this domain:
■
NET0
■
NET3
10-GbE Client Access Network
All four PCI root complex pairs, and therefore four 10-GbE NICs, are associated with the
logical domain on the server in this configuration. However, only two of the four 10-GbE NICs
are used with this domain. One port is used on each dual-ported 10-GbE NIC. The two ports on
the two separate 10-GbE NICs would be part of one IPMP group. One port from the dual-ported
10-GbE NICs is connected to the 10-GbE network in this case, with the remaining unused ports
and 10-GbE NICs unconnected.
The following 10-GbE NICs and ports are used for connection to the client access network for
this configuration:
■
PCIe slot 1, port 0 (active)
■
PCIe slot 14, port 1 (standby)
A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.
Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.
InfiniBand Network
The connections to the InfiniBand network vary, depending on the type of domain:
■
Database Domain:
■
Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the last CPU in the domain.
So, for a Large Domain in a Half Rack, these connections would be through P1 on the
InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in
slot 16 (standby).
■
Exadata private network: Connections through P0 (active) and P1 (standby) on all
InfiniBand HCAs associated with the domain.
So, for a Large Domain in a Half Rack, connections will be made through all four
InfiniBand HCAs, with P0 on each as the active connection and P1 on each as the
standby connection.
■
Application Domain:
■
Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the last CPU in the domain.
So, for a Large Domain in a Half Rack, these connections would be through P1 on the
InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in
slot 16 (standby).
■
Oracle Solaris Cluster private network: Connections through P0 (active) on the
InfiniBand HCA associated with the second CPU in the domain and P1 (standby) on the
InfiniBand HCA associated with the third CPU in the domain.
So, for a Large Domain in a Half Rack, these connections would be through P0 on the
InfiniBand HCA installed in slot 11 (active) and P1 on the InfiniBand HCA installed in
slot 8 (standby).
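The port-selection rules above can be restated as a short sketch. It assumes the Half Rack Large Domain HCA ordering described above (slots 3, 11, 8, and 16 for the first through last CPU in the domain); the function names are illustrative and are not part of any SuperCluster tool.

# Illustrative summary of the InfiniBand port assignments for a Large
# Domain in a Half Rack, using the HCA slot order given above
# (first to last CPU in the domain: slots 3, 11, 8, 16).
HCA_SLOTS = [3, 11, 8, 16]

def storage_private(slots):
    # P1 active on the first CPU's HCA, P0 standby on the last CPU's HCA.
    return {"active": (slots[0], "P1"), "standby": (slots[-1], "P0")}

def exadata_private(slots):
    # Database Domains only: P0 active and P1 standby on every HCA.
    return [{"slot": s, "active": "P0", "standby": "P1"} for s in slots]

def solaris_cluster_private(slots):
    # Application Domains only: P0 active on the second CPU's HCA,
    # P1 standby on the third CPU's HCA.
    return {"active": (slots[1], "P0"), "standby": (slots[2], "P1")}

print(storage_private(HCA_SLOTS))          # {'active': (3, 'P1'), 'standby': (16, 'P0')}
print(solaris_cluster_private(HCA_SLOTS))  # {'active': (11, 'P0'), 'standby': (8, 'P1')}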
Understanding Medium Domains (Half Rack)
These topics provide information on the Medium Domain configuration for the Half Rack:
■
“Percentage of CPU and Memory Resource Allocation” on page 65
■
“Management Network” on page 65
■
“10-GbE Client Access Network” on page 66
■
“InfiniBand Network” on page 66
Percentage of CPU and Memory Resource Allocation
The amount of CPU and memory resources that you allocate to the logical domain varies,
depending on the size of the other domains that are also on the SPARC T5-8 server:
■
Config H2-1 (Two Medium Domains): The following options are available for CPU and
memory resource allocation:
■
Two sockets for each Medium Domain
■
One socket for the first Medium Domain, three sockets for the second Medium Domain
■
Three sockets for the first Medium Domain, one socket for the second Medium Domain
■
Four cores for the first Medium Domain, the remaining cores for the second Medium
Domain (first Medium Domain must be either a Database Domain or an Application
Domain running Oracle Solaris 11 in this case)
■
Config H3-1 (One Medium Domain and two Small Domains): The following options are
available for CPU and memory resource allocation:
■
Two sockets for the Medium Domain, one socket apiece for the two Small Domains
■
One socket for the Medium Domain, two sockets for the first Small Domain, one socket
for the second Small Domain
■
One socket for the Medium Domain and the first Small Domain, two sockets for the
second Small Domain
Management Network
Two 1-GbE host management ports are part of one IPMP group for each Medium Domain.
Following are the 1-GbE host management ports associated with each Medium Domain,
depending on how many domains are on the SPARC T5-8 server in the Half Rack:
■
First Medium Domain: NET0-1
■
Second Medium Domain, if applicable: NET2-3
10-GbE Client Access Network
Two PCI root complex pairs, and therefore two 10-GbE NICs, are associated with the Medium
Domain on the SPARC T5-8 server in the Half Rack. One port is used on each dual-ported
10-GbE NIC. The two ports on the two separate 10-GbE NICs would be part of one IPMP group.
The following 10-GbE NICs and ports are used for connection to the client access network for
this configuration:
■
First Medium Domain:
■
PCIe slot 1, port 0 (active)
■
PCIe slot 9, port 1 (standby)
■
Second Medium Domain, if applicable:
■
PCIe slot 6, port 0 (active)
■
PCIe slot 14, port 1 (standby)
A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.
Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.
InfiniBand Network
The connections to the InfiniBand network vary, depending on the type of domain:
■
Database Domain:
■
Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the second CPU in the domain.
So, for the first Medium Domain in a Half Rack, these connections would be through
P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA
installed in slot 11 (standby).
■
Exadata private network: Connections through P0 (active) and P1 (standby) on all
InfiniBand HCAs associated with the domain.
So, for the first Medium Domain in a Half Rack, connections would be made through
both InfiniBand HCAs (slot 3 and slot 11), with P0 on each as the active connection and
P1 on each as the standby connection.
■
Application Domain:
■
Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the second CPU in the domain.
So, for the first Medium Domain in a Half Rack, these connections would be through
P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA
installed in slot 11 (standby).
■
Oracle Solaris Cluster private network: Connections through P0 (active) on the
InfiniBand HCA associated with the first CPU in the domain and P1 (standby) on the
InfiniBand HCA associated with the second CPU in the domain.
So, for first Medium Domain in a Half Rack, these connections would be through P0 on
the InfiniBand HCA installed in slot 3 (active) and P1 on the InfiniBand HCA installed
in slot 11 (standby).
Understanding Small Domains (Half Rack)
These topics provide information on the Small Domain configuration for the Half Rack:
■
“Percentage of CPU and Memory Resource Allocation” on page 67
■
“Management Network” on page 68
■
“10-GbE Client Access Network” on page 68
■
“InfiniBand Network” on page 69
Percentage of CPU and Memory Resource Allocation
The amount of CPU and memory resources that you allocate to the logical domain varies,
depending on the size of the other domains that are also on the SPARC T5-8 server:
■
One Medium Domain and two Small Domains: The following options are available for CPU
and memory resource allocation:
■
Two sockets for the Medium Domain, one socket apiece for the two Small Domains
■
One socket for the Medium Domain, two sockets for the first Small Domain, one socket
for the second Small Domain
■
One socket for the Medium Domain and the first Small Domain, two sockets for the
second Small Domain
■
Four Small Domains: One socket for each Small Domain
Management Network
The number and type of 1-GbE host management ports that are assigned to each Small Domain
vary, depending on the CPU that the Small Domain is associated with.
10-GbE Client Access Network
One PCI root complex pair, and therefore one 10-GbE NIC, is associated with the Small
Domain on the SPARC T5-8 server in the Half Rack. Both ports are used on each dual-ported
10-GbE NIC, and both ports on each 10-GbE NIC would be part of one IPMP group. Both ports
from each dual-ported 10-GbE NIC are connected to the 10-GbE network for the Small Domains.
The following 10-GbE NICs and ports are used for connection to the client access network for
the Small Domains, depending on the CPU that the Small Domain is associated with:
■
CPU0:
■
PCIe slot 1, port 0 (active)
■
PCIe slot 1, port 1 (standby)
■
CPU1:
■
PCIe slot 9, port 0 (active)
■
PCIe slot 9, port 1 (standby)
■
CPU6:
■
PCIe slot 6, port 0 (active)
■
PCIe slot 6, port 1 (standby)
■
CPU7:
■
PCIe slot 14, port 0 (active)
■
PCIe slot 14, port 1 (standby)
A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if the connection to one of the two
ports on the 10-GbE NIC fails.
Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.
InfiniBand Network
The connections to the InfiniBand network vary, depending on the type of domain:
■
Database Domain:
■
Storage private network: Connections through P1 (active) and P0 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P1 (active) and P0
(standby) on that InfiniBand HCA.
■
Exadata private network: Connections through P0 (active) and P1 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P0 (active) and P1
(standby) on that InfiniBand HCA.
■
Application Domain:
■
Storage private network: Connections through P1 (active) and P0 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P1 (active) and P0
(standby) on that InfiniBand HCA.
■
Oracle Solaris Cluster private network: Connections through P0 (active) and P1
(standby) on the InfiniBand HCA associated with the CPU associated with each Small
Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P0 (active) and P1
(standby) on that InfiniBand HCA.
Understanding Full Rack Configurations
In the Full Rack version of Oracle SuperCluster T5-8, each SPARC T5-8 server has four
processor modules (PM0 through PM3), with two sockets or PCIe root complex pairs on each
processor module, for a total of eight sockets or PCIe root complex pairs for each SPARC T5-8
server. You can therefore have from one to eight logical domains on each SPARC T5-8 server in
a Full Rack.
■
“Logical Domain Configurations and PCIe Slot Mapping (Full Rack)” on page 70
■
“Understanding Giant Domains (Full Rack)” on page 72
■
“Understanding Large Domains (Full Rack)” on page 74
■
“Understanding Medium Domains (Full Rack)” on page 77
■
“Understanding Small Domains (Full Rack)” on page 80
Logical Domain Configurations and PCIe Slot Mapping (Full Rack)
The following figure provides information on the available configurations for the Full Rack.
It also provides information on the PCIe slots and the InfiniBand (IB) HCAs or 10-GbE NICs
installed in each PCIe slot, and which logical domain those cards would be mapped to, for the
Full Rack.
FIGURE 17
Logical Domain Configurations and PCIe Slot Mapping (Full Rack)
Related Information
■
“Understanding Giant Domains (Full Rack)” on page 72
■
“Understanding Large Domains (Full Rack)” on page 74
■
“Understanding Medium Domains (Full Rack)” on page 77
■
“Understanding Small Domains (Full Rack)” on page 80
Understanding Giant Domains (Full Rack)
These topics provide information on the Giant Domain configuration for the Full Rack:
■
“Percentage of CPU and Memory Resource Allocation” on page 72
■
“Management Network” on page 72
■
“10-GbE Client Access Network” on page 72
■
“InfiniBand Network” on page 73
Percentage of CPU and Memory Resource Allocation
One domain is set up on each SPARC T5-8 server in this configuration, taking up 100% of the
server. Therefore, 100% of the CPU and memory resources are allocated to this single domain
on each server (all eight sockets).
Note - You can use the CPU/Memory tool (setcoremem) to change this default allocation
after the initial installation of your system, if you want to have some CPU or memory
resources parked (unused). See “Configuring CPU and Memory Resources (osc-
setcoremem)” on page 170 for more information.
Management Network
Two out of the four 1-GbE host management ports are part of one IPMP group for this domain:
■
NET0
■
NET3
10-GbE Client Access Network
All eight PCI root complex pairs, and therefore eight 10-GbE NICs, are associated with the
logical domain on the server in this configuration. However, only two of the eight 10-GbE NICs
are used with this domain. One port is used on each dual-ported 10-GbE NIC. The two ports on
the two separate 10-GbE NICs would be part of one IPMP group. One port from the dual-ported
10-GbE NICs is connected to the 10-GbE network in this case, with the remaining unused ports
and 10-GbE NICs unconnected.
The following 10-GbE NICs and ports are used for connection to the client access network for
this configuration:
■
PCIe slot 1, port 0 (active)
■
PCIe slot 14, port 1 (standby)
A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.
Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.
InfiniBand Network
The connections to the InfiniBand network vary, depending on the type of domain:
■
Database Domain:
■
Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the first processor module (PM0) in the domain and
P0 (standby) on the InfiniBand HCA associated with the last CPU in the last processor
module (PM3) in the domain.
So, for a Giant Domain in a Full Rack, these connections would be through P1 on the
InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in
slot 16 (standby).
■
Exadata private network: Connections through P0 (active) and P1 (standby) on all
InfiniBand HCAs associated with the domain.
So, for a Giant Domain in a Full Rack, connections will be made through all eight
InfiniBand HCAs, with P0 on each as the active connection and P1 on each as the
standby connection.
■
Application Domain:
■
Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the first processor module (PM0) in the domain and
P0 (standby) on the InfiniBand HCA associated with the last CPU in the last processor
module (PM3) in the domain.
So, for a Giant Domain in a Full Rack, these connections would be through P1 on the
InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand HCA installed in
slot 16 (standby).
■
Oracle Solaris Cluster private network: Connections through P0 (active) on the
InfiniBand HCA associated with the first CPU in the second processor module (PM1)
in the domain and P1 (standby) on the InfiniBand HCA associated with the first CPU in
the third processor module (PM2) in the domain.
So, for a Giant Domain in a Full Rack, these connections would be through P0 on the
InfiniBand HCA installed in slot 4 (active) and P1 on the InfiniBand HCA installed in
slot 7 (standby).
Understanding Large Domains (Full Rack)
These topics provide information on the Large Domain configuration for the Full Rack:
■
“Percentage of CPU and Memory Resource Allocation” on page 74
■
“Management Network” on page 75
■
“10-GbE Client Access Network” on page 75
■
“InfiniBand Network” on page 76
Percentage of CPU and Memory Resource Allocation
The amount of CPU and memory resources that you allocate to the logical domain varies,
depending on the size of the other domains that are also on the SPARC T5-8 server:
■
Config F2-1 (Two Large Domains): The following options are available for CPU and
memory resource allocation:
■
Four sockets for each Large Domain
■
Two sockets for the first Large Domain, six sockets for the second Large Domain
■
Six sockets for the first Large Domain, two sockets for the second Large Domain
■
One socket for the first Large Domain, seven sockets for the second Large Domain
■
Seven sockets for the first Large Domain, one socket for the second Large Domain
■
Config F3-1 (One Large Domain and two Medium Domains): The following options are
available for CPU and memory resource allocation:
■
Four sockets for the Large Domain, two sockets apiece for the two Medium Domains
■
Two sockets for the Large Domain, four sockets for the first Medium Domain, two
sockets for the second Medium Domain
■
Two sockets for the Large Domain and the first Medium Domain, four sockets for the
second Medium Domain
■
Six sockets for the Large Domain, one socket apiece for the two Medium Domains
■
Five sockets for the Large Domain, two sockets for the first Medium Domain, one
socket for the second Medium Domain
■
Five sockets for the Large Domain, one socket for the first Medium Domain, two
sockets for the second Medium Domain
■
Config F4-2 (One Large Domain, two Small Domains, one Medium Domain): The
following options are available for CPU and memory resource allocation:
■
Four sockets for the Large Domain, one socket apiece for the two Small Domains, two
sockets for the Medium Domain
■
Three sockets for the Large Domain, one socket apiece for the two Small Domains,
three sockets for the Medium Domain
■
Two sockets for the Large Domain, one socket apiece for the two Small Domains, four
sockets for the Medium Domain
■
Five sockets for the Large Domain, one socket apiece for the two Small Domains and
the Medium Domain
■
Config F5-2 (One Large Domain, four Small Domains): The following options are available
for CPU and memory resource allocation:
■
Four sockets for the Large Domain, one socket apiece for the four Small Domains
■
Three sockets for the Large Domain, one socket apiece for the first three Small
Domains, two sockets for the fourth Small Domain
■
Two sockets for the Large Domain, one socket apiece for the first and second Small
Domains, two sockets apiece for the third and fourth Small Domains
■
Two sockets for the Large Domain, one socket apiece for the first three Small Domains,
three sockets for the fourth Small Domain
■
Two sockets apiece for the Large Domain and the first Small Domain, one socket apiece
for the second and third Small Domains, two sockets for the fourth Small Domain
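Each combination listed above simply divides the eight sockets of a Full Rack SPARC T5-8 server among the domains. The following minimal sketch restates that constraint; it is illustrative only, and the function name is hypothetical.

# Illustrative check that a proposed per-domain socket split uses exactly
# the eight sockets available on a Full Rack SPARC T5-8 server.
FULL_RACK_SOCKETS = 8

def valid_split(sockets_per_domain):
    return (all(s >= 1 for s in sockets_per_domain)
            and sum(sockets_per_domain) == FULL_RACK_SOCKETS)

# Config F4-2 example: one Large (4), two Small (1 each), one Medium (2).
print(valid_split([4, 1, 1, 2]))   # True
print(valid_split([4, 2, 2, 2]))   # False -- more sockets than the server has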
Management Network
Two 1-GbE host management ports are part of one IPMP group for each Large Domain.
Following are the 1-GbE host management ports associated with each Large Domain,
depending on how many domains are on the SPARC T5-8 server in the Full Rack:
■
First Large Domain: NET0-1
■
Second Large Domain, if applicable: NET2-3
10-GbE Client Access Network
Four PCI root complex pairs, and therefore four 10-GbE NICs, are associated with the Large
Domain on the SPARC T5-8 server in the Full Rack. One port is used on each dual-ported
10-GbE NIC. The two ports on the two separate 10-GbE NICs would be part of one IPMP group.
The following 10-GbE NICs and ports are used for connection to the client access network for
this configuration:
■
First Large Domain:
■
PCIe slot 1, port 0 (active)
■
PCIe slot 10, port 1 (standby)
■
Second Large Domain, if applicable:
■
PCIe slot 5, port 0 (active)
■
PCIe slot 14, port 1 (standby)
A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if one of the two 10-GbE NICs fails.
Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.
InfiniBand Network
The connections to the InfiniBand network vary, depending on the type of domain:
■
Database Domain:
■
Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the last CPU in the domain.
For example, for the first Large Domain in a Full Rack, these connections would be
through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand
HCA installed in slot 12 (standby).
■
Exadata private network: Connections through P0 (active) and P1 (standby) on all
InfiniBand HCAs associated with the domain.
So, for a Large Domain in a Full Rack, connections will be made through the four
InfiniBand HCAs associated with the domain, with P0 on each as the active connection
and P1 on each as the standby connection.
■
Application Domain:
■
Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the last CPU in the domain.
For example, for the first Large Domain in a Full Rack, these connections would be
through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand
HCA installed in slot 12 (standby).
■
Oracle Solaris Cluster private network: Connections through P0 (active) on the
InfiniBand HCA associated with the second CPU in the domain and P1 (standby) on the
InfiniBand HCA associated with the third CPU in the domain.
For example, for the first Large Domain in a Full Rack, these connections would be
through P0 on the InfiniBand HCA installed in slot 11 (active) and P1 on the InfiniBand
HCA installed in slot 4 (standby).
Understanding Medium Domains (Full Rack)
These topics provide information on the Medium Domain configuration for the Full Rack:
■
“Percentage of CPU and Memory Resource Allocation” on page 77
■
“Management Network” on page 78
■
“10-GbE Client Access Network” on page 79
■
“InfiniBand Network” on page 79
Percentage of CPU and Memory Resource Allocation
The amount of CPU and memory resources that you allocate to the logical domain varies,
depending on the size of the other domains that are also on the SPARC T5-8 server:
■
Config F3-1 (One Large Domain and two Medium Domains): The following options are
available for CPU and memory resource allocation:
■
Four sockets for the Large Domain, two sockets apiece for the two Medium Domains
■
Two sockets for the Large Domain, four sockets for the first Medium Domain, two
sockets for the second Medium Domain
■
Two sockets for the Large Domain and the first Medium Domain, four sockets for the
second Medium Domain
■
Six sockets for the Large Domain, one socket apiece for the two Medium Domains
■
Five sockets for the Large Domain, two sockets for the first Medium Domain, one
socket for the second Medium Domain
■
Five sockets for the Large Domain, one socket for the first Medium Domain, two
sockets for the second Medium Domain
■
Config F4-1 (Four Medium Domains): The following options are available for CPU and
memory resource allocation:
■
Two sockets apiece for all four Medium Domains
■
One socket for the first Medium Domain, two sockets apiece for the second and third
Medium Domains, and three sockets for the fourth Medium Domain
■
Three sockets for the first Medium Domain, one socket for the second Medium Domain,
and two sockets apiece for the third and fourth Medium Domains
■
Three sockets for the first Medium Domain, two sockets apiece for the second and
fourth Medium Domains, and one socket for the third Medium Domain
■
Three sockets for the first Medium Domain, two sockets apiece for the second and third
Medium Domains, and one socket for the fourth Medium Domain
■
Config F4-2 (One Large Domain, two Small Domains, one Medium Domain): The
following options are available for CPU and memory resource allocation:
■
Four sockets for the Large Domain, one socket apiece for the two Small Domains, two
sockets for the Medium Domain
■
Three sockets for the Large Domain, one socket apiece for the two Small Domains,
three sockets for the Medium Domain
■
Two sockets for the Large Domain, one socket apiece for the two Small Domains, four
sockets for the Medium Domain
■
Five sockets for the Large Domain, one socket apiece for the two Small Domains and
the Medium Domain
■
Config F5-1 (Three Medium Domains, two Small Domains): The following options are
available for CPU and memory resource allocation:
■
Two sockets apiece for the first and second Medium Domains, one socket apiece for the
first and second Small Domains, two sockets for the third Medium Domain
■
One socket for the first Medium Domain, two sockets for the second Medium Domain,
one socket for the first Small Domain, two sockets apiece for the second Small Domain
and the third Medium Domain
■
Two sockets apiece for the first and second Medium Domains, one socket for the first
Small Domain, two sockets for the second Small Domain, one socket for the third
Medium Domain
■
Two sockets for the first Medium Domain, one socket for the second Medium Domain,
two sockets for the first Small Domain, one socket for the second Small Domain, two
sockets for the third Medium Domain
■
Config F6-1 (Two Medium Domains, four Small Domains): The following options are
available for CPU and memory resource allocation:
■
Two sockets for the first Medium Domain, one socket apiece for the four Small
Domains, two sockets for the second Medium Domain
■
Three sockets for the first Medium Domain, one socket apiece for the four Small
Domains and the second Medium Domain
■
One socket apiece for the first Medium Domain and the four Small Domains, three
sockets for the second Medium Domain
■
Config F7-1 (One Medium Domain, six Small Domains): Two sockets for the Medium
Domain, one socket apiece for the six Small Domains
Management Network
The number and type of 1-GbE host management ports that are assigned to each Medium
Domain vary, depending on the CPUs that the Medium Domain is associated with.
10-GbE Client Access Network
Two PCI root complex pairs, and therefore two 10-GbE NICs, are associated with the Medium
Domain on the SPARC T5-8 server in the Full Rack. One port is used on each dual-ported
10-GbE NIC. The two ports on the two separate 10-GbE NICs would be part of one IPMP group.
The following 10-GbE NICs and ports are used for connection to the client access network for
the Medium Domains, depending on the CPUs that the Medium Domain is associated with:
■
CPU0/CPU1:
■
PCIe slot 1, port 0 (active)
■
PCIe slot 9, port 1 (standby)
■
CPU2/CPU3:
■
PCIe slot 2, port 0 (active)
■
PCIe slot 10, port 1 (standby)
■
CPU4/CPU5:
■
PCIe slot 5, port 0 (active)
■
PCIe slot 13, port 1 (standby)
■
CPU6/CPU7:
■
PCIe slot 6, port 0 (active)
■
PCIe slot 14, port 1 (standby)
A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if the connection to one of the two
ports on the 10-GbE NIC fails.
Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.
InfiniBand Network
The connections to the InfiniBand network vary, depending on the type of domain:
■
Database Domain:
■
Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the second CPU in the domain.
For example, for the first Medium Domain in a Full Rack, these connections would be
through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand
HCA installed in slot 11 (standby).
■
Exadata private network: Connections through P0 (active) and P1 (standby) on all
InfiniBand HCAs associated with the domain.
For example, for the first Medium Domain in a Full Rack, connections would be made
through both InfiniBand HCAs (slot 3 and slot 11), with P0 on each as the active
connection and P1 on each as the standby connection.
■
Application Domain:
■
Storage private network: Connections through P1 (active) on the InfiniBand HCA
associated with the first CPU in the domain and P0 (standby) on the InfiniBand HCA
associated with the second CPU in the domain.
For example, for the first Medium Domain in a Full Rack, these connections would be
through P1 on the InfiniBand HCA installed in slot 3 (active) and P0 on the InfiniBand
HCA installed in slot 11 (standby).
■
Oracle Solaris Cluster private network: Connections through P0 (active) on the
InfiniBand HCA associated with the first CPU in the domain and P1 (standby) on the
InfiniBand HCA associated with the second CPU in the domain.
For example, for first Medium Domain in a Full Rack, these connections would be
through P0 on the InfiniBand HCA installed in slot 3 (active) and P1 on the InfiniBand
HCA installed in slot 11 (standby).
Understanding Small Domains (Full Rack)
Note - If you have the Full Rack, you cannot install a Fibre Channel PCIe card in a slot that
is associated with a Small Domain. In Full Rack configurations, Fibre Channel PCIe cards
can only be added to domains with more than one 10-GbE NIC. One 10-GbE NIC must be
left for connectivity to the client access network, but in domains with more than one 10-GbE
NIC, the other 10-GbE NICs can be replaced with Fibre Channel HBAs.
Full Rack Configurations” on page 69 for more information on the configurations with
Small Domains and “Using an Optional Fibre Channel PCIe Card” on page 139 for more
information on the Fibre Channel PCIe card.
These topics provide information on the Small Domain configuration for the Full Rack:
■
“Percentage of CPU and Memory Resource Allocation” on page 81
■
“Management Network” on page 82
■
“10-GbE Client Access Network” on page 82
■
“InfiniBand Network” on page 83
Percentage of CPU and Memory Resource Allocation
The amount of CPU and memory resources that you allocate to the logical domain varies,
depending on the size of the other domains that are also on the SPARC T5-8 server:
■
Config F4-2 (One Large Domain, two Small Domains, one Medium Domain): The
following options are available for CPU and memory resource allocation:
■
Four sockets for the Large Domain, one socket apiece for the two Small Domains, two
sockets for the Medium Domain
■
Three sockets for the Large Domain, one socket apiece for the two Small Domains,
three sockets for the Medium Domain
■
Two sockets for the Large Domain, one socket apiece for the two Small Domains, four
sockets for the Medium Domain
■
Five sockets for the Large Domain, one socket apiece for the two Small Domains and
the Medium Domain
■
Config F5-1 (Three Medium Domains, two Small Domains): The following options are
available for CPU and memory resource allocation:
■
Two sockets apiece for the first and second Medium Domains, one socket apiece for the
first and second Small Domains, two sockets for the third Medium Domain
■
One socket for the first Medium Domain, two sockets for the second Medium Domain,
one socket for the first Small Domain, two sockets apiece for the second Small Domain
and the third Medium Domain
■
Two sockets apiece for the first and second Medium Domains, one socket for the first
Small Domain, two sockets for the second Small Domain, one socket for the third
Medium Domain
■
Two sockets for the first Medium Domain, one socket for the second Medium Domain,
two sockets for the first Small Domain, one socket for the second Small Domain, two
sockets for the third Medium Domain
■
Config F5-2 (One Large Domain, four Small Domains): The following options are available
for CPU and memory resource allocation:
■
Four sockets for the Large Domain, one socket apiece for the four Small Domains
■
Three sockets for the Large Domain, one socket apiece for the first three Small
Domains, two sockets for the fourth Small Domain
■
Two sockets for the Large Domain, one socket apiece for the first and second Small
Domains, two sockets apiece for the third and fourth Small Domains
■
Two sockets for the Large Domain, one socket apiece for the first three Small Domains,
three sockets for the fourth Small Domain
■
Two sockets apiece for the Large Domain and the first Small Domain, one socket apiece
for the second and third Small Domains, two sockets for the fourth Small Domain
■
Config F6-1 (Two Medium Domains, four Small Domains): The following options are
available for CPU and memory resource allocation:
■
Two sockets for the first Medium Domain, one socket apiece for the four Small
Domains, two sockets for the second Medium Domain
■
Three sockets for the first Medium Domain, one socket apiece for the four Small
Domains and the second Medium Domain
■
One socket apiece for the first Medium Domain and the four Small Domains, three
sockets for the second Medium Domain
■
Config F7-1 (One Medium Domain, six Small Domains): Two sockets for the Medium
Domain, one socket apiece for the six Small Domains
■
Config F8-1 (Eight Small Domains): One socket for each Small Domain
Management Network
The number and type of 1-GbE host management ports that are assigned to each Small Domain
vary, depending on the CPU that the Small Domain is associated with.
10-GbE Client Access Network
One PCI root complex pair, and therefore one 10-GbE NIC, is associated with the Small
Domain on the SPARC T5-8 server in the Full Rack. Both ports are used on each dual-ported
10-GbE NIC, and both ports on each 10-GbE NIC would be part of one IPMP group. Both ports
from each dual-ported 10-GbE NIC are connected to the 10-GbE network for the Small Domains.
The following 10-GbE NICs and ports are used for connection to the client access network for
the Small Domains, depending on the CPU that the Small Domain is associated with:
■
CPU0:
■
PCIe slot 1, port 0 (active)
■
PCIe slot 1, port 1 (standby)
■
CPU1:
■
PCIe slot 9, port 0 (active)
■
PCIe slot 9, port 1 (standby)
■
CPU2:
■
PCIe slot 2, port 0 (active)
■
PCIe slot 2, port 1 (standby)
■
CPU3:
■
PCIe slot 10, port 0 (active)
■
PCIe slot 10, port 1 (standby)
■
CPU4:
■
PCIe slot 5, port 0 (active)
■
PCIe slot 5, port 1 (standby)
■
CPU5:
■
PCIe slot 13, port 0 (active)
■
PCIe slot 13, port 1 (standby)
■
CPU6:
■
PCIe slot 6, port 0 (active)
■
PCIe slot 6, port 1 (standby)
■
CPU7:
■
PCIe slot 14, port 0 (active)
■
PCIe slot 14, port 1 (standby)
A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the ports in the IPMP group, even if the connection to one of the two
ports on the 10-GbE NIC fails.
Note - You can also connect just one port in each IPMP group to the 10-GbE network rather
than both ports, if you are limited in the number of 10-GbE connections that you can make to
your 10-GbE network. However, you will not have the redundancy and increased bandwidth in
this case.
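The CPU-to-slot assignments listed above can be captured as a small lookup table. The following sketch is illustrative only; the dictionary and function names are hypothetical and not part of any SuperCluster tool.

# 10-GbE NIC slot used by a Small Domain in a Full Rack, keyed by the CPU
# the domain is associated with (both ports on that NIC form one IPMP group).
SMALL_DOMAIN_NIC_SLOT = {
    "CPU0": 1, "CPU1": 9,  "CPU2": 2, "CPU3": 10,
    "CPU4": 5, "CPU5": 13, "CPU6": 6, "CPU7": 14,
}

def client_access_ports(cpu):
    slot = SMALL_DOMAIN_NIC_SLOT[cpu]
    return [(slot, 0, "active"), (slot, 1, "standby")]

print(client_access_ports("CPU3"))   # [(10, 0, 'active'), (10, 1, 'standby')]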
InfiniBand Network
The connections to the InfiniBand network vary, depending on the type of domain:
■
Database Domain:
■
Storage private network: Connections through P1 (active) and P0 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P1 (active) and P0
(standby) on that InfiniBand HCA.
■
Exadata private network: Connections through P0 (active) and P1 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P0 (active) and P1
(standby) on that InfiniBand HCA.
■
Application Domain:
■
Storage private network: Connections through P1 (active) and P0 (standby) on the
InfiniBand HCA associated with the CPU associated with each Small Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P1 (active) and P0
(standby) on that InfiniBand HCA.
■
Oracle Solaris Cluster private network: Connections through P0 (active) and P1
(standby) on the InfiniBand HCA associated with the CPU associated with each Small
Domain.
For example, for a Small Domain that is associated with CPU0, these connections would
be through the InfiniBand HCA installed in slot 3, through ports P0 (active) and P1
(standby) on that InfiniBand HCA.
Understanding Clustering Software
Clustering software is typically used on multiple interconnected servers so that, to end users
and applications, they appear to be a single server. For Oracle SuperCluster T5-8, clustering
software is used to cluster certain logical domains on the SPARC T5-8 servers together with the
same type of domain on other SPARC T5-8 servers. The benefits of clustering software include
the following:
■
Reduce or eliminate system downtime because of software or hardware failure
■
Ensure availability of data and applications to end users, regardless of the kind of failure
that would normally take down a single-server system
■
Increase application throughput by enabling services to scale to additional processors by
adding nodes to the cluster and balancing the load
■
Provide enhanced availability of the system by enabling you to perform maintenance
without shutting down the entire cluster
Oracle SuperCluster T5-8 uses the following clustering software:
■
“Cluster Software for the Database Domain” on page 85
■
“Cluster Software for the Oracle Solaris Application Domains” on page 85
Cluster Software for the Database Domain
Oracle Database 11g Real Application Clusters (Oracle RAC) enables the clustering of the
Oracle Database on the Database Domain. Oracle RAC uses Oracle Clusterware for the
infrastructure to cluster the Database Domain on the SPARC T5-8 servers together.
Oracle Clusterware is a portable cluster management solution that is integrated with Oracle Database and is a required component for using Oracle RAC. Oracle Clusterware enables you to create a clustered pool of storage that can be used by any combination of single-instance and Oracle RAC databases.
Single-instance Oracle databases have a one-to-one relationship between the Oracle database
and the instance. Oracle RAC environments, however, have a one-to-many relationship between
the database and instances. In Oracle RAC environments, the cluster database instances access
one database. The combined processing power of the multiple servers can provide greater
throughput and scalability than is available from a single server. Oracle RAC is the Oracle
Database option that provides a single system image for multiple servers to access one Oracle
database.
Oracle RAC is a unique technology that provides high availability and scalability for all
application types. The Oracle RAC infrastructure is also a key component for implementing
the Oracle enterprise grid computing architecture. Having multiple instances access a single
database prevents the server from being a single point of failure. Applications that you deploy
on Oracle RAC databases can operate without code changes.
Cluster Software for the Oracle Solaris Application
Domains
The Oracle Solaris Cluster software is an optional clustering tool for the Oracle Solaris Application Domains. On Oracle SuperCluster T5-8, the Oracle Solaris Cluster software clusters the Oracle Solaris Application Domains on the SPARC T5-8 servers together.
Understanding the Network Requirements
These topics describe the network requirements for Oracle SuperCluster T5-8.
■ “Network Requirements Overview” on page 86
■ “Network Connection Requirements for Oracle SuperCluster T5-8” on page 89
■ “Default IP Addresses” on page 90
Network Requirements Overview
Oracle SuperCluster T5-8 includes SPARC T5-8 servers, Exadata Storage Servers, and the ZFS
storage appliance, as well as equipment to connect the SPARC T5-8 servers to your network.
The network connections enable the servers to be administered remotely and enable clients to
connect to the SPARC T5-8 servers.
Each SPARC T5-8 server consists of the following network components and interfaces:
■ 4 embedded Gigabit Ethernet ports (NET0, NET1, NET2, and NET3) for connection to the host management network
■ 1 Ethernet port (NET MGT) for Oracle Integrated Lights Out Manager (Oracle ILOM) remote management
■ Either 4 (Half Rack) or 8 (Full Rack) dual-ported Sun QDR InfiniBand PCIe Low Profile host channel adapters (HCAs) for connection to the InfiniBand private network
■ Either 4 (Half Rack) or 8 (Full Rack) dual-ported Sun Dual 10-GbE SFP+ PCIe 2.0 Low Profile network interface cards (NICs) for connection to the 10-GbE client access network

Note - The QSFP modules for the 10-GbE PCIe 2.0 network cards are purchased separately.

Each Exadata Storage Server consists of the following network components and interfaces:
■ 1 embedded Gigabit Ethernet port (NET0) for connection to the host management network
■ 1 dual-ported Sun QDR InfiniBand PCIe Low Profile host channel adapter (HCA) for connection to the InfiniBand private network
■ 1 Ethernet port (NET MGT) for Oracle ILOM remote management

Each ZFS storage controller consists of the following network components and interfaces:
■ 1 embedded Gigabit Ethernet port for connection to the host management network:
  ■ NET0 on the first storage controller (installed in slot 25 in the rack)
  ■ NET1 on the second storage controller (installed in slot 26 in the rack)
■ 1 dual-port QDR InfiniBand host channel adapter for connection to the InfiniBand private network
■ 1 Ethernet port (NET0) for Oracle ILOM remote management using sideband management. The dedicated Oracle ILOM port is not used because sideband management is used.
The Cisco Catalyst 4948 Ethernet switch supplied with Oracle SuperCluster T5-8 is minimally
configured during installation. The minimal configuration disables IP routing, and sets the
following:
■ Host name
■ IP address
■ Subnet mask
■ Default gateway
■ Domain name
■ Domain Name Server
■ NTP server
■ Time
■ Time zone
Additional configuration, such as defining multiple virtual local area networks (VLANs) or
enabling routing, might be required for the switch to operate properly in your environment and
is beyond the scope of the installation service. If additional configuration is needed, then your
network administrator must perform the necessary configuration steps during installation of
Oracle SuperCluster T5-8.
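If it helps your planning, the settings in the preceding list can be gathered into a simple pre-installation worksheet and checked for completeness before the switch is configured. The following Python sketch is purely illustrative; the field names mirror the list above, and every value shown is a hypothetical placeholder rather than an Oracle default or the switch's actual configuration syntax.

  # Hypothetical pre-installation worksheet for the Ethernet switch's minimal
  # configuration. Field names follow the list above; values are placeholders
  # to be replaced with the settings supplied by your network administrator.
  switch_minimal_config = {
      "host_name": "example-mgmt-sw01",
      "ip_address": "192.0.2.10",        # documentation/example address
      "subnet_mask": "255.255.255.0",
      "default_gateway": "192.0.2.1",
      "domain_name": "example.com",
      "domain_name_server": "192.0.2.53",
      "ntp_server": "192.0.2.123",
      "time": "14:30:00",
      "time_zone": "America/Los_Angeles",
  }

  # Confirm that every required setting has a value before installation day.
  missing = [name for name, value in switch_minimal_config.items() if not value]
  if missing:
      raise ValueError("Missing switch settings: " + ", ".join(missing))
  print("Switch worksheet complete.")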
To deploy Oracle SuperCluster T5-8, ensure that you meet the minimum network requirements.
There are three networks for Oracle SuperCluster T5-8. Each network must be on a distinct and
separate subnet from the others. The network descriptions are as follows:
■ Management network – This required network connects to your existing management network, and is used for administrative work for all components of Oracle SuperCluster T5-8. It connects the servers, Oracle ILOM, and switches connected to the Ethernet switch in the rack. There is one uplink from the Ethernet switch in the rack to your existing management network.

Note - Network connectivity to the PDUs is only required if the electric current will be monitored remotely.

Each SPARC T5-8 server and Exadata Storage Server uses two network interfaces for management. One provides management access to the operating system through the 1-GbE host management interface(s), and the other provides access to the Oracle Integrated Lights Out Manager through the Oracle ILOM Ethernet interface.

Note - The SPARC T5-8 servers have four 1-GbE host management interfaces (NET0 - NET3). All four NET interfaces are physically connected and use IPMP to provide redundancy. See “Understanding the Software Configurations” on page 46 for more information.

The method used to connect the ZFS storage controllers to the management network varies depending on the controller:
  ■ Storage controller 1: NET0 provides access to the Oracle ILOM network using sideband management, as well as access to the 1-GbE host management network.
  ■ Storage controller 2: NET0 provides access to the Oracle ILOM network using sideband management, and NET1 provides access to the 1-GbE host management network.

Oracle SuperCluster T5-8 is delivered with the 1-GbE host management and Oracle ILOM interfaces connected to the Ethernet switch on the rack. The 1-GbE host management interfaces on the SPARC T5-8 servers should not be used for client or application network traffic. Cabling or configuration changes to these interfaces are not permitted.
■ Client access network – This required 10-GbE network connects the SPARC T5-8 servers to your existing client network and is used for client access to the servers. Database applications access the database through this network using Single Client Access Name (SCAN) and Oracle RAC Virtual IP (VIP) addresses.
■ InfiniBand private network – This network connects the SPARC T5-8 servers, ZFS storage appliance, and Exadata Storage Servers using the InfiniBand switches on the rack. For SPARC T5-8 servers configured with Database Domains, Oracle Database uses this network for Oracle RAC cluster interconnect traffic and for accessing data on Exadata Storage Servers and the ZFS storage appliance. For SPARC T5-8 servers configured with the Application Domain, Oracle Solaris Cluster uses this network for cluster interconnect traffic and to access data on the ZFS storage appliance. This non-routable network is fully contained in Oracle SuperCluster T5-8, and does not connect to your existing network. This network is automatically configured during installation.
Note - All networks must be on distinct and separate subnets from each other.
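As a planning aid, the subnet assignments you intend to use for the management, client access, and InfiniBand private networks can be checked for overlap before installation. The following Python sketch uses the standard ipaddress module; the subnet values shown are hypothetical placeholders, not Oracle defaults.

  import ipaddress
  from itertools import combinations

  # Hypothetical planned subnets; replace these with the values for your site.
  planned_networks = {
      "management": ipaddress.ip_network("10.10.1.0/24"),
      "client access (10-GbE)": ipaddress.ip_network("10.10.20.0/24"),
      "InfiniBand private": ipaddress.ip_network("192.168.10.0/24"),
  }

  # Each network must be on a distinct, non-overlapping subnet.
  for (name_a, net_a), (name_b, net_b) in combinations(planned_networks.items(), 2):
      if net_a.overlaps(net_b):
          raise ValueError(f"{name_a} and {name_b} overlap: {net_a} and {net_b}")
  print("All planned subnets are distinct.")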
The following figure shows the default network diagram.
FIGURE 18    Network Diagram for Oracle SuperCluster T5-8
Network Connection Requirements for Oracle
SuperCluster T5-8
The following connections are required for Oracle SuperCluster T5-8 installation:
TABLE 1    New Network Connections Required for Installation

Connection Type          Number of Connections             Comments
Management network       1 for Ethernet switch             Connect to the existing management network
Client access network    Typically 2 per logical domain    Connect to the client access network. (You will not have redundancy through IPMP if there is only one connection per logical domain.)
Understanding Default IP Addresses
These topics list the default IP addresses assigned to Oracle SuperCluster T5-8 components
during manufacturing.
■ “Default IP Addresses” on page 90
■ “Default Host Names and IP Addresses” on page 90
Default IP Addresses
Four sets of default IP addresses are assigned at manufacturing:
■ Management IP addresses – IP addresses used by Oracle ILOM for the SPARC T5-8 servers, Exadata Storage Servers, and the ZFS storage controllers.
■ Host IP addresses – Host IP addresses used by the SPARC T5-8 servers, Exadata Storage Servers, ZFS storage controllers, and switches.
■ InfiniBand IP addresses – InfiniBand interfaces are the default channel of communication among SPARC T5-8 servers, Exadata Storage Servers, and the ZFS storage controllers. If you are connecting Oracle SuperCluster T5-8 to another Oracle SuperCluster T5-8 or to an Oracle Exadata or Exalogic machine on the same InfiniBand fabric, the InfiniBand interface enables communication between the SPARC T5-8 servers and storage server heads in one Oracle SuperCluster T5-8 and the other Oracle SuperCluster T5-8 or Oracle Exadata or Exalogic machine.
■ 10-GbE IP addresses – The IP addresses used by the 10-GbE client access network interfaces.
Tip - For more information about how these interfaces are used, see Figure 18, “Network
Diagram for Oracle SuperCluster T5-8,” on page 89.
Default Host Names and IP Addresses
Refer to the following topics for the default IP addresses used in Oracle SuperCluster T5-8:
■ “Default Host Names and IP Addresses for the Oracle ILOM and Host Management Networks” on page 91
■ “Default Host Names and IP Addresses for the InfiniBand and 10-GbE Client Access Networks” on page 92
Default Host Names and IP Addresses for the Oracle ILOM and Host
Management Networks
TABLE 2    Default Host Names and IP Addresses for the Oracle ILOM and Host Management Networks (Information Assigned at Manufacturing)

Unit     Rack Component                 Oracle ILOM    Oracle ILOM      Host Management    Host Management
Number   (Front View)                   Host Names     IP Addresses     Host Names         IP Addresses
         PDU-A (left from rear view)    sscpdua        192.168.1.210    N/A                N/A
         PDU-B (right from rear view)   sscpdub        192.168.1.211    N/A                N/A

Default Host Names and IP Addresses for the InfiniBand and 10-GbE Client Access Networks

Information Assigned at Manufacturing

Unit     Rack Component                 InfiniBand     InfiniBand       10-GbE Client Access    10-GbE Client Access
Number   (Front View)                   Host Names     IP Addresses     Host Names              IP Addresses
17       SPARC T5-8 Server 1            ssccn1-ib8     192.168.10.79    ssccn1-tg16             192.168.40.16
                                                                        ssccn1-tg15             192.168.40.15
16                                      ssccn1-ib7     192.168.10.69    ssccn1-tg14             192.168.40.14
                                                                        ssccn1-tg13             192.168.40.13
15                                      ssccn1-ib6     192.168.10.59    ssccn1-tg12             192.168.40.12
                                                                        ssccn1-tg11             192.168.40.11
14                                      ssccn1-ib5     192.168.10.49    ssccn1-tg10             192.168.40.10
                                                                        ssccn1-tg9              192.168.40.9
13                                      ssccn1-ib4     192.168.10.39    ssccn1-tg8              192.168.40.8
                                                                        ssccn1-tg7              192.168.40.7
12                                      ssccn1-ib3     192.168.10.29    ssccn1-tg6              192.168.40.6
                                                                        ssccn1-tg5              192.168.40.5
11                                      ssccn1-ib2     192.168.10.19    ssccn1-tg4              192.168.40.4
                                                                        ssccn1-tg3              192.168.40.3
10                                      ssccn1-ib1     192.168.10.9     ssccn1-tg2              192.168.40.2
                                                                        ssccn1-tg1              192.168.40.1
9-8      Exadata Storage Server 4       ssces4-stor    192.168.10.104   N/A                     N/A
7-6      Exadata Storage Server 3       ssces3-stor    192.168.10.103   N/A                     N/A
5-4      Exadata Storage Server 2       ssces2-stor    192.168.10.102   N/A                     N/A
3-2      Exadata Storage Server 1       ssces1-stor    192.168.10.101   N/A                     N/A
1        Sun Datacenter InfiniBand      N/A            N/A              N/A                     N/A
         Switch 36 (Spine)
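The SPARC T5-8 Server 1 entries in the preceding table follow a regular numbering pattern: ssccn1-ibN maps to 192.168.10.(10 x N - 1) and ssccn1-tgN maps to 192.168.40.N. The following Python sketch reproduces that pattern as a reading aid; the formula is inferred from the listed values and is not a provisioning tool, so always confirm addresses against the table or your configuration worksheet.

  # Reproduce the default host-name-to-IP pattern for SPARC T5-8 Server 1
  # as listed in the table above (inferred pattern; verify against the table).
  infiniband = {f"ssccn1-ib{n}": f"192.168.10.{10 * n - 1}" for n in range(1, 9)}
  client_10gbe = {f"ssccn1-tg{n}": f"192.168.40.{n}" for n in range(1, 17)}

  print(infiniband["ssccn1-ib8"])     # 192.168.10.79
  print(client_10gbe["ssccn1-tg16"])  # 192.168.40.16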
Preparing the Site
This section describes the steps you should take to prepare the site for your system.
■ “Cautions and Considerations” on page 95
■ “Reviewing System Specifications” on page 96
■ “Reviewing Power Requirements” on page 99
■ “Preparing for Cooling” on page 106
■ “Preparing the Unloading Route and Unpacking Area” on page 111
■ “Preparing the Network” on page 113
Cautions and Considerations
Consider the following when selecting a location for the new rack.
■ Do not install the rack in a location that is exposed to:
  ■ Direct sunlight
  ■ Excessive dust
  ■ Corrosive gases
  ■ Air with high salt concentrations
  ■ Frequent vibrations
  ■ Sources of strong radio frequency interference
  ■ Static electricity
■ Use power outlets that provide proper grounding.
■ A qualified electrical engineer must perform any grounding work.
■ Each grounding wire for the rack must be used only for the rack.
■ The grounding resistance must not be greater than 10 ohms.
■ Verify the grounding method for the building.
■ Observe the precautions, warnings, and notes about handling that appear on labels on the equipment.
■ (CHECKLIST) Operate the air conditioning system for 48 hours to bring the room temperature to the appropriate level.
■ (CHECKLIST) Clean and vacuum the area thoroughly in preparation for installation.
Reviewing System Specifications
■ “Physical Specifications” on page 96
■ “Installation and Service Area” on page 96
■ “Rack and Floor Cutout Dimensions” on page 97
Physical Specifications
Ensure that the installation site can properly accommodate the system by reviewing its physical
specifications and space requirements.
Parameter                                           Metric     English
Height                                              1998 mm    78.66 in.
Width with side panels                              600 mm     23.62 in.
Depth (with doors)                                  1200 mm    47.24 in.
Depth (without doors)                               1112 mm    43.78 in.
Minimum ceiling height                              2300 mm    90 in.
Minimum space between top of cabinet and ceiling    914 mm     36 in.
Weight (full rack)                                  869 kg     1,916 lbs
Weight (half rack)                                  706 kg     1,556 lbs
Related Information
■ “Installation and Service Area” on page 96
■ “Rack and Floor Cutout Dimensions” on page 97
■ “Shipping Package Dimensions” on page 111
Installation and Service Area
Select an installation site that provides enough space to install and service the system.
Location             Maintenance Access
Rear maintenance     914 mm (36 in.)
Front maintenance    914 mm (36 in.)
Top maintenance      914 mm (36 in.)
Related Information
■ “Physical Specifications” on page 96
■ “Rack and Floor Cutout Dimensions” on page 97
Rack and Floor Cutout Dimensions
If you plan to route cables down through the bottom of the rack, cut a rectangular hole in the
floor tile. Locate the hole below the rear portion of the rack, between the two rear casters and
behind the rear inner rails. The suggested hole width is 280 mm (11 inches).
If you want a separate grounding cable, see “Install a Ground Cable (Optional)” on page 124.
Caution - Do not create a hole where the rack casters or leveling feet will be placed.
FIGURE 19    Dimensions for Rack Stabilization

Figure Legend
1    Distance from mounting hole slots to the edge of the rack is 113 mm (4.45 inches)
2    Width between the centers of the mounting hole slots is 374 mm (14.72 inches)
3    Distance between mounting hole slots to the edge of the rack is 113 mm (4.45 inches)
4    Distance between the centers of the front and rear mounting hole slots is 1120 mm (44.1 inches)
5    Depth of cable-routing floor cutout is 330 mm (13 inches)
6    Distance between the floor cutout and the edge of the rack is 160 mm (6.3 inches)
7    Width of floor cutout is 280 mm (11 inches)
Related Information
■ “Perforated Floor Tiles” on page 110
■ “Physical Specifications” on page 96
■ “Installation and Service Area” on page 96
Reviewing Power Requirements
■ “Power Consumption” on page 99
■ “Facility Power Requirements” on page 100
■ “Grounding Requirements” on page 100
■ “PDU Power Requirements” on page 101
■ “PDU Thresholds” on page 104
Power Consumption
These tables describe the power consumption of Oracle SuperCluster T5-8 and its expansion racks.
These are measured values and not the rated power for the rack. For rated power specifications,
see “PDU Power Requirements” on page 101.
TABLE 4    SuperCluster T5-8

Condition    Full Rack                 Half Rack
Maximum      15.97 kVA (15.17 kW)      13.31 kVA (12.64 kW)
Typical      9.09 kVA (8.6 kW)         7.5 kVA (7.13 kW)

TABLE 5    Exadata X5-2 Expansion Rack

Product                                    Condition    kW     kVA
Extreme Flash quarter rack                 Maximum      3.6    3.7
                                           Typical      2.5    2.6
High capacity quarter rack                 Maximum      3.4    3.4
                                           Typical      2.4    2.4
Individual Extreme Flash storage server    Maximum      0.6    0.6
                                           Typical      0.4    0.4
Individual High capacity storage server    Maximum      0.5    0.5
                                           Typical      0.4    0.4
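If it is useful for rough branch-circuit planning, the apparent-power (kVA) figures above can be converted into an approximate current draw at a nominal supply voltage. The following Python sketch assumes a single-phase 230 V supply purely for illustration; actual PDU sizing depends on the PDU options described in “PDU Power Requirements” on page 101 and should be confirmed by a qualified electrician.

  # Rough planning arithmetic only: approximate current draw implied by an
  # apparent-power figure at a nominal voltage. Actual PDU sizing must follow
  # "PDU Power Requirements" and be confirmed by a qualified electrician.
  def approx_current_amps(kva, volts=230.0):
      """Approximate single-phase current (A) for a given apparent power (kVA)."""
      return kva * 1000.0 / volts

  full_rack_max_kva = 15.97  # measured maximum for a Full Rack, from TABLE 4
  print(f"{approx_current_amps(full_rack_max_kva):.1f} A at 230 V")  # about 69.4 A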
Related Information
■ “Facility Power Requirements” on page 100
■ “Grounding Requirements” on page 100
■ “PDU Power Requirements” on page 101
Facility Power Requirements
Provide a separate circuit breaker for each power cord.
Use dedicated AC breaker panels for all power circuits that supply power to the PDU. Breaker
switches and breaker panels should not be shared with other high-powered equipment.
Balance the power load between AC supply branch circuits.
To protect the rack from electrical fluctuations and interruptions, you should have a dedicated
power distribution system, an uninterruptible power supply (UPS), power-conditioning
equipment, and lightning arresters.
Related Information
■ “Power Consumption” on page 99
■ “Grounding Requirements” on page 100
■ “PDU Power Requirements” on page 101
Grounding Requirements
Always connect the cords to grounded power outlets. Computer equipment requires electrical
circuits to be grounded to the Earth.
Because grounding methods vary by locality, refer to documentation such as the applicable IEC documents for the correct grounding method. Ensure that the facility administrator or a qualified electrical engineer verifies the grounding method for the building and performs the grounding work.
Related Information
■ “Facility Power Requirements” on page 100