IBM has previewed z/VSE 4.2. When available, z/VSE 4.2 is designed to help address the needs of VSE clients with growing core VSE workloads. z/VSE V4.2 is designed to support:
• More than 255 VSE tasks to help clients grow their CICS workloads and to ease migration from CICS/VSE to CICS Transaction Server for VSE/ESA™
• Up to 32 GB of processor storage
• Sub-Capacity Reporting Tool running “natively”
• Encryption Facility for z/VSE as an optional priced feature
• IBM System Storage TS3400 Tape Library (via the TS1120 Controller)
• IBM System Storage TS7740 Virtualization Engine Release 1.3
z/VSE V4.2 plans to continue the focus on hybrid solutions exploiting z/VSE and Linux on System z, service-oriented architecture (SOA), and security. It is the preferred replacement for z/VSE V4.1, z/VSE V3, or VSE/ESA. It is designed to protect and leverage existing VSE information assets.
z/TPF
z/TPF is a 64-bit operating system that allows you to move legacy applications into an open development environment, leveraging large scale memory spaces for increased speed, diagnostics and functionality. The open development environment allows access to commodity skills and enhanced access to open code libraries, both of which can be used to lower development costs. Large memory spaces can be used to increase both system and application efficiency as I/Os or memory management can be eliminated. z/TPF is designed to support:
• 64-bit mode
• Linux development environment (GCC and HLASM for Linux)
• 32 processors/cluster
• Up to 84* engines/processor
• 40,000 modules
• Workload License Charge
Linux on System z
The System z10 EC supports the following Linux on
System z distributions (most recent service levels):
• Novell SUSE SLES 9
• Novell SUSE SLES 10
• Red Hat RHEL 4
• Red Hat RHEL 5
z10 EC

Operating System                                      ESA/390 (31-bit)   z/Architecture (64-bit)
z/OS V1R8, 9 and 10                                   No                 Yes
z/OS V1R7(1)(2) with IBM Lifecycle
  Extension for z/OS V1.7                             No                 Yes
Linux on System z(2), Red Hat RHEL 4,
  & Novell SUSE SLES 9                                Yes                Yes
Linux on System z(2), Red Hat RHEL 5,
  & Novell SUSE SLES 10                               No                 Yes
z/VM V5R2(3), 3 and 4                                 No*                Yes
z/VSE V3R1(2)(4)                                      Yes                No
z/VSE V4R1(2)(5) and 2(5)                             No                 Yes
z/TPF V1R1                                            No                 Yes
TPF V4R1 (ESA mode only)                              Yes                No

1. z/OS V1.7 support on the z10 BC™ requires the Lifecycle Extension for z/OS V1.7, 5637-A01. The Lifecycle Extension for z/OS R1.7 + zIIP Web Deliverable is required for z10 to enable HiperDispatch on z10 (does not require a zIIP). z/OS V1.7 support was withdrawn September 30, 2008. The Lifecycle Extension for z/OS V1.7 (5637-A01) makes fee-based corrective service for z/OS V1.7 available through September 2009. With this Lifecycle Extension, z/OS V1.7 supports the z10 BC server. Certain functions and features of the z10 BC server require later releases of z/OS. For a complete list of software support, see the PSP buckets and the Software Requirements section of the System z10 BC announcement letter, dated October 21, 2008.
2. Compatibility Support for listed releases. Compatibility support allows the OS to IPL and operate on the z10 BC.
3. Requires Compatibility Support, which allows z/VM to IPL and operate on the z10 providing IBM System z9 functionality for the base OS and guests. *z/VM supports 31-bit and 64-bit guests.
4. z/VSE V3 operates in 31-bit mode only. It does not implement z/Architecture, and specifically does not implement 64-bit mode capabilities. z/VSE is designed to exploit select features of IBM System z10, System z9, and IBM eServer zSeries® hardware.
5. z/VSE V4 is designed to exploit 64-bit real memory addressing, but will not support 64-bit virtual memory addressing.
Note: Refer to the z/OS, z/VM, z/VSE subsets of the 2098DEVICE Preventive Service Planning (PSP) bucket prior to installing a z10 BC.
Every day the IT system needs to be available to users
– customers that need access to the company Web site,
line of business personnel that need access to the system,
application development that is constantly keeping the
environment current, and the IT staff that is operating and
maintaining the environment. If applications are not consis-
tently available, the business can suffer.
The z10 EC continues our commitment to deliver improvements in hardware Reliability, Availability and Serviceability (RAS) with every new System z server. These improvements include microcode driver enhancements, dynamic segment sparing for memory, and the fixed HSA. The z10 EC is a server that can help keep applications up and running in the event of planned or unplanned disruptions to the system.
IBM System z servers stand alone against the competition and have stood the test of time with our business resiliency solutions. Our coupling solutions with Parallel Sysplex technology allow for greater scalability and availability. The InfiniBand Coupling Links on the z10 EC provide a high-speed solution to the 10 meter limitation of ICB-4, since they will be available in lengths up to 150 meters.
Over its predecessors, the z10 EC provides improvements in processor granularity offerings, more options for specialty engines, security enhancements, additional high availability characteristics, Concurrent Driver Upgrade (CDU) improvements, and enhanced networking and on demand offerings. The z10 EC gives IBM customers an option for continued growth, continuity, and upgradeability.
The IBM System z10 EC builds upon the structure
introduced on the IBM System z9 EC – scalability and
z/Architecture. The System z10 EC expands upon a key
attribute of the platform – availability – to help ensure a
resilient infrastructure designed to satisfy the demands
of your business. With the potential for increased perfor-
mance and capacity, you have an opportunity to continue
to consolidate diverse applications on a single platform.
The z10 EC is designed to provide up to 1.7 times the total
system capacity of the z9 EC, and has up to triple the
available memory. The maximum number of Processor
Units (PUs) has grown from 54 to 64, and memory has
increased from 128 GB per book and 512 GB per system
to 384 GB per book and 1.5 TB per system.
The z10 EC will continue to use the Cargo cage for its I/O, supporting up to 960 ESCON channels on the Model E12 (64 I/O features) and up to 1,024 ESCON channels (84 I/O features) on the Models E26, E40, E56 and E64.
HiperDispatch helps provide increased scalability and per-
formance of higher n-way and multi-book z10 EC systems
by improving the way workload is dispatched across the
server. HiperDispatch accomplishes this by recognizing
the physical processor where the work was started and
then dispatching subsequent work to the same physical
processor. This intelligent dispatching helps reduce the
movement of cache and data and is designed to improve
CPU time and performance. HiperDispatch is available
only with new z10 EC PR/SM and z/OS functions.
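As a rough conceptual illustration of cache-affinity dispatching of this kind (all names and structures below are invented; this is not the PR/SM or z/OS dispatcher, whose algorithms are far more involved):

```python
# Conceptual sketch of affinity-based dispatching, loosely inspired by the
# HiperDispatch description above. Names and structures are invented.
from collections import defaultdict

class AffinityDispatcher:
    def __init__(self, num_processors):
        self.num_processors = num_processors
        self.last_cpu = {}                   # work unit -> processor it last ran on
        self.run_queues = defaultdict(list)  # processor -> queued work units

    def dispatch(self, work_unit):
        # Prefer the processor where the work last ran, so cache lines it
        # touched are more likely to still be warm.
        cpu = self.last_cpu.get(work_unit)
        if cpu is None:
            # First dispatch: pick the least-loaded processor.
            cpu = min(range(self.num_processors),
                      key=lambda c: len(self.run_queues[c]))
        self.run_queues[cpu].append(work_unit)
        self.last_cpu[work_unit] = cpu
        return cpu

dispatcher = AffinityDispatcher(num_processors=4)
print(dispatcher.dispatch("TXN1"))   # placed on some processor
print(dispatcher.dispatch("TXN1"))   # re-dispatched to the same processor
```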
Processor Units (cores) defined as Internal Coupling Facilities (ICFs), Integrated Facility for Linux (IFLs), System z10 Application Assist Processors (zAAPs) and System z10 Integrated Information Processors (zIIPs) are no longer grouped together in one pool as on the z990, but are each grouped in their own pool, where they can be managed separately. The separation significantly simplifies capacity planning and management for LPARs and can have an effect on weight management, since CP weights and zAAP and zIIP weights can now be managed separately. Capacity BackUp (CBU) features are available for IFLs, ICFs, zAAPs and zIIPs.
For LAN connectivity, the z10 EC provides an OSA-Express3 2-port 10 Gigabit Ethernet (GbE) Long Reach feature along with the OSA-Express3 Gigabit Ethernet SX and LX with four ports per feature. The z10 EC continues to support OSA-Express2 1000BASE-T and GbE features, and supports IP version 6 (IPv6) on HiperSockets. OSA-Express2 OSN (OSA for NCP) is also available on System z10 EC to support the Channel Data Link Control (CDLC) protocol, providing direct access from the host operating system images to the Communication Controller for Linux (CCL) on the z10 EC, z10 BC, z9 EC and z9 BC using OSA-Express3 or OSA-Express2, to help eliminate the requirement for external hardware for communications.
Additional channel and networking improvements include support for Layer 2 and Layer 3 traffic, an FCP management facility for z/VM and Linux on System z, FCP security improvements, and Linux support for HiperSockets IPv6. STP enhancements include additional support for NTP clients and STP over InfiniBand links.
Like the System z9 EC, the z10 EC offers a configurable Crypto Express2 feature, with PCI-X adapters that can be individually configured as a secure coprocessor or an accelerator for SSL, and the TKE workstation with optional Smart Card Reader. It also provides the following CP Assist for Cryptographic Function (CPACF) algorithms:
• DES, TDES, AES-128, AES-192, AES-256
• SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
• Pseudo Random Number Generation (PRNG)
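For orientation only, the snippet below exercises the same SHA family of algorithms from application code using Python's standard library. It illustrates the algorithms listed above, not the CPACF interface itself, which is driven transparently by operating system and middleware crypto services.

```python
# Illustration of the SHA algorithms named above using Python's standard
# library. On systems with hardware crypto assists, libraries typically
# route such operations to the hardware transparently; this snippet makes
# no use of CPACF directly.
import hashlib
import os

data = b"example payload"
for name in ("sha1", "sha224", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, data).hexdigest()
    print(f"{name.upper():7s} {digest[:16]}...")

# Pseudo-random material, analogous in role to a PRNG service.
print("random :", os.urandom(16).hex())
```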
The z10 EC is designed to deliver the industry-leading Reliability, Availability and Serviceability (RAS) customers expect from System z servers. RAS is designed to reduce all sources of outages: unscheduled, scheduled and planned. Planned outages are further reduced by reducing preplanning requirements.
z10 EC preplanning improvements are designed to avoid
planned outages and include:
• Flexible Customer Initiated Upgrades
• Enhanced Driver Maintenance
– Multiple “from” sync point support
• Reduce Pre-planning to avoid Power-On-Reset
– 16 GB for HSA
– Dynamic I/O enabled by default
– Add Logical Channel Subsystems (LCSS)
– Change LCSS Subchannel Sets
– Add/delete Logical partitions
• Designed to eliminate a logical partition deactivate/
activate/IPL
– Dynamic Change to Logical Processor Definition – z/VM 5.3
– Dynamic Change to Logical Cryptographic Coprocessor Definition – z/OS ICSF
Additionally, several service enhancements have also
been designed to avoid scheduled outages and include
concurrent parts replacement, and concurrent hardware
upgrades. Exclusive to the z10 EC is the ability to hot swap
ICB-4 and InfiniBand hub cards.
Enterprises with IBM System z9 EC and IBM z990 may
upgrade to any z10 Enterprise Class model. Model
upgrades within the z10 EC are concurrent with the excep-
tion of the E64, which is disruptive. If you desire a consolidation platform for your mainframe and Linux capable applications, you can add capacity and even expand your current application workloads in a cost-effective manner. If your traditional and new applications are growing, you may find the z10 EC a good fit with its base qualities of service and its specialty processors designed for assisting with new workloads. Value is leveraged with improved hardware price/performance and System z10 EC software pricing strategies.
The z10 EC processor introduces IBM System z10 Enterprise Class with Quad Core technology, an advanced pipeline design and enhanced performance on CPU intensive workloads. The z10 EC is specifically designed and optimized for full z/Architecture compatibility. New features enhance enterprise data serving performance, industry-leading virtualization capabilities, and energy efficiency at system and data center levels. The z10 EC is designed to further extend and integrate key platform characteristics such as dynamic flexible partitioning and resource management in mixed and unpredictable workload environments, providing scalability, high availability and Qualities of Service (QoS) to emerging applications such as WebSphere, Java and Linux.
With the logical partition (LPAR) group capacity limit on z10 EC, z10 BC, z9 EC and z9 BC, you can now specify LPAR group capacity limits, allowing you to define each LPAR with its own capacity and one or more groups of LPARs on a server. This is designed to allow z/OS to manage the groups in such a way that the sum of the LPARs’ CPU utilization within a group will not exceed the group’s defined capacity. Each LPAR in a group can still optionally continue to define an individual LPAR capacity limit.
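A minimal sketch of the capping idea follows, using invented LPAR names and MSU figures purely for illustration; the actual z/OS workload management algorithm is considerably more sophisticated.

```python
# Toy illustration of a group capacity limit: individual LPAR demand is
# scaled back only when the group's combined demand exceeds the group cap.
# Names and numbers are invented; this is not the z/OS WLM algorithm.
def apply_group_cap(demand_msu, group_cap_msu):
    total = sum(demand_msu.values())
    if total <= group_cap_msu:
        return dict(demand_msu)               # group cap not exceeded
    scale = group_cap_msu / total             # cap the group proportionally
    return {lpar: round(msu * scale, 1) for lpar, msu in demand_msu.items()}

demand = {"PRODA": 300, "PRODB": 250, "TEST1": 150}   # hypothetical MSU demand
print(apply_group_cap(demand, group_cap_msu=500))
# {'PRODA': 214.3, 'PRODB': 178.6, 'TEST1': 107.1}
```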
The z10 EC has five models with a total of 100 capacity settings available as new build systems and as upgrades from the z9 EC and z990.
The five z10 EC models are designed with a multi-book system structure that provides up to 64 Processor Units (PUs) that can be characterized as either Central Processors (CPs), IFLs, ICFs, zAAPs or zIIPs.
Some of the significant enhancements in the z10 EC that help bring improved performance, availability and function to the platform have been identified. The following sections highlight the functions and features of the z10 EC.
z10 EC Design and Technology
The System z10 EC is designed to provide balanced
system performance. From processor storage to the
system’s I/O and network channels, end-to-end bandwidth
is provided and designed to deliver data where and when
it is needed.
The processor subsystem is comprised of one to four
books connected via a point-to-point SMP network. The
change to a point-to-point connectivity eliminates the need
for the jumper book, as had been used on the System z9
and z990 systems. The z10 EC design provides growth
paths up to a 64 engine system where each of the 64
PUs has full access to all system resources, specifically memory and I/O.
Each book is comprised of a Multi-Chip Module (MCM), memory cards and I/O fanout cards. The MCMs, which measure approximately 96 x 96 millimeters, contain the Processor Unit (PU) chips. The “SCD” and “SCC” chips of the z990 and z9 have been replaced by a single “SC” chip which includes both the L2 cache and the SMP fabric (“storage controller”) functions. There are two SC chips on each MCM, each of which is connected to all five CP chips on that MCM. The MCM contains 103 glass ceramic layers to provide interconnection between the chips and the off-module environment. Four models (E12, E26, E40 and E56) have 17 PUs per book, and the high capacity z10 EC Model E64 has one 17 PU book and three 20 PU books. Each PU measures 21.973 mm x 21.1658 mm and has an L1 cache divided into a 64 KB cache for instructions and a 128 KB cache for data. Each PU also has a 3 MB L1.5 cache. Each L1 cache has a Translation Look-aside Buffer (TLB) of 512 entries associated with it. The PU, which uses a high-frequency z/Architecture microprocessor core, is built on CMOS 11S chip technology and has a cycle time of approximately 0.23 nanoseconds.
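As a quick check on what a 0.23 nanosecond cycle time implies for clock frequency (simple arithmetic, not an additional specification):

```python
# Cycle time to clock frequency: f = 1 / t.
cycle_time_s = 0.23e-9              # approximately 0.23 nanoseconds
frequency_ghz = 1 / cycle_time_s / 1e9
print(f"{frequency_ghz:.1f} GHz")   # roughly 4.3 GHz
```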
The design of the MCM technology on the z10 EC provides the flexibility to configure the PUs for different uses; there are two spares and up to 11 System Assist Processors (SAPs) standard per system. The remaining inactive PUs on each installed MCM are available to be characterized as CPs, ICF processors for Coupling Facility applications, IFLs for Linux applications and for z/VM hosting Linux as a guest, System z10 Application Assist Processors (zAAPs), System z10 Integrated Information Processors (zIIPs), or optional SAPs, providing tremendous flexibility in establishing the best system for running applications. Each model of the z10 EC must always be ordered with at least one CP, IFL or ICF.
Each book can support from the 16 GB minimum memory up to 384 GB, for up to 1.5 TB per system. 16 GB of the total memory is delivered and reserved for the fixed Hardware System Area (HSA). There are up to 48 IFB links per system at 6 GBps each.
The z10 EC supports a combination of Memory Bus Adapter (MBA) and Host Channel Adapter (HCA) fanout cards. New MBA fanout cards are used exclusively for ICB-4. New ICB-4 cables are needed for the z10 EC and are only available on models E12, E26, E40 and E56; the E64 model cannot have ICBs. The InfiniBand Multiplexer (IFB-MP) card replaces the Self-Timed Interconnect Multiplexer (STI-MP) card. There are two types of HCA fanout cards: the HCA2-C, which is copper and is always used to connect to I/O (IFB-MP card), and the HCA2-O, which is optical and used for customer InfiniBand coupling.
Data transfers are direct between books via the level 2
cache chip in each MCM. Level 2 Cache is shared by all
PU chips on the MCM. PR/SM provides the ability to configure and operate as many as 60 Logical Partitions, which may be assigned processors, memory and I/O resources from any of the available books.
z10 EC Model
The z10 EC has been designed to offer high performance and an efficient I/O structure. All z10 EC models ship with two frames: an A-Frame and a Z-Frame, which together support the installation of up to three I/O cages. The z10 EC will continue to use the Cargo cage for its I/O, supporting up to 960 ESCON® and 256 FICON channels on the Model E12 (64 I/O features) and up to 1,024 ESCON and 336 FICON channels (84 I/O features) on the Models E26, E40, E56 and E64.
To increase the I/O device addressing capability, the I/O subsystem provides support for multiple subchannel sets (MSS), which are designed to allow improved device connectivity for Parallel Access Volumes (PAVs). To support the highly scalable multi-book system design, the z10 EC I/O subsystem uses the Logical Channel Subsystem (LCSS), which provides the capability to install up to 1024 CHPIDs across three I/O cages (256 per operating system image). The Parallel Sysplex Coupling Link architecture and technology continues to support high speed links providing efficient transmission between the Coupling Facility and z/OS systems. HiperSockets provides high-speed capability to communicate among virtual servers and logical partitions. HiperSockets is now improved with IP version 6 (IPv6) support; this is based on high-speed TCP/IP memory speed transfers and provides value in allowing applications running in one partition to communicate with applications running in another without dependency on an external network. Industry standard and openness are design objectives for I/O in System z10 EC.
The z10 EC has five models offering from 1 to 64 processor units (PUs), which can be configured to provide a highly scalable solution designed to meet the needs of both high transaction processing applications and On Demand Business. Four models (E12, E26, E40 and E56) have 17 PUs per book, and the high capacity z10 EC Model E64 has one 17 PU book and three 20 PU books. The PUs can be characterized as either CPs, IFLs, ICFs, zAAPs or zIIPs. An easy-to-enable ability to “turn off” CPs or IFLs is available on z10 EC, allowing you to purchase capacity for future use with minimal or no impact on software billing. An MES feature will enable the “turned off” CPs or IFLs for use where you require the increased capacity. There is a wide range of upgrade options available in getting to and within the z10 EC.
The z10 EC hardware model numbers (E12, E26, E40, E56 and E64) on their own do not indicate the number of PUs which are being used as CPs. For software billing purposes only, there will be a Capacity Identifier associated with the number of PUs that are characterized as CPs. This number will be reported by the Store System Information (STSI) instruction. There is no affinity between the hardware model and the number of CPs. For example, it is possible to have a Model E26 which has 13 PUs characterized as CPs, so for software billing purposes, the STSI instruction would report 713.
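To make the capacity identifier notation concrete, a small helper like the following (purely illustrative; not an IBM interface) builds the identifier from the capacity setting level and the CP count, reproducing the 713 example above:

```python
# Illustrative only: compose a model capacity identifier in the Nxx form
# described in this section (N = capacity setting level, xx = number of CPs).
def capacity_identifier(level, cp_count):
    if not 1 <= cp_count <= 64:
        raise ValueError("z10 EC supports 1 to 64 CPs")
    if cp_count > 12 and level != 7:
        raise ValueError("models with more than 12 CPs must be full capacity (7xx)")
    return f"{level}{cp_count:02d}"

print(capacity_identifier(7, 13))   # '713' - the Model E26 example above
print(capacity_identifier(4, 1))    # '401' - sub-capacity entry point
```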
z10 EC model upgrades
There are full upgrades within the z10 EC models and
upgrades from any z9 EC or z990 to any z10 EC. Upgrade
of z10 EC Models E12, E26, E40 and E56 to the E64 is
disruptive. When upgrading to z10 EC Model E64, unlike the z9 EC, the first book is retained. There are no direct upgrades from the z9 BC or IBM eServer zSeries 900 (z900), or previous generation IBM eServer zSeries.
z10 EC Base and Sub-capacity Offerings
IBM is increasing the number of sub-capacity engines on
the z10 EC. A total of 36 sub-capacity settings are avail-
able on any hardware model for 1-12 CPs. Models with 13
CPs or greater must be full capacity.
For the z10 EC models with 1-12 CPs, there are four
capacity settings per engine for central processors (CPs).
The entry point (Model 401) is approximately 23.69% of
a full speed CP (Model 701). All specialty engines con-
tinue to run at full speed. Sub-capacity processors have availability of all z10 EC features/functions, and any-to-any upgradeability is available within the sub-capacity matrix. All CPs must be the same capacity setting size within one z10 EC.
z10 EC Model Capacity Identifiers:
• 700, 401 to 412, 501 to 512, 601 to 612 and 701 to 764
• Capacity setting 700 does not have any CP engines
• Nxx, where N = the capacity setting of the engine and xx = the number of PUs characterized as CPs in the CEC. Once xx exceeds 12, all CP engines are full capacity.
• The z10 EC has 36 additional capacity settings at the low end, available on any hardware model for 1 to 12 CPs. Models with 13 CPs or greater have to be full capacity.
• All CPs must be the same capacity within the z10 EC.
• All specialty engines run at full capacity. The one-for-one entitlement to purchase one zAAP or one zIIP for each CP purchased is the same for CPs of any capacity.
• Only 12 CPs can have granular capacity; other PUs must be CBU or characterized as specialty engines.
z10 EC Performance
The performance design of the z/Architecture can enable
the server to support a new standard of performance for
applications through expanding upon a balanced system
approach. As CMOS technology has been enhanced to support not only additional processing power, but also more PUs, the entire server is modified to support the increase in processing power. The I/O subsystem supports a greater amount of bandwidth than previous generations through internal changes, providing for a larger and faster volume of data movement into and out of the server. Support of larger amounts of data within the server required improved management of storage configurations, made available through integration of the operating system and hardware support of 64-bit addressing. The combined balanced system design allows for increases in performance across a broad spectrum of work.
Large System Performance Reference
IBM’s Large Systems Performance Reference (LSPR) method is designed to provide comprehensive z/Architecture processor capacity ratios for different configurations of Central Processors (CPs) across a wide variety of system control programs and workload environments. For z10 EC, the z/Architecture processor capacity identifier is defined with a (7XX) notation, where XX is the number of installed CPs.
LSPR workloads have been updated to reflect more closely your current and growth workloads. The classification Java Batch (CB-J) has been replaced with a new classification for Java Batch called ODE-B. The remainder of the LSPR workloads are the same as those used for the z9 EC LSPR. The typical LPAR configuration table is used to establish single-number metrics such as MIPS and MSUs. The z10 EC LSPR will rate all z/Architecture processors running in LPAR mode, 64-bit mode, and assumes that HiperDispatch is enabled.
For more detailed performance information, consult the
Large Systems Performance Reference (LSPR) available
at: http://www.ibm.com/servers/eserver/zseries/lspr/.
CPU Measurement Facility
The CPU Measurement Facility is a hardware facility which
consists of counters and samples. The facility provides a
means to collect run-time data for software performance
tuning. The detailed architecture information for this facility can be found in the System z10 Library in Resource Link™.
Based on using an LSPR mixed workload, the perfor-
mance of the z10 EC (2097) 701 is expected to be up to
1.62 times that of the z9 EC (2094) 701.
The LSPR contains the Internal Throughput Rate Ratios
(ITRRs) for the z10 EC and the previous-generation
zSeries processor families based upon measurements
and projections using standard IBM benchmarks in a con-
trolled environment. The actual throughput that any user may experience will vary depending upon considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, and the workload processed.
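As a simple illustration of how such a ratio is typically applied in capacity planning (the base capacity figure below is hypothetical, not an IBM rating, and real studies use the full LSPR methodology rather than a single multiplier):

```python
# Hypothetical example: projecting capacity from a published ratio.
# The base figure is invented; consult the LSPR for real planning data.
z9_ec_701_capacity = 600.0          # hypothetical single-engine capacity units
itr_ratio = 1.62                    # z10 EC 701 vs. z9 EC 701, from the text above
z10_ec_701_estimate = z9_ec_701_capacity * itr_ratio
print(f"Estimated z10 EC 701 capacity: {z10_ec_701_estimate:.0f} units")  # 972 units
```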
z10 EC I/O Subsystem
The z10 EC contains an I/O subsystem infrastructure which uses an I/O cage that provides 28 I/O slots and the ability to have one to three I/O cages delivering a total of 84 I/O slots, supporting ESCON, FICON Express4, FICON Express2, and FICON Express channels.
The OSA-Express3 features, available in fiber and 1000BASE-T (copper) versions, are designed for use in high-speed enterprise backbones, for local area network connectivity between campuses, to connect server farms to System z10, and to consolidate file servers onto System z10. With reduced latency, improved throughput, and up to 96 ports of LAN connectivity (when all are 4-port features, 24 features per server), you can “do more with less.”
The key benefits of OSA-Express3 compared to OSA-Express2 are:
• Reduced latency (up to 45% reduction) and increased throughput (up to 4x) for applications
• More physical connectivity to service the network and fewer required resources:
  – Fewer CHPIDs to define and manage
  – Reduction in the number of required I/O slots
  – Possible reduction in the number of I/O drawers
  – Double the port density of OSA-Express2
  – A solution to the requirement for more than 48 LAN ports (now up to 96 ports)
The OSA-Express3 features are exclusive to System z10.
OSA-Express2 availability
OSA-Express2 Gigabit Ethernet and 1000BASE-T Ethernet continue to be available for ordering, for a limited time, if you are not yet in a position to migrate to the latest release of the operating system for exploitation of two ports per PCI-E adapter and if you are not resource-constrained.
Historical summary: Functions that continue to be supported by OSA-Express3 and OSA-Express2:
• Queued Direct Input/Output (QDIO) – uses memory queues and a signaling protocol to directly exchange data between the OSA microprocessor and the network software for high-speed communication.
  – QDIO Layer 2 (Link layer) – for IP (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA) workloads. Using this mode the Open Systems Adapter (OSA) is protocol-independent and Layer-3 independent. Packet forwarding decisions are based upon the Medium Access Control (MAC) address.
  – QDIO Layer 3 (Network or IP layer) – for IP workloads. Packet forwarding decisions are based upon the IP address. All guests share OSA’s MAC address.
• Jumbo frames in QDIO mode (8992 byte frame size) when operating at 1 Gbps (fiber or copper) and 10 Gbps (fiber).
• 640 TCP/IP stacks per CHPID – for hosting more images.
• Large send for IPv4 packets – for TCP/IP traffic and CPU efficiency, offloading the TCP segmentation processing from the host TCP/IP stack to the OSA-Express feature.
• Concurrent LIC update – to help minimize the disruption of network traffic during an update; when properly configured, designed to avoid a configuration off or on (applies to CHPID types OSD and OSN).
• Multiple Image Facility (MIF) and spanned channels – for sharing OSA among logical channel subsystems.
The OSA-Express3 and OSA-Express2 Ethernet features support a range of CHPID types, including OSD and OSN.
• Multiple CIB CHPIDs can be defined on a single physical InfiniBand coupling link (up to 16 per fanout), helping to share the link among images and reducing contention in heavily utilized system configurations. It also allows for one CHPID to be directed to one CF, and another CHPID directed to another CF on the same target server, using the same port.
• Like other coupling links, external InfiniBand coupling links are also valid to pass time synchronization signals for Server Time Protocol (STP). Therefore the same coupling links can be used to exchange timekeeping information and Coupling Facility messages in a Parallel Sysplex environment.
• The IBM System z10 EC also takes advantage of InfiniBand as a higher-bandwidth replacement for the Self-Timed Interconnect (STI) I/O interface features found in prior System z servers.
The IBM System z10 EC will support up to 32 PSIFB links, as compared to 16 PSIFB links on System z9 servers. For either z10 EC or z9, the combined total of PSIFB and ICB-4 links cannot exceed 32.
InfiniBand coupling links are CHPID type CIB.
Coupling Connectivity for Parallel Sysplex
You now have five coupling link options for communication in a Parallel Sysplex environment:
1. Internal Coupling Channels (ICs) can be used for internal communication between Coupling Facilities (CFs) defined in LPARs and z/OS images on the same server.
2. Integrated Cluster Bus-4 (ICB-4) is for short distances. ICB-4 links use 10 meter (33 feet) copper cables, of which 3 meters (10 feet) is used for internal routing and strain relief. ICB-4 is used to connect z10 EC-to-z10 EC, z10 BC, z9 EC, z9 BC, z990, and z890. Note: If connecting to a z10 BC or a z9 BC with ICB-4, those servers cannot be installed with the nonraised floor feature. Also, if the z10 BC is ordered with the nonraised floor feature, ICB-4 cannot be ordered.
3. 12x InfiniBand coupling links (12x IB-SDR or 12x IB-DDR) offer an alternative to ISC-3 in the data center and facilitate coupling link consolidation. Physical links can be shared by multiple operating system images or Coupling Facility images on a single system. The 12x InfiniBand links support distances up to 150 meters (492 feet) using industry-standard OM3 50 micron multimode fiber optic cables. System z now supports 12x InfiniBand single data rate (12x IB-SDR) coupling link attachment between System z10 and System z9 general purpose servers (no longer limited to standalone coupling facilities).
4. Long Reach 1x InfiniBand coupling links (1x IB-SDR or 1x IB-DDR) are an alternative to ISC-3 and offer greater distances with support for point-to-point unrepeated distances up to 10 km (6.2 miles) using 9 micron single mode fiber optic cables. Greater distances can be supported with System z-qualified optical networking solutions. Long Reach 1x InfiniBand coupling links support the same sharing capabilities as the 12x InfiniBand version, allowing one physical link to be shared by multiple operating system images or Coupling Facility images on a single system.
5. InterSystem Channel-3 (ISC-3) supports communication at unrepeated distances up to 10 km (6.2 miles) using 9 micron single mode fiber optic cables and greater distances with System z-qualified optical networking solutions. ISC-3s are supported exclusively in peer mode (CHPID type CFP).
Note: The InfiniBand link data rates do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. Specifically, with 12x InfiniBand coupling links, while the link data rate is higher than that of ICB, the service times of coupling operations are greater, and the actual throughput is less.
Refer to the Coupling Facility Configuration Options whitepaper for a more specific explanation of when to continue using the current ICB or ISC-3 technology versus migrating to InfiniBand coupling links. The whitepaper is available at: http://www.ibm.com/systems/z/advantages/pso/whitepaper.html.
z10 Coupling Link Options

Type    Description                           Use                    Link data rate    Distance                        Max links (z10 BC/z10 EC)
PSIFB   1x IB-DDR LR                          z10 to z10             5 Gbps            10 km unrepeated (6.2 miles),   12*/32*
                                                                                       100 km repeated
PSIFB   12x IB-DDR                            z10 to z10             6 GBps            150 meters (492 ft)***          12*/32*
                                              z10 to z9              3 GBps**
IC      Internal Coupling Channel             Communication          Internal speeds   N/A                             32/32
                                              between OS and CF
ICB-4   Copper connection between OS and CF   z10, z9, z990, z890    2 GBps            10 meters*** (33 ft)            12/16
ISC-3   Fiber connection between OS and CF    z10, z9, z990, z890    2 Gbps            10 km unrepeated (6.2 miles),   48/48
                                                                                       100 km repeated

• The maximum number of Coupling Links combined cannot exceed 64 per server (PSIFB, ICB-4, ISC-3). There is a maximum of 64 Coupling CHPIDs (CIB, ICP, CBP, CFP) per server.
• For each MBA fanout installed for ICB-4s, the number of possible customer HCA fanouts is reduced by one.
* Each link supports definition of multiple CIB CHPIDs, up to 16 per fanout.
** z10 negotiates to 3 GBps (12x IB-SDR) when connected to a System z9.
*** 3 meters (10 feet) reserved for internal routing and strain relief.
Note: The InfiniBand link data rates of 6 GBps, 3 GBps, 2.5 Gbps, or 5 Gbps do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. With InfiniBand coupling links, while the link data rate may be higher than that of ICB (12x IB-SDR or 12x IB-DDR) or ISC-3 (1x IB-SDR or 1x IB-DDR), the service times of coupling operations are greater, and the actual throughput may be less than with ICB links or ISC-3 links.
Time synchronization and time accuracy on z10 EC
If you require time synchronization across multiple servers (for example, in a Parallel Sysplex environment), time accuracy for one or more System z servers, or the same time across heterogeneous platforms (System z, UNIX, AIX®, etc.), you can meet these requirements by either installing a Sysplex Timer Model 2 (9037-002) or by implementing Server Time Protocol (STP).
The Sysplex Timer Model 2 is the centralized time source that sets the Time-Of-Day (TOD) clocks in all attached servers to maintain synchronization. The Sysplex Timer Model 2 provides the stepping signal that helps ensure that all TOD clocks in a multi-server environment increment in unison to permit full read or write data sharing with integrity. The Sysplex Timer Model 2 is a key component of an IBM Parallel Sysplex environment and a GDPS® availability solution for On Demand Business.
The z10 EC server requires the External Time Reference
(ETR) feature to attach to a Sysplex Timer. The ETR fea-
ture is standard on the z10 EC and supports attachment
at an unrepeated distance of up to three kilometers (1.86
miles) and a link data rate of 8 Megabits per second.
The distance from the Sysplex Timer to the server can be
extended to 100 km using qualified Dense Wavelength
Division Multiplexers (DWDMs). However, the maximum
repeated distance between Sysplex Timers is limited to
40 km.
Server Time Protocol (STP)
STP is a message-based protocol in which timekeeping
information is transmitted between servers over externally defined coupling links. ICB-4, ISC-3, and InfiniBand coupling links can be used to transport STP messages.
Server Time Protocol (STP) Enhancements
STP configuration and time information restoration after Power on Resets (POR) or power outage: This enhancement delivers system management improvements by restoring the STP configuration and time information after Power on Resets (PORs) or a power failure that affects both servers of a two server STP-only Coordinated Timing Network (CTN). To enable this function, the customer has to select an option that will assure that no other servers can join the two server CTN. Previously, if both the Preferred Time Server (PTS) and the Backup Time Server (BTS) experienced a simultaneous power outage (site failure), or both experienced a POR, reinitialization of time and special roles (PTS, BTS, and CTS) was required. With this enhancement, you will no longer need to reinitialize the time or reassign the roles for these events.
Preview - Improved STP System Management with
new z/OS Messaging:
This is a new function planned to
generate z/OS messages when various hardware events
that affect the External Time Sources (ETS) configured for
an STP-only CTN occur. This may improve problem deter-
mination and correction times. Previously, the messages
were generated only on the Hardware Management Con-
sole (HMC).
The ability to generate z/OS messages will be supported
on IBM System z10 and System z9 servers with z/OS 1.11
(with enabling support rolled back to z/OS 1.9) in the
second half of 2009.
The following STP enhancements are available on System
z10 and System z9 servers.
The STP feature and the latest Machine Change Levels are
required.
Enhanced Network Time Protocol (NTP) client support:
This enhancement addresses the requirements for those
who need to provide the same accurate time across het-
erogeneous platforms in an enterprise.
The STP design has been enhanced to include support
for a Simple Network Time Protocol (SNTP) client on the
Support Element. By configuring an NTP server as the
STP External Time Source (ETS), the time of an STP-only
Coordinated Timing Network (CTN) can track to the time
provided by the NTP server, and maintain a time accuracy
of 100 milliseconds.
Note: NTP client support has been available since October
2007.
Enhanced accuracy to an External Time Source: The
time accuracy of an STP-only CTN has been improved by
adding the capability to configure an NTP server that has
a pulse per second (PPS) output signal as the ETS device.
This type of ETS device is available worldwide from sev-
eral vendors that provide network timing solutions.
STP has been designed to track to the highly stable,
accurate PPS signal from the NTP server, and maintain
an accuracy of 10 microseconds as measured at the PPS
input of the System z server. A number of variables such
as accuracy of the NTP server to its time source (GPS,
radio signals for example), and cable used to connect the
PPS signal will determine the ultimate accuracy of STP to
Coordinated Universal Time (UTC).
In comparison, the IBM Sysplex Timer is designed to
maintain an accuracy of 100 microseconds when attached
to an ETS with a PPS output. If STP is configured to use
a dial-out time service or an NTP server without PPS, it is
designed to provide a time accuracy of 100 milliseconds
to the ETS device.
For this enhancement, the NTP output of the NTP server
has to be connected to the Support Element (SE) LAN,
and the PPS output of the same NTP server has to be con-
nected to the PPS input provided on the External Time
Reference (ETR) feature of the System z10 or System z9
server.
Continuous availability of NTP servers used as Exter-
nal Time Source: Improved External Time Source (ETS)
availability can now be provided if you configure different
NTP servers for the Preferred Time Server (PTS) and the
Backup Time Server (BTS). Only the PTS or the BTS can
be the Current Time Server (CTS) in an STP-only CTN.
Prior to this enhancement, only the CTS calculated the
time adjustments necessary to maintain time accuracy.
With this enhancement, if the PTS/CTS cannot access the
NTP server or the pulse per second (PPS) signal from the
NTP server, the BTS, if configured to a different NTP server,
may be able to calculate the adjustment required and
propagate it to the PTS/CTS. The PTS/CTS in turn will per-
form the necessary time adjustment steering.
This avoids a manual reconfiguration of the BTS to be the CTS if the PTS/CTS is not able to access its ETS. In an ETR network, when the primary Sysplex Timer is not able to access the ETS device, the secondary Sysplex Timer takes over the role of the primary, a recovery action not always accepted by some environments. The STP design provides continuous availability of ETS while maintaining the special roles of PTS and BTS assigned by the enterprise.
The improvement is available when the ETS is configured as an NTP server or an NTP server using PPS.
NTP server on Hardware Management Console (HMC):
Improved security can be obtained by providing NTP
server support on the HMC. If an NTP server (with or with-
out PPS) is configured as the ETS device for STP, it needs
to be attached directly to the Support Element (SE) LAN.
The SE LAN is considered by many users to be a private
dedicated LAN to be kept as isolated as possible from the
intranet or Internet.
Since the HMC is normally attached to the SE LAN, pro-
viding an NTP server capability on the HMC addresses
the potential security concerns most users may have for
attaching NTP servers to the SE LAN. The HMC, using
a separate LAN connection, can access an NTP server
available either on the intranet or Internet for its time
source. Note that when using the HMC as the NTP server,
there is no pulse per second capability available. There-
fore, you should not configure the ETS to be an NTP server
using PPS.
Enhanced STP recovery when Internal Battery Feature is in use: Improved availability can be obtained when power has failed for a single server (PTS/CTS), or when there is a site power outage in a multisite configuration where the PTS/CTS is installed (the site with the BTS is a different site not affected by the power outage). If an Internal Battery Feature (IBF) is installed on your System z server, STP now has the capability of receiving notification that customer power has failed and that the IBF is engaged. When STP receives this notification from a server that has the role of the PTS/CTS, STP can automatically reassign the role of the CTS to the BTS, thus automating the recovery action and improving availability.
STP configuration and time information saved across Power-on-Resets (POR) or power outages: This enhancement delivers system management improvements by saving the STP configuration across PORs and power failures for a single server STP-only CTN. Previously, if there was a POR of the server or the server experienced a power outage, the time and assignment of the PTS and CTS roles would have to be reinitialized. You will no longer need to reinitialize the time or reassign the role of PTS/CTS across POR or power outage events.
Note: This enhancement is also available on the z990 and
z890 servers, in addition to System z10 and System z9
servers.
Application Programming Interface (API) to automate STP CTN reconfiguration: The concept of “a pair and
a spare” has been around since the original Sysplex
Couple Data Sets (CDSs). If the primary CDS becomes
unavailable, the backup CDS would take over. Many sites
have had automation routines bring a new backup CDS
online to avoid a single point of failure. This idea is being
extended to STP. With this enhancement, if the PTS fails
and the BTS takes over as CTS, an API is now available
on the HMC so you can automate the reassignment of the
PTS, BTS, and Arbiter roles. This can improve availability
by avoiding a single point of failure after the BTS has taken
over as the CTS.
Prior to this enhancement, the PTS, BTS, and Arbiter roles
had to be reassigned manually using the System (Sysplex)
Time task on the HMC. For additional details on the API,
please refer to System z Application Programming Inter-
faces, SB10-7030-11.
Additional information is available on the STP Web page:
http://www.ibm.com/systems/z/pso/stp.html.
The following Redbooks are available at the Redbooks
Web site: http://www.redbooks.ibm.com/.
• Server Time Protocol Planning Guide, SG24-7280
• Server Time Protocol Implementation Guide, SG24-7281
Internal Battery Feature Recommendation
Single data center
• CTN with 2 servers: install IBF on at least the PTS/CTS
  – Also recommend IBF on the BTS to provide recovery protection when the BTS is the CTS
• CTN with 3 or more servers: IBF not required for STP recovery, if an Arbiter is configured
Two data centers
• CTN with 2 servers (one in each data center): install IBF on at least the PTS/CTS
  – Also recommend IBF on the BTS to provide recovery protection when the BTS is the CTS
• CTN with 3 or more servers: install IBF on at least the PTS/CTS
  – Also recommend IBF on the BTS to provide recovery protection when the BTS is the CTS
Message Time Ordering (Sysplex Timer Connectivity to Coupling
Facilities)
As processor and Coupling Facility link technologies have
improved, the requirement for time synchronization toler-
ance between systems in a Parallel Sysplex environment
has become ever more rigorous. In order to enable any
exchange of time stamped information between systems
in a sysplex involving the Coupling Facility to observe the
correct time ordering, time stamps are now included in
the message-transfer protocol between the systems and
the Coupling Facility. Therefore, when a Coupling Facility
is configured on any System z10 or System z9, the Cou-
pling Facility will require connectivity to the same 9037
Sysplex Timer or Server Time Protocol (STP) confi gured
Coordinated Timing Network (CTN) that the systems in its
Parallel Sysplex cluster are using for time synchroniza-
tion. If the ICF is on the same server as a member of its
Parallel Sysplex environment, no additional connectivity is
required, since the server already has connectivity to the
Sysplex Timer.
However, when an ICF is configured on any z10 which
does not host any systems in the same Parallel Sysplex
cluster, it is necessary to attach the server to the 9037
Sysplex Timer or implement STP.
HMC System Support
The new functions available on the Hardware Management
Console (HMC) version 2.10.1 apply exclusively to System
z10. However, the HMC version 2.10.1 will continue to sup-
port System z9, zSeries, and S/390
®
G5/G6 servers.
The 2.10.1 HMC will continue to support up to two 10
Mbps or 100 Mbps Ethernet LANs. A Token Ring LAN is
not supported. The 2.10.1 HMC applications have been
updated to support HMC hardware without a diskette
drive. DVD-RAM, CD-ROM, and/or USB flash memory drive media will be used.
Family Machine Type Firmware Driver SE Version
z10 BC 2098 76 2.10.1
z10 EC 2097 73 2.10.0
z9 BC 2096 67 2.9.2
z9 EC 2094 67 2.9.2
z890 2086 55 1.8.2
z990 2084 55 1.8.2
z800 2066 3G 1.7.3
z900 2064 3G 1.7.3
9672 G6 9672/9674 26 1.6.2
9672 G5 9672/9674 26 1.6.2
Internet Protocol, Version 6 (IPv6)
HMC version 2.10.1 and Support Element (SE) version
2.10.1 can now communicate using IP Version 4 (IPv4),
IP Version 6 (IPv6), or both. It is no longer necessary to
assign a static IP address to an SE if it only needs to com-
municate with HMCs on the same subnet. An HMC and
SE can use IPv6 link-local addresses to communicate with
each other.
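For illustration, the standard Python library can classify the kind of IPv6 link-local address referred to here (the addresses shown are arbitrary examples, not ones assigned by an HMC or SE):

```python
# Classify example addresses; the values below are arbitrary examples.
import ipaddress

for addr in ("fe80::1ff:fe23:4567:890a", "192.0.2.10", "2001:db8::1"):
    ip = ipaddress.ip_address(addr)
    print(f"{addr:28s} IPv{ip.version}  link-local={ip.is_link_local}")
```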
HMC/SE support is addressing the following requirements:
• The availability of addresses in the IPv4 address space is becoming increasingly scarce
• The demand for IPv6 support is high in Asia/Pacific countries since many companies are deploying IPv6
• The U.S. Department of Defense and other U.S. government agencies are requiring IPv6 support for any products purchased after June 2008
More information on the U.S. government requirements can be found at: http://www.whitehouse.gov/omb/memoranda/fy2005/m05-22.pdf and http://www.whitehouse.gov/omb/egov/documents/IPv6_FAQs.pdf
HMC/SE Console Messenger
On servers prior to System z9, the remote browser capa-
bility was limited to Platform Independent Remote Console
(PIRC), with a very small subset of functionality. Full func-
tionality using Desktop-On-Call (DTOC) was limited to one
user at a time and was slow, so it was rarely used.
With System z9, full functionality to multiple users was
delivered with a fast Web browser solution. You liked this,
but requested the ability to communicate to other remote
users.
There is now a new console messenger task that offers
basic messaging capabilities to allow system operators or
administrators to coordinate their activities. The new task
may be invoked directly, or using a new option in Users
and Tasks. This capability is available for HMC and SE
57
local and remote users permitting interactive plain-text
communication between two users and also allowing a
user to broadcast a plain-text message to all users. This
feature is a limited messenger application and does not
interact with other messengers.
HMC z/VM Tower systems management enhancements
Building upon the previous VM systems management support from the Hardware Management Console (HMC), which offered management support for already defined virtual resources, new HMC capabilities are being made available allowing selected virtual resources to be defined. In addition, further enhancements have been made for managing defined virtual resources.
Enhancements are designed to deliver out-of-the-box inte-
grated graphical user interface-based (GUI-based) manage-
ment of selected parts of z/VM. This is especially targeted to
deliver ease-of-use for enterprises new to System z.
This helps to avoid the purchase and installation of
additional hardware or software, which may include
complicated setup procedures. You can more seam-
lessly perform hardware and selected operating system
management using the HMC Web browser-based user
interface.
Enhanced installation support for z/VM using the HMC: HMC version 2.10.1, along with Support Element (SE) version 2.10.1 on z10 EC, now gives you the ability to install Linux on System z in a z/VM virtual machine using the HMC DVD drive. This new function does not require an external network connection between z/VM and the HMC, but instead uses the existing communication path between the HMC and the SE.
This support is intended for environments that have no alternative, such as a LAN-based server, for serving the DVD contents for Linux installations. The elapsed time for installation using the HMC DVD drive can be an order of magnitude, or more, longer than the elapsed time for LAN-based alternatives.
Using the current support and the z/VM support, z/VM can be installed in an LPAR and both z/VM and Linux on System z can be installed in a virtual machine from the HMC DVD drive without requiring an external network setup or a connection between an LPAR and the HMC. This addresses security concerns and additional configuration efforts using the only other previous solution of the external network connection from the HMC to the z/VM image.
Enhanced installation support using the HMC is exclusive to System z10 and is supported by z/VM.
Implementation Services for Parallel
Sysplex
IBM Implementation Services for Parallel Sysplex CICS and
WAS Enablement
IBM Implementation Services for Parallel Sysplex Middleware – CICS enablement consists of five fixed-price and fixed-scope selectable modules:
1) CICS application review
2) z/OS CICS infrastructure review (module 1 is a prerequisite for this module)
3) CICS implementation (module 2 is a prerequisite for this module)
4) CICS application migration
5) CICS health check
IBM Implementation Services for Parallel Sysplex Middleware – WebSphere Application Server enablement consists of three fixed-price and fixed-scope selectable modules:
1) WebSphere Application Server network deployment planning and design
2) WebSphere Application Server network deployment implementation (module 1 is a prerequisite for this module)
3) WebSphere Application Server health check
For a detailed description of this service, refer to Services
Announcement 608-041, (RFA47367) dated June 24, 2008.
Implementation Services for Parallel Sysplex DB2 Data Sharing
To assist with the assessment, planning, implementation, testing, and backup and recovery of a System z DB2 data sharing environment, IBM Global Technology Services announced and made available the IBM Implementation Services for Parallel Sysplex Middleware – DB2 data sharing on February 26, 2008.
This DB2 data sharing service is designed for clients who want to:
1) Enhance the availability of data
2) Enable applications to take full advantage of all servers’ resources
3) Share application system resources to meet business goals
4) Manage multiple systems as a single system from a single point of control
5) Respond to unpredicted growth by quickly adding computing power to match business requirements without disruption
6) Build on the current investments in hardware, software, applications, and skills while potentially reducing computing costs
The offering consists of six selectable modules; each is a stand-alone module that can be individually acquired. The first module is an infrastructure assessment module, followed by five modules which address the following DB2 data sharing disciplines:
1) DB2 data sharing planning
2) DB2 data sharing implementation
3) Adding additional data sharing members
4) DB2 data sharing testing
5) DB2 data sharing backup and recovery
For more information on these services contact your IBM
representative or refer to: www.ibm.com/services/server.
GDPS
Geographically Dispersed Parallel Sysplex™ (GDPS) is designed to provide a comprehensive end-to-end continuous availability and/or disaster recovery solution for System z servers. Now Geographically Dispersed Open Clusters (GDOC) is designed to address this need for open systems. GDPS 3.5 will support GDOC for coordinated disaster recovery across System z and non-System z servers if Veritas Cluster Server is already installed.
GDPS and the Basic HyperSwap (available with z/OS
V1.9) solutions help to ensure system failures are invisible
to employees, partners and customers with dynamic disk-
swapping capabilities that ensure applications and data
are available.
GDPS is a multi-site or single-site end-to-end application
availability solution that provides the capability to manage
remote copy configuration and storage subsystems
(including IBM TotalStorage), to automate Parallel Sysplex
operation tasks and perform failure recovery from a single
point of control.
GDPS helps automate recovery procedures for planned and unplanned outages to provide near-continuous availability and disaster recovery capability. For additional information on GDPS, visit: http://www-03.ibm.com/systems/z/gdps/.
Fiber Quick Connect for FICON LX Environments
Fiber Quick Connect (FQC), an optional feature on z10 EC, is now being offered for all FICON LX (single mode fiber) channels, in addition to the current support for ESCON. FQC is designed to significantly reduce the amount of time required for on-site installation and setup of fiber optic cabling. FQC facilitates adds, moves, and changes of ESCON and FICON LX fiber optic cables in the data center, and may reduce fiber connection time by up to 80%.
FQC is for factory installation of IBM Facilities Cabling Services – Fiber Transport System (FTS) fiber harnesses for connection to channels in the I/O cage. FTS fiber harnesses enable connection to FTS direct-attach fiber trunk cables from IBM Global Technology Services.
Note: FQC supports all of the ESCON channels and all of the FICON LX channels in all of the I/O cages of the server.
z10 EC Physical Characteristics

z10 EC Configuration Detail
Features (minimum per server): 16-port ESCON channels (0; 1 port per feature reserved as a spare), FICON Express4 channels (0), FICON Express2 channels (0), FICON Express channels (0), ICB-4 links (0).
Note: Model E12 has sufficient Host Channel Adapter capacity for 58 I/O cards only.

z10 EC Environmentals

Power consumption (kW)
Model    1 I/O Cage    2 I/O Cages    3 I/O Cages
E12      9.70          13.26          13.50
E26      13.77         17.51          21.17
E40      16.92         20.66          24.40
E56      19.55         23.29          27.00
E64      19.55         23.29          27.50

Heat output (kBTU/hr)
Model    1 I/O Cage    2 I/O Cages    3 I/O Cages
E12      33.1          46.0           46.0
E26      47.7          61.0           73.7
E40      58.8          72.0           84.9
E56      67.9          81.2           93.8
E64      67.9          81.2           93.8
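For rough planning arithmetic, power draw in kW converts to heat load at approximately 1 kW = 3.412 kBTU/hr; use IBM's published kBTU/hr figures above for planning, since they can differ slightly from a straight conversion:

```python
# Approximate conversion from power draw to heat load (1 kW ~ 3.412 kBTU/hr).
# Use IBM's published kBTU/hr figures for planning; this is only a rough check.
def kw_to_kbtu_per_hr(kw):
    return kw * 3.412

print(f"{kw_to_kbtu_per_hr(9.70):.1f} kBTU/hr")    # E12 with 1 I/O cage: ~33.1
print(f"{kw_to_kbtu_per_hr(13.26):.1f} kBTU/hr")   # E12 with 2 I/O cages: ~45.2 (table lists 46.0)
```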
IBM Systems and Technology Group
Route 100
Somers, NY 10589
U.S.A
Produced in the United States of America,
04-09
All Rights Reserved
References in this publication to IBM products or services do not imply
that IBM intends to make them available in every country in which IBM
operates. Consult your local IBM business contact for information on the
products, features, and services available in your area.
IBM, IBM eServer, the IBM logo, the e-business logo, AIX, APPN, CICS,
Cool Blue, DB2, DRDA, DS8000, Dynamic Infrastructure, ECKD, ESCON,
FICON, Geographically Dispersed Parallel Sysplex, GDPS, HiperSockets, HyperSwap, IMS, Lotus, MQSeries, MVS, OS/390, Parallel Sysplex,
PR/SM, Processor Resource/Systems Manager, RACF, Rational, Redbooks, Resource Link, RETAIN, REXX, RMF, S/390, Scalable Architecture
for Financial Reporting, Sysplex Timer, Systems Director Active Energy
Manager, System Storage, System z, System z9, System z10, Tivoli,
TotalStorage, VSE/ESA, VTAM, WebSphere, z9, z10, z10 BC, z10 EC, z/
Architecture, z/OS, z/VM, z/VSE, and zSeries are trademarks or registered
trademarks of the International Business Machines Corporation in the
United States and other countries.
InfiniBand is a trademark and service mark of the InfiniBand Trade Association.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States or other
countries.
Linux is a registered trademark of Linus Torvalds in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States
and other countries.
Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation in the United States, other countries, or both.
Intel is a trademark of the Intel Corporation in the United States and other
countries.
Other trademarks and registered trademarks are the properties of their
respective companies.
IBM hardware products are manufactured from new parts, or new and
used parts. Regardless, our warranty terms apply.
Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled
environment. The actual throughput that any user will experience will vary
depending upon considerations such as the amount of multiprogramming
in the user’s job stream, the I/O configuration, the storage configuration,
and the workload processed. Therefore, no assurance can be given that
an individual user will achieve throughput improvements equivalent to the
performance ratios stated here.
All performance information was determined in a controlled environment.
Actual results may vary. Performance information is provided “AS IS” and
no warranties or guarantees are expressed or implied by IBM.
Photographs shown are of engineering prototypes. Changes may be
incorporated in production models.
This equipment is subject to all applicable FCC rules and will comply with
them upon delivery.
Information concerning non-IBM products was obtained from the suppliers of those products. Questions concerning those products should be
directed to those suppliers.
All customer examples described are presented as illustrations of how
those customers have used IBM products and the results they may have
achieved. Actual environmental costs and performance characteristics
may vary by customer.
ZSO03018-USEN-02