concurrent parts replacement, and concurrent hardware
upgrades. Exclusive to the z10 EC is the ability to hot
swap ICB-4 and InfiniBand hub cards.
Enterprises with IBM System z9 EC and IBM z990 may
upgrade to any z10 Enterprise Class model. Model
upgrades within the z10 EC are concurrent with the
exception of the E64, which is disruptive. If you desire
a consolidation platform for your mainframe and Linux
capable applications, you can add capacity and even
expand your current application workloads in a cost-effec-
tive manner. If your traditional and new applications are
growing, you may find the z10 EC a good fit with its base
qualities of service and its specialty processors designed
for assisting with new workloads. Value is leveraged with
improved hardware price/performance and System z10 EC
software pricing strategies.
The z10 EC processor introduces IBM System z10
Enterprise Class with Quad Core technology, advanced
pipeline design and enhanced performance on CPU inten-
sive workloads. The z10 EC is specifically designed and
optimized for full z/Architecture compatibility. New features
enhance enterprise data serving performance and industry-leading
virtualization capabilities, and improve energy efficiency
at the system and data center levels. The z10 EC is designed to
further extend and integrate key platform characteristics
such as dynamic flexible partitioning and resource man-
agement in mixed and unpredictable workload environ-
ments, providing scalability, high availability and Qualities
of Service (QoS) to emerging applications such as
WebSphere, Java and Linux.
With the logical partition (LPAR) group capacity limit on
z10 EC, z9 EC and z9 BC, you can now specify LPAR
group capacity limits, allowing you to define each LPAR
with its own capacity and one or more groups of LPARs
on a server. This is designed to allow z/OS to manage the
groups in such a way that the sum of the LPARs’ CPU uti-
lization within a group will not exceed the group’s defined
capacity. Each LPAR in a group can still optionally con-
tinue to define an individual LPAR capacity limit.
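The capping behavior described above can be sketched in a few lines. This is an illustrative model only, not IBM WLM code: the function name and the proportional scale-down policy are assumptions made for the sketch, and the real z/OS management algorithm differs.

```python
# Illustrative sketch (not an IBM interface): enforce an LPAR group
# capacity limit alongside optional per-LPAR limits, in MSUs.

def effective_cap(lpar_demands, group_limit, lpar_limits=None):
    """Scale LPAR demands so the group total never exceeds group_limit.

    lpar_demands: dict of LPAR name -> demanded capacity (MSUs)
    group_limit:  defined capacity for the whole group (MSUs)
    lpar_limits:  optional dict of individual per-LPAR limits (MSUs)
    """
    lpar_limits = lpar_limits or {}
    # First apply each LPAR's own optional defined capacity.
    capped = {name: min(d, lpar_limits.get(name, d))
              for name, d in lpar_demands.items()}
    total = sum(capped.values())
    if total <= group_limit:
        return capped
    # Group total exceeded: scale every member down proportionally.
    factor = group_limit / total
    return {name: d * factor for name, d in capped.items()}
```

For example, two LPARs demanding 300 and 200 MSUs under a 400 MSU group limit would both be scaled by 0.8, to 240 and 160 MSUs.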
The z10 EC has five models with a total of 100 capacity
settings available as new build systems and as upgrades
from the z9 EC and z990.
The five z10 EC models are designed with a multi-book
system structure that provides up to 64 Processor Units
(PUs) that can be characterized as either Central Proces-
sors (CPs), IFLs, ICFs, zAAPs or zIIPs.
Some of the significant enhancements in the z10 EC that
help bring improved performance, availability and function
to the platform have been identified. The following sections
highlight the functions and features of the z10 EC.
z10 EC Design and Technology
The System z10 EC is designed to provide balanced
system performance. From processor storage to the
system’s I/O and network channels, end-to-end bandwidth
is provided and designed to deliver data where and when
it is needed.
The processor subsystem comprises one to four
books connected via a point-to-point SMP network. The
change to a point-to-point connectivity eliminates the need
for the jumper book, as had been used on the System z9
and z990 systems. The z10 EC design provides growth
paths up to a 64 engine system where each of the 64
PUs has full access to all system resources, specifically
memory and I/O.
Each book comprises a Multi-Chip Module (MCM),
memory cards and I/O fanout cards. The MCMs, which
measure approximately 96 x 96 millimeters, contain the
Processor Unit (PU) chips. The “SCD” and “SCC” chips of
the z990 and z9 have been replaced by a single “SC” chip,
which includes both the L2 cache and the SMP fabric
(“storage controller”) functions. There are two SC chips on
each MCM, each of which is connected to all five CP chips
on that MCM. The MCM contains 103 glass ceramic layers
to provide interconnection between the chips and the
off-module environment. Four models (E12, E26, E40 and
E56) have 17 PUs per book, and the high capacity z10 EC
Model E64 has one 17 PU book and three 20 PU books.
Each PU measures 21.973 mm x 21.1658 mm and has an
L1 cache divided into a 64 KB cache for instructions and a
128 KB cache for data. Each PU also has an L1.5 cache.
This cache is 3 MB in size. Each L1 cache has a Transla-
tion Look-aside Buffer (TLB) of 512 entries associated with
it. The PU, which uses a new high-frequency z/Architecture
microprocessor core, is built on CMOS 11S chip technology
and has a cycle time of approximately 0.23 nanoseconds.
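The quoted cycle time pins down the core clock rate, since frequency is the reciprocal of cycle time. A quick check (the 0.23 ns figure is taken from the text above):

```python
# Clock frequency from cycle time: f = 1 / t.
cycle_time_s = 0.23e-9  # approximately 0.23 nanoseconds, as quoted
frequency_ghz = 1.0 / cycle_time_s / 1e9
print(f"about {frequency_ghz:.2f} GHz")  # roughly 4.35 GHz
```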
The design of the MCM technology on the z10 EC provides
the flexibility to configure the PUs for different uses; there
are two spares and up to 11 System Assist Processors
(SAPs) standard per system. The remaining inactive PUs
on each installed MCM can be characterized as CPs, ICF
processors for Coupling Facility applications, IFLs for
Linux applications and for z/VM hosting Linux as a guest,
System z10 Application Assist Processors (zAAPs), System
z10 Integrated Information Processors (zIIPs), or optional
SAPs, giving you tremendous flexibility in establishing the
best system for running applications. Each model of the
z10 EC must always be ordered with at least one CP, IFL
or ICF.
Each book can support from the 16 GB minimum memory
up to 384 GB, and up to 1.5 TB per system. 16 GB of the
total memory is delivered and reserved for the fixed Hard-
ware System Area (HSA). There are up to 48 IFB links per
system at 6 GBps each.
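The memory figures above fit together arithmetically. A small sketch, where the function names are illustrative and the per-book maximum and HSA size are the values quoted in the text:

```python
# z10 EC memory figures as quoted above: up to four books at 384 GB
# each give the 1.5 TB system maximum; the fixed 16 GB HSA is
# reserved out of the total installed memory.
BOOK_MAX_GB = 384
HSA_GB = 16

def max_system_memory_gb(books):
    """Maximum installed memory for a 1- to 4-book configuration."""
    assert 1 <= books <= 4
    return books * BOOK_MAX_GB

def customer_usable_gb(installed_gb):
    """Memory left after the fixed Hardware System Area is carved out."""
    return installed_gb - HSA_GB

print(max_system_memory_gb(4) / 1024)  # 1.5 (TB)
```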
The z10 EC supports a combination of Memory Bus
Adapter (MBA) and Host Channel Adapter (HCA) fanout
cards. New MBA fanout cards are used exclusively for
ICB-4. New ICB-4 cables are needed for z10 EC and are
only available on models E12, E26, E40 and E56. The
E64 model may not have ICBs. The InfiniBand Multiplexer
(IFB-MP) card replaces the Self-Timed Interconnect Mul-
tiplexer (STI-MP) card. There are two types of HCA fanout
cards: the HCA2-C, which is copper and is always used to
connect to I/O (the IFB-MP card), and the HCA2-O, which
is optical and is used for customer InfiniBand coupling,
which is being announced and made generally available in 2Q08.
Data transfers are direct between books via the level 2
cache chip in each MCM. Level 2 Cache is shared by all
PU chips on the MCM. PR/SM provides the ability to
configure and operate as many as 60 Logical Partitions which
may be assigned processors, memory and I/O resources
from any of the available books.
z10 EC Models
[Figure: z10 EC models E12, E26, E40, E56 and E64, with
concurrent upgrade paths from the z990 and z9 EC to the
z10 EC.]
The z10 EC has been designed to offer high performance
and an efficient I/O structure. All z10 EC models ship with
two frames: an A-Frame and a Z-Frame, which together
support the installation of up to three I/O cages. The z10
EC will continue to use the Cargo cage for its I/O, support-
ing up to 960 ESCON® and 256 FICON® channels on the
Model E12 (64 I/O features) and up to 1,024 ESCON and
336 FICON channels (84 I/O features) on the Models E26,
E40, E56 and E64.
To increase the I/O device addressing capability, the I/O
subsystem provides support for multiple subchannel
sets (MSS), which are designed to allow improved device
connectivity for Parallel Access Volumes (PAVs). To sup-
port the highly scalable multi-book system design, the z10
EC I/O subsystem uses the Logical Channel Subsystem
(LCSS) which provides the capability to install up to 1024
CHPIDs across three I/O cages (256 per operating system
image). The Parallel Sysplex Coupling Link architecture
and technology continues to support high speed links pro-
viding efficient transmission between the Coupling Facility
and z/OS systems. HiperSockets provides high-speed
capability to communicate among virtual servers and logi-
cal partitions. HiperSockets is now improved with IP
version 6 (IPv6) support, which is based on high-speed TCP/
IP memory-speed transfers and provides value in allowing
applications running in one partition to communicate with
applications running in another without dependency on
an external network. Industry standard and openness are
design objectives for I/O in System z10 EC.
The z10 EC has five models offering from 1 to 64 pro-
cessor units (PUs), which can be configured to provide
a highly scalable solution designed to meet the needs
of both high transaction processing applications and On
Demand Business. Four models (E12, E26, E40 and E56)
have 17 PUs per book, and the high capacity z10 EC
Model E64 has one 17 PU book and three 20 PU books.
The PUs can be characterized as either CPs, IFLs, ICFs,
zAAPs or zIIPs. An easy-to-enable ability to “turn off” CPs
or IFLs is available on z10 EC, allowing you to purchase
capacity for future use with minimal or no impact on
software billing. An MES feature will enable the “turned
off” CPs or IFLs for use where you require the increased
capacity. A wide range of upgrade options is available
for getting to and within the z10 EC.
Sub-Capacity Models (CP capacity relative to full speed):
• 7xx = 100%
• 6xx ~ 69.35%
• 5xx ~ 51.20%
• 4xx ~ 23.69%
• xx = 01 through 12

The z10 EC hardware model numbers (E12, E26, E40, E56
and E64) on their own do not indicate the number of PUs
which are being used as CPs. For software billing pur-
poses only, there will be a Capacity Indicator associated
with the number of PUs that are characterized as CPs. This
number will be reported by the Store System Information
(STSI) instruction for software billing purposes only. There
is no affinity between the hardware model and the number
of CPs. For example, it is possible to have a Model E26
which has 13 PUs characterized as CPs, so for software
billing purposes, the STSI instruction would report 713.
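The reporting convention described here can be sketched as a tiny helper. Note that `capacity_id` is a hypothetical illustration of the naming scheme, not an IBM interface:

```python
# Hypothetical helper (not an IBM API): build the model capacity
# identifier that STSI would report, from a capacity setting digit
# and the number of PUs characterized as CPs.
def capacity_id(setting, n_cps):
    """setting: 4, 5, 6 or 7; n_cps: PUs characterized as CPs."""
    if setting not in (4, 5, 6, 7):
        raise ValueError("capacity setting must be 4xx, 5xx, 6xx or 7xx")
    if setting != 7 and not (1 <= n_cps <= 12):
        raise ValueError("sub-capacity settings apply only to 1-12 CPs")
    if setting == 7 and not (0 <= n_cps <= 64):
        raise ValueError("full-speed models have 0 to 64 CPs")
    return f"{setting}{n_cps:02d}"

print(capacity_id(7, 13))  # "713", the Model E26 example above
```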
z10 EC model upgrades
There are full upgrades within the z10 EC models and
upgrades from any z9 EC or z990 to any z10 EC. Upgrade
of z10 EC Models E12, E26, E40 and E56 to the E64 is
disruptive. When upgrading to the z10 EC Model E64, unlike
on the z9 EC, the first book is retained. There are no direct
upgrades from the z9 BC or IBM eServer zSeries 900
(z900), or previous generation IBM eServer zSeries.
IBM is increasing the number of sub-capacity engines on
the z10 EC. A total of 36 sub-capacity settings are avail-
able on any hardware model for 1-12 CPs. Models with 13
CPs or greater must be full capacity.
For the z10 EC models with 1-12 CPs, there are four
capacity settings per engine for central processors (CPs).
The entry point (Model 401) is approximately 23.69% of
a full speed CP (Model 701). All specialty engines con-
tinue to run at full speed. Sub-capacity processors have
availability of all z10 EC features/functions, and any-to-any
upgradeability is available within the sub-capacity matrix.
All CPs must be the same capacity setting size within one
z10 EC.
z10 EC Model Capacity IDs:
• 700, 401 to 412, 501 to 512, 601 to 612 and 701 to 764
• Capacity setting 700 does not have any CP engines
• nxx, where n = the capacity setting of the engine, and
xx = the number of PUs characterized as CPs in the CEC
• Once xx exceeds 12, all CP engines are full capacity
z10 EC Base and Subcapacity Offerings
• The z10 EC has 36 additional capacity settings at the
low end
• Available on ANY H/W Model for 1 to 12 CPs. Models
with 13 CPs or greater have to be full capacity
• All CPs must be the same capacity within the z10 EC
• All specialty engines run at full capacity. The one-for-one
entitlement to purchase one zAAP or one zIIP for each
CP purchased is the same for CPs of any capacity.
• Only 12 CPs can have granular capacity; other PUs
must be CBU or characterized as specialty engines
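The sub-capacity matrix in the bullets above can be enumerated to confirm the count of 36 settings. This is an illustrative sketch: the relative-capacity figures are the ones quoted in this section, while the linear per-engine scaling is a simplifying assumption (real capacity ratios come from LSPR):

```python
# Three reduced settings (4xx, 5xx, 6xx) across 1-12 CPs give the
# 36 additional sub-capacity settings described above.
RELATIVE_CAPACITY = {7: 1.0, 6: 0.6935, 5: 0.5120, 4: 0.2369}

sub_capacity_ids = [f"{s}{n:02d}" for s in (4, 5, 6) for n in range(1, 13)]
assert len(sub_capacity_ids) == 36  # the 36 additional settings

def relative_capacity(setting, n_cps):
    """Rough relative capacity vs. one full-speed CP (illustrative;
    real single-number metrics come from the LSPR)."""
    return n_cps * RELATIVE_CAPACITY[setting]

print(sub_capacity_ids[0], relative_capacity(4, 1))  # 401 0.2369
```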
z10 EC Performance
The performance design of the z/Architecture can enable
the server to support a new standard of performance for
applications through expanding upon a balanced system
approach. As CMOS technology has been enhanced to
support not only additional processing power, but also
more PUs, the entire server is modied to support the
increase in processing power. The I/O subsystem supports
a greater amount of bandwidth than previous generations
through internal changes, providing for a larger and faster
volume of data movement into and out of the server. Sup-
port of larger amounts of data within the server required
improved management of storage configurations, made
available through integration of the operating system and
hardware support of 64-bit addressing. The combined bal-
anced system design allows for increases in performance
across a broad spectrum of work.
Large System Performance Reference
IBM’s Large Systems Performance Reference (LSPR)
method is designed to provide comprehensive
z/Architecture processor capacity ratios for different
configurations of Central Processors (CPs) across a wide
variety of system control programs and workload environ-
ments. For the z10 EC, the z/Architecture processor capacity
indicator is defined with a (7XX) notation, where XX is the
number of installed CPs.
LSPR workloads have been updated to reflect more
closely your current and growth workloads. The classifica-
tion Java Batch (CB-J) has been replaced with a new clas-
sification for Java Batch called ODE-B. The remainder of
the LSPR workloads are the same as those used for the z9
EC LSPR. The typical LPAR configuration table is used to
establish single-number-metrics such as MIPS and MSUs.
The z10 EC LSPR will rate all z/Architecture processors
running in LPAR mode, 64-bit mode, and assumes that
HiperDispatch is enabled.
For more detailed performance information, consult the
Large Systems Performance Reference (LSPR) available