IBM has previewed z/VSE 4.2. When available, z/VSE 4.2 is designed to help address the needs of VSE clients with growing core VSE workloads. z/VSE V4.2 is designed to support:
• More than 255 VSE tasks to help clients grow their CICS workloads and to ease migration from CICS/VSE to CICS Transaction Server for VSE/ESA™
• Up to 32 GB of processor storage
• Sub-Capacity Reporting Tool running “natively”
• Encryption Facility for z/VSE as an optional priced feature
• IBM System Storage TS3400 Tape Library (via the TS1120 Controller)
• IBM System Storage TS7740 Virtualization Engine Release 1.3
z/VSE V4.2 plans to continue the focus on hybrid solutions exploiting z/VSE and Linux on System z, service-oriented architecture (SOA), and security. It is the preferred replacement for z/VSE V4.1, z/VSE V3, or VSE/ESA, and is designed to protect and leverage existing VSE information assets.
z/TPF
z/TPF is a 64-bit operating system that allows you to move legacy applications into an open development environment, leveraging large-scale memory spaces for increased speed, diagnostics and functionality. The open development environment allows access to commodity skills and enhanced access to open code libraries, both of which can be used to lower development costs. Large memory spaces can be used to increase both system and application efficiency, as I/Os or memory management overhead can be eliminated. z/TPF is designed to support:
• 64-bit mode
• Linux development environment (GCC and HLASM for Linux)
• 32 processors/cluster
• Up to 84* engines/processor
• 40,000 modules
• Workload License Charge
Linux on System z
The System z10 EC supports the following Linux on
System z distributions (most recent service levels):
• Novell SUSE SLES 9
• Novell SUSE SLES 10
• Red Hat RHEL 4
• Red Hat RHEL 5
z10 EC

Operating System                              ESA/390 (31-bit)   z/Architecture (64-bit)
z/OS V1R8, 9 and 10                           No                 Yes
z/OS V1R7 (1)(2) with IBM Lifecycle
  Extension for z/OS V1.7                     No                 Yes
Linux on System z (2): Red Hat RHEL 4
  and Novell SUSE SLES 9                      Yes                Yes
Linux on System z (2): Red Hat RHEL 5
  and Novell SUSE SLES 10                     No                 Yes
z/VM V5R2 (3), 3 (3) and 4                    No*                Yes
z/VSE V3R1 (2)(4)                             Yes                No
z/VSE V4R1 (2)(5) and 2 (5)                   No                 Yes
z/TPF V1R1                                    No                 Yes
TPF V4R1 (ESA mode only)                      Yes                No
1. z/OS V1.7 support on the z10 BC™ requires the Lifecycle Extension for z/OS V1.7, 5637-A01. The Lifecycle Extension for z/OS R1.7 plus the zIIP Web Deliverable is required on z10 to enable HiperDispatch (a zIIP is not required). z/OS V1.7 support was withdrawn September 30, 2008. The Lifecycle Extension for z/OS V1.7 (5637-A01) makes fee-based corrective service for z/OS V1.7 available through September 2009. With this Lifecycle Extension, z/OS V1.7 supports the z10 BC server. Certain functions and features of the z10 BC server require later releases of z/OS. For a complete list of software support, see the PSP buckets and the Software Requirements section of the System z10 BC announcement letter, dated October 21, 2008.
2. Compatibility Support for listed releases. Compatibility support allows the OS to IPL and operate on the z10 BC.
3. Requires Compatibility Support, which allows z/VM to IPL and operate on the z10, providing IBM System z9® functionality for the base OS and System z9 guests. *z/VM supports 31-bit and 64-bit guests.
4. z/VSE V3 operates in 31-bit mode only. It does not implement z/Architecture, and specifically does not implement 64-bit mode capabilities. z/VSE is designed to exploit select features of IBM System z10, System z9, and IBM eServer™ zSeries® hardware.
5. z/VSE V4 is designed to exploit 64-bit real memory addressing, but will not support 64-bit virtual memory addressing.
Note: Refer to the z/OS, z/VM, and z/VSE subsets of the 2098DEVICE Preventive Service Planning (PSP) bucket prior to installing a z10 BC.
Every day, the IT system needs to be available to users – customers who need access to the company Web site, line-of-business personnel who need access to the system, application developers who are constantly keeping the environment current, and the IT staff operating and maintaining the environment. If applications are not consistently available, the business can suffer.
The z10 EC continues our commitment to deliver improvements in hardware Reliability, Availability and Serviceability (RAS) with every new System z server. These include microcode driver enhancements, dynamic segment sparing for memory, and the fixed HSA. The z10 EC is a server that can help keep applications up and running in the event of planned or unplanned disruptions to the system.
IBM System z servers stand apart from the competition and have stood the test of time with our business resiliency solutions. Our coupling solutions with Parallel Sysplex technology allow for greater scalability and availability. The InfiniBand Coupling Links on the z10 EC provide a high-speed alternative to the 10-meter limitation of ICB-4, since they will be available in lengths up to 150 meters.
What the z10 EC provides over its predecessors are improvements in processor granularity offerings, more options for specialty engines, security enhancements, additional high-availability characteristics, Concurrent Driver Upgrade (CDU) improvements, and enhanced networking and On Demand offerings. The z10 EC provides our IBM customers an option for continued growth, continuity, and upgradeability.
The IBM System z10 EC builds upon the structure
introduced on the IBM System z9 EC – scalability and
z/Architecture. The System z10 EC expands upon a key
attribute of the platform – availability – to help ensure a
resilient infrastructure designed to satisfy the demands
of your business. With the potential for increased perfor-
mance and capacity, you have an opportunity to continue
to consolidate diverse applications on a single platform.
The z10 EC is designed to provide up to 1.7 times the total system capacity of the z9 EC, and has up to triple the available memory. The maximum number of Processor Units (PUs) has grown from 54 to 64, and memory has increased from 128 GB per book and 512 GB per system to 384 GB per book and 1.5 TB per system.
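As a quick consistency check, the quoted 1.5 TB system maximum is simply the 384 GB per-book maximum times the four books of a fully configured system:

```python
# The quoted per-system memory follows from the per-book maximum
# and the four-book maximum configuration.
books = 4                       # maximum books per z10 EC system
memory_per_book_gb = 384        # maximum memory per book
total_gb = books * memory_per_book_gb
assert total_gb == 1536         # 1,536 GB in total
assert total_gb / 1024 == 1.5   # i.e. the quoted 1.5 TB per system
```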
The z10 EC will continue to use the Cargo cage for its I/O,
supporting up to 960 Channels on the Model E12 (64 I/O
features) and up to 1,024 (84 I/O features) on the Models
E26, E40, E56 and E64.
HiperDispatch helps provide increased scalability and per-
formance of higher n-way and multi-book z10 EC systems
by improving the way workload is dispatched across the
server. HiperDispatch accomplishes this by recognizing
the physical processor where the work was started and
then dispatching subsequent work to the same physical
processor. This intelligent dispatching helps reduce the
movement of cache and data and is designed to improve
CPU time and performance. HiperDispatch is available
only with new z10 EC PR/SM and z/OS functions.
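Conceptually, the dispatching behavior can be sketched as below. This is only an illustrative model of affinity dispatching, not the actual PR/SM or z/OS implementation; the class and method names are invented for the example.

```python
# Simplified illustration of affinity-based dispatching: work units are
# routed back to the physical processor where they first ran, so their
# cache contents are more likely to still be resident.
class AffinityDispatcher:
    def __init__(self, num_processors):
        self.num_processors = num_processors
        self.affinity = {}      # work-unit id -> preferred processor
        self.next_cpu = 0       # round-robin start point for new work

    def dispatch(self, work_id):
        # Re-dispatch known work to its previous processor (cache-warm).
        if work_id in self.affinity:
            return self.affinity[work_id]
        # New work: spread across processors round-robin.
        cpu = self.next_cpu
        self.next_cpu = (self.next_cpu + 1) % self.num_processors
        self.affinity[work_id] = cpu
        return cpu

d = AffinityDispatcher(num_processors=4)
first = d.dispatch("txn-A")           # new work, assigned round-robin
d.dispatch("txn-B")                   # lands on a different processor
assert d.dispatch("txn-A") == first   # same processor on re-dispatch
```

The benefit modeled here is exactly the one described above: by keeping repeat work on the same physical processor, less cache state has to move between processors.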
Processor Units (cores) defined as Internal Coupling Facilities (ICFs), Integrated Facility for Linux (IFLs), System z10 Application Assist Processors (zAAPs) and System z10 Integrated Information Processors (zIIPs) are no longer grouped together in one pool as on the z990, but are grouped in their own pools, where they can be managed separately. The separation significantly simplifies capacity planning and management for LPARs and can have an effect on weight management, since CP weights and zAAP and zIIP weights can now be managed separately. Capacity BackUp (CBU) features are available for IFLs, ICFs, zAAPs and zIIPs.
For LAN connectivity, the z10 EC provides an OSA-Express3 2-port 10 Gigabit Ethernet (GbE) Long Reach feature along with the OSA-Express3 Gigabit Ethernet SX and LX features with four ports per feature. The z10 EC continues to support OSA-Express2 1000BASE-T and GbE Ethernet features, and supports IP version 6 (IPv6) on HiperSockets. OSA-Express2 OSN (OSA for NCP) is also available on the System z10 EC to support the Channel Data Link Control (CDLC) protocol, providing direct access from host operating system images to the Communication Controller for Linux (CCL) on the z10 EC, z10 BC, z9 EC and z9 BC using OSA-Express3 or OSA-Express2, helping eliminate the requirement for external hardware for communications.
Additional channel and networking improvements include support for Layer 2 and Layer 3 traffic, an FCP management facility for z/VM and Linux on System z, FCP security improvements, and Linux support for HiperSockets IPv6. STP enhancements include additional support for NTP clients and STP over InfiniBand links.
Like the System z9 EC, the z10 EC offers a configurable Crypto Express2 feature, with PCI-X adapters that can be individually configured as a secure coprocessor or an accelerator for SSL, the TKE workstation with optional Smart Card Reader, and provides the following CP Assist for Cryptographic Function (CPACF) algorithms:
• DES, TDES, AES-128, AES-192, AES-256
• SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
• Pseudo Random Number Generation (PRNG)
The z10 EC is designed to deliver the industry-leading Reliability, Availability and Serviceability (RAS) customers expect from System z servers. The RAS design aims to reduce all sources of outages – unscheduled, scheduled and planned – and to further reduce planned outages by reducing preplanning requirements.
z10 EC preplanning improvements are designed to avoid planned outages and include:
• Flexible Customer Initiated Upgrades
• Enhanced Driver Maintenance
– Multiple “from” sync point support
• Reduced pre-planning to avoid Power-On-Reset
– 16 GB for HSA
– Dynamic I/O enabled by default
– Add Logical Channel Subsystems (LCSS)
– Change LCSS Subchannel Sets
– Add/delete logical partitions
• Designed to eliminate a logical partition deactivate/activate/IPL
– Dynamic Change to Logical Processor Definition – z/VM 5.3
– Dynamic Change to Logical Cryptographic Coprocessor Definition – z/OS ICSF
Additionally, several service enhancements have been designed to avoid scheduled outages, including concurrent parts replacement and concurrent hardware upgrades. Exclusive to the z10 EC is the ability to hot swap ICB-4 and InfiniBand hub cards.
Enterprises with IBM System z9 EC and IBM z990 servers may upgrade to any z10 Enterprise Class model. Model upgrades within the z10 EC are concurrent, with the exception of the E64, which is disruptive. If you desire a consolidation platform for your mainframe and Linux-capable applications, you can add capacity and even expand your current application workloads in a cost-effective manner. If your traditional and new applications are growing, you may find the z10 EC a good fit with its base qualities of service and its specialty processors designed for assisting with new workloads. Value is leveraged with improved hardware price/performance and System z10 EC software pricing strategies.
The z10 EC processor introduces IBM System z10 Enterprise Class with quad-core technology, an advanced pipeline design and enhanced performance on CPU-intensive workloads. The z10 EC is specifically designed and optimized for full z/Architecture compatibility. New features enhance enterprise data serving performance, industry-leading virtualization capabilities, and energy efficiency at the system and data center levels. The z10 EC is designed to further extend and integrate key platform characteristics such as dynamic flexible partitioning and resource management in mixed and unpredictable workload environments, providing scalability, high availability and Qualities of Service (QoS) to emerging applications such as WebSphere, Java and Linux.
With the logical partition (LPAR) group capacity limit on z10 EC, z10 BC, z9 EC and z9 BC, you can now specify LPAR group capacity limits, allowing you to define each LPAR with its own capacity and one or more groups of LPARs on a server. This is designed to allow z/OS to manage the groups in such a way that the sum of the LPARs’ CPU utilization within a group will not exceed the group’s defined capacity. Each LPAR in a group can still optionally define an individual LPAR capacity limit.
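A rough sketch of the group-capacity idea follows. This is illustrative only – it is not the actual z/OS Workload Manager algorithm, and the LPAR names and MSU figures are invented for the example:

```python
def scale_to_group_cap(lpar_usage, group_cap):
    """Scale LPAR CPU usage proportionally so that the group total
    does not exceed the group's defined capacity (illustrative only)."""
    total = sum(lpar_usage.values())
    if total <= group_cap:
        return dict(lpar_usage)            # under the cap: unchanged
    factor = group_cap / total             # proportional reduction
    return {name: use * factor for name, use in lpar_usage.items()}

# Three LPARs in one group with a 100-MSU group capacity limit.
usage = {"LPAR1": 60, "LPAR2": 50, "LPAR3": 40}   # 150 MSUs demanded
capped = scale_to_group_cap(usage, group_cap=100)
assert round(sum(capped.values())) == 100          # group total capped
```

The key property modeled is the one stated above: individual LPARs may vary, but the sum across the group is held at or below the group's defined capacity.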
The z10 EC has five models with a total of 100 capacity settings available as new-build systems and as upgrades from the z9 EC and z990. The five z10 EC models are designed with a multi-book system structure that provides up to 64 Processor Units (PUs) that can be characterized as either Central Processors (CPs), IFLs, ICFs, zAAPs or zIIPs.
Some of the significant enhancements in the z10 EC that help bring improved performance, availability and function to the platform have been identified. The following sections highlight the functions and features of the z10 EC.
z10 EC Design and Technology
The System z10 EC is designed to provide balanced
system performance. From processor storage to the
system’s I/O and network channels, end-to-end bandwidth
is provided and designed to deliver data where and when
it is needed.
The processor subsystem is comprised of one to four
books connected via a point-to-point SMP network. The
change to a point-to-point connectivity eliminates the need
for the jumper book, as had been used on the System z9
and z990 systems. The z10 EC design provides growth paths up to a 64-engine system where each of the 64 PUs has full access to all system resources, specifically memory and I/O.
Each book is comprised of a Multi-Chip Module (MCM), memory cards and I/O fanout cards. The MCMs, which measure approximately 96 x 96 millimeters, contain the Processor Unit (PU) chips and the “SC” chips; the “SCD” and “SCC” chips of the z990 and z9 have been replaced by a single “SC” chip, which includes both the L2 cache and the SMP fabric (“storage controller”) functions. There are two SC chips on each MCM, each of which is connected to all five CP chips on that MCM. The MCM contains 103 glass-ceramic layers to provide interconnection between the chips and the off-module environment. Four models (E12, E26, E40 and E56) have 17 PUs per book, and the high-capacity z10 EC Model E64 has one 17-PU book and three 20-PU books. Each PU measures 21.973 mm x 21.1658 mm and has an L1 cache divided into a 64 KB cache for instructions and a 128 KB cache for data. Each PU also has a 3 MB L1.5 cache. Each L1 cache has a Translation Look-aside Buffer (TLB) of 512 entries associated with it. The PU, which uses a high-frequency z/Architecture microprocessor core, is built on CMOS 11S chip technology and has a cycle time of approximately 0.23 nanoseconds.
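The quoted cycle time of approximately 0.23 nanoseconds corresponds to a clock frequency of roughly 4.4 GHz, as a quick calculation shows:

```python
# Clock frequency implied by the ~0.23 ns cycle time quoted above.
cycle_time_ns = 0.23
frequency_ghz = 1.0 / cycle_time_ns   # GHz = 1 / (cycle time in ns)
assert 4.3 < frequency_ghz < 4.4      # roughly 4.35 GHz
```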
The design of the MCM technology on the z10 EC provides the flexibility to configure the PUs for different uses; there are two spares and up to 11 System Assist Processors (SAPs) standard per system. The remaining inactive PUs on each installed MCM are available to be characterized as CPs, ICF processors for Coupling Facility applications, IFLs for Linux applications and for z/VM hosting Linux as a guest, System z10 Application Assist Processors (zAAPs), System z10 Integrated Information Processors (zIIPs), or optional SAPs, providing tremendous flexibility in establishing the best system for running applications. Each model of the z10 EC must always be ordered with at least one CP, IFL or ICF.
Each book can support from the 16 GB minimum up to 384 GB of memory, with up to 1.5 TB per system. 16 GB of the total memory is delivered and reserved for the fixed Hardware System Area (HSA). There are up to 48 IFB links per system, at 6 GBps each.
The z10 EC supports a combination of Memory Bus Adapter (MBA) and Host Channel Adapter (HCA) fanout cards. New MBA fanout cards are used exclusively for ICB-4. New ICB-4 cables are needed for the z10 EC, and ICB-4 is only available on models E12, E26, E40 and E56; the E64 model may not have ICBs. The InfiniBand Multiplexer (IFB-MP) card replaces the Self-Timed Interconnect Multiplexer (STI-MP) card. There are two types of HCA fanout cards: the HCA2-C, which is copper and is always used to connect to I/O (IFB-MP card), and the HCA2-O, which is optical and used for customer InfiniBand coupling.
Data transfers are direct between books via the Level 2 cache chip in each MCM. The Level 2 cache is shared by all PU chips on the MCM. PR/SM provides the ability to configure and operate as many as 60 logical partitions, which may be assigned processors, memory and I/O resources from any of the available books.
z10 EC Model
The z10 EC has been designed to offer high performance and an efficient I/O structure. All z10 EC models ship with two frames: an A-Frame and a Z-Frame, which together support the installation of up to three I/O cages. The z10 EC will continue to use the Cargo cage for its I/O, supporting up to 960 ESCON® and 256 FICON channels on the Model E12 (64 I/O features) and up to 1,024 ESCON and 336 FICON channels (84 I/O features) on the Models E26, E40, E56 and E64.
To increase the I/O device addressing capability, the I/O subsystem provides support for multiple subchannel sets (MSS), which are designed to allow improved device connectivity for Parallel Access Volumes (PAVs). To support the highly scalable multi-book system design, the z10 EC I/O subsystem uses the Logical Channel Subsystem (LCSS), which provides the capability to install up to 1,024 CHPIDs across three I/O cages (256 per operating system image). The Parallel Sysplex Coupling Link architecture and technology continues to support high-speed links, providing efficient transmission between the Coupling Facility and z/OS systems. HiperSockets provides high-speed capability to communicate among virtual servers and logical partitions. HiperSockets is now improved with IP version 6 (IPv6) support; it is based on high-speed TCP/IP memory-speed transfers and provides value in allowing applications running in one partition to communicate with applications running in another without dependency on an external network. Industry standards and openness are design objectives for I/O in the System z10 EC.
The z10 EC has five models offering from 1 to 64 processor units (PUs), which can be configured to provide a highly scalable solution designed to meet the needs of both high-transaction-processing applications and On Demand Business. Four models (E12, E26, E40 and E56) have 17 PUs per book, and the high-capacity z10 EC Model E64 has one 17-PU book and three 20-PU books. The PUs can be characterized as either CPs, IFLs, ICFs, zAAPs or zIIPs. An easy-to-enable ability to “turn off” CPs or IFLs is available on the z10 EC, allowing you to purchase capacity for future use with minimal or no impact on software billing. An MES feature will enable the “turned off” CPs or IFLs for use when you require the increased capacity. There is a wide range of upgrade options available in getting to and within the z10 EC.
The z10 EC hardware model numbers (E12, E26, E40, E56 and E64) on their own do not indicate the number of PUs which are being used as CPs. For software billing purposes only, there will be a Capacity Identifier associated with the number of PUs that are characterized as CPs. This number will be reported by the Store System Information (STSI) instruction for software billing purposes only. There is no affinity between the hardware model and the number of CPs. For example, it is possible to have a Model E26 which has 13 PUs characterized as CPs; for software billing purposes, the STSI instruction would report 713.
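The capacity-identifier scheme described above can be illustrated with a small sketch. The helper function is hypothetical, written only to show the numbering convention (capacity-setting digit followed by a two-digit CP count):

```python
def capacity_identifier(capacity_setting, num_cps):
    """Form the model capacity identifier reported by STSI:
    first digit = capacity setting (7 = full speed), last two
    digits = number of PUs characterized as CPs.
    Illustrative sketch of the scheme described in the text."""
    return f"{capacity_setting}{num_cps:02d}"

# A Model E26 with 13 PUs characterized as full-speed CPs reports 713.
assert capacity_identifier(7, 13) == "713"
# A one-CP entry model at the lowest capacity setting reports 401.
assert capacity_identifier(4, 1) == "401"
```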
z10 EC model upgrades
There are full upgrades within the z10 EC models and upgrades from any z9 EC or z990 to any z10 EC. Upgrade of z10 EC Models E12, E26, E40 and E56 to the E64 is disruptive. When upgrading to the z10 EC Model E64, unlike on the z9 EC, the first book is retained. There are no direct upgrades from the z9 BC, the IBM eServer zSeries 900 (z900), or previous-generation IBM eServer zSeries servers.
z10 EC Base and Sub-capacity Offerings
IBM is increasing the number of sub-capacity engines on the z10 EC. A total of 36 sub-capacity settings are available on any hardware model with 1-12 CPs. Models with 13 CPs or greater must be full capacity.
For the z10 EC models with 1-12 CPs, there are four capacity settings per engine for central processors (CPs). The entry point (Model 401) is approximately 23.69% of a full-speed CP (Model 701). All specialty engines continue to run at full speed. Sub-capacity processors have availability of z10 EC features/functions, and any-to-any upgradeability is available within the sub-capacity matrix. All CPs must be the same capacity setting size within one z10 EC.
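The figures above fit together as a quick check, assuming the three reduced-capacity tiers are the 4xx, 5xx and 6xx settings below the full-speed 7xx tier:

```python
# Consistency check on the sub-capacity figures quoted above.
reduced_tiers = 3          # 4xx, 5xx, 6xx settings below full-speed 7xx
granular_cp_counts = 12    # granular capacity applies to 1-12 CPs only
assert reduced_tiers * granular_cp_counts == 36   # the 36 settings

# Entry point: a Model 401 delivers ~23.69% of a full-speed Model 701.
entry_fraction = 0.2369
full_speed = 1.0
assert entry_fraction < full_speed
```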
z10 EC Model Capacity Identifiers:
• 700, 401 to 412, 501 to 512, 601 to 612 and 701 to 764
• Capacity setting 700 does not have any CP engines
• Nxx, where N = the capacity setting of the engine and xx = the number of PUs characterized as CPs in the CEC; once xx exceeds 12, all CP engines are full capacity
• The z10 EC has 36 additional capacity settings at the low end, available on any hardware model for 1 to 12 CPs; models with 13 CPs or greater have to be full capacity
• All CPs must be the same capacity within the z10 EC
• All specialty engines run at full capacity; the one-for-one entitlement to purchase one zAAP or one zIIP for each CP purchased is the same for CPs of any capacity
• Only 12 CPs can have granular capacity; other PUs must be CBU or characterized as specialty engines
z10 EC Performance
The performance design of the z/Architecture can enable the server to support a new standard of performance for applications by expanding upon a balanced system approach. As CMOS technology has been enhanced to support not only additional processing power but also more PUs, the entire server has been modified to support the increase in processing power. The I/O subsystem supports a greater amount of bandwidth than previous generations through internal changes, providing for a larger and faster volume of data movement into and out of the server. Support for larger amounts of data within the server required improved management of storage configurations, made available through integration of the operating system and hardware support of 64-bit addressing. The combined balanced system design allows for increases in performance across a broad spectrum of work.
Large System Performance Reference
IBM’s Large Systems Performance Reference (LSPR) method is designed to provide comprehensive z/Architecture processor capacity ratios for different configurations of Central Processors (CPs) across a wide variety of system control programs and workload environments. For the z10 EC, the z/Architecture processor capacity identifier is defined with a (7XX) notation, where XX is the number of installed CPs. The actual throughput that any user may experience will vary depending upon considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, and the workload processed.
LSPR workloads have been updated to reflect more closely your current and growth workloads. The classification Java Batch (CB-J) has been replaced with a new classification for Java Batch called ODE-B. The remainder of the LSPR workloads are the same as those used for the z9 EC LSPR. The typical LPAR configuration table is used to establish single-number metrics such as MIPS and MSUs. The z10 EC LSPR will rate all z/Architecture processors running in LPAR mode and 64-bit mode, and assumes that HiperDispatch is enabled.
For more detailed performance information, consult the Large Systems Performance Reference (LSPR) available at: http://www.ibm.com/servers/eserver/zseries/lspr/.
CPU Measurement Facility
The CPU Measurement Facility is a hardware facility which consists of counters and samples. The facility provides a means to collect run-time data for software performance tuning. The detailed architecture information for this facility can be found in the System z10 Library in Resource Link™.
Based on using an LSPR mixed workload, the perfor-
mance of the z10 EC (2097) 701 is expected to be up to
1.62 times that of the z9 EC (2094) 701.
The LSPR contains the Internal Throughput Rate Ratios
(ITRRs) for the z10 EC and the previous-generation
zSeries processor families based upon measurements
and projections using standard IBM benchmarks in a controlled environment.
z10 EC I/O Subsystem
The z10 EC contains an I/O subsystem infrastructure
which uses an I/O cage that provides 28 I/O slots and
the ability to have one to three I/O cages delivering a
total of 84 I/O slots. ESCON, FICON Express4, FICON