their competitors with new capabilities and deliver clear
returns on investments.
Welcome to the on demand era, the next phase of
e-business, in which companies move beyond simply
integrating their processes to actually being able to sense
and respond to fluctuating market conditions and provide
products and services to customers on demand. While the
former notion of on demand as a utility capability is a key
component, on demand companies have much broader
capabilities.
What does an on demand company look like?
• Responsive: It can sense and respond in real time to the changing needs of customers, employees, suppliers and partners.
• Variable: It must be capable of employing variable cost structures to do business at high levels of productivity, cost control, capital efficiency and financial predictability.
• Focused: It concentrates on its core competencies –
areas where it has a differentiating advantage – and
draws on the skills of strategic partners to manage
needs outside of these competencies.
• Resilient: It can handle the ups and downs of the global
market, and manage changes and threats with consistent availability, security and privacy – around the world,
around the clock.
To support an on demand business, the IT infrastructure must evolve. At its heart, the data center must transition to reflect these needs: it must be responsive to changing demands, variable to support the diverse environment, flexible so that applications can run on the optimal resources at any point in time, and resilient to support an always-open-for-business environment.
The on demand era plays to the strengths of the IBM eServer zSeries®. The IBM eServer zSeries 900 (z900) was launched in 2000 and was the first IBM server "designed from the ground up for e-business." The latest member of the family, the IBM eServer zSeries 990 (z990), brings enriched functions that are required for the on demand data center.
The "responsive" data center needs systems that are managed to the quality of service goals of the business, that can be upgraded transparently to the user, and that are adaptable to the changing requirements of the business. With the zSeries you have a server with high levels of reliability and a balanced design to ensure high levels of utilization and consistently high service to the user. The capacity on demand features continue to evolve, helping to ensure that upgrading the servers is timely and meets the needs of your business. It's not just the capacity of the servers that can be changed on demand, but also the mix of workload and the allocation of resources to reflect the evolving needs and priorities of the business.
The variable data center needs to be able to respond to the ever-changing demands that occur when you support multiple diverse workloads as a single entity. It must respond to maintain the quality of service required, and the cost of utilizing the resources must reflect the changing environment. The zSeries Intelligent Resource Director (IRD), which combines three key zSeries technologies, z/OS® Workload Manager (WLM), Logical Partitioning and Parallel Sysplex® technology, helps ensure that your most important workloads get the resources they need, and constantly manages the resources according to the changing priorities of the business. With Workload License Charges (WLC), as the resources required by different applications, middleware and operating systems change over time, the software costs change to reflect this. In addition, new virtual Linux servers can be added in just minutes with zSeries virtualization technology to respond rapidly to huge increases in user activity.
The flexible data center must be adaptable to support change and ease integration. This is achieved through a combination of open and industry standards along with the adaptability to direct resources where they are required. The zSeries, along with other IBM servers, has been investing in standards for years. Key is the support for Linux, but let's not forget Java™ and XML and industry standard technologies such as FCP, Ethernet and SCSI.
Finally, the on demand data center must be designed to be resilient. The zSeries has long been renowned for reliability and availability. The zSeries platform will help protect against both scheduled and unscheduled outages, and GDPS® enables protection from the loss of complete sites.
The New zSeries from IBM – Impressive Investment, Unprecedented Performance
IBM's ongoing investment in zSeries technology has produced a re-invention of the zSeries server: the z990. The z990 makes the mainframe platform more relevant to current business success than ever before. Developed at an investment in excess of $1 billion, the new z990 introduces a host of new benefits to meet the needs of today's on demand business. The major difference is the innovative book structure of the z990. This new packaging of processors, memory and I/O connections allows you to add incremental capacity to a zSeries server as you need it. This makes the z990 the most flexible and cost-effective zSeries server to date.
IBM’s investment in zSeries doesn’t stop here. To solidify
the commitment to zSeries, IBM introduces the “Mainframe
Charter” that provides a framework for future investment
and a statement of IBM’s dedication to deliver ongoing
value to zSeries customers in their transformation to on
demand business.
Tools for Managing e-business
The IBM eServer product line is backed by a comprehensive suite of offerings and resources that provide value at every stage of IT implementation. These tools can help customers test possible solutions, obtain financing, plan
and implement applications and middleware, manage
capacity and availability, improve performance and obtain
technical support across the entire infrastructure. The
result is an easier way to handle the complexities and
rapid growth of e-business. In addition, IBM Global Ser-
vices experts can help with business and IT consulting,
business transformation and total systems management
services, as well as customized e-business solutions.
z/Architecture
The zSeries is based on the z/Architecture™, which is designed to reduce the bottlenecks associated with the lack of addressable memory and to automatically direct resources to priority work through the Intelligent Resource Director. The z/Architecture is a 64-bit superset of ESA/390. It is implemented on the z990 to allow full 64-bit real and virtual storage support. A maximum of 256 GB of real storage is available on z990 servers. The z990 can define any LPAR as having 31-bit or 64-bit addressability.
z/Architecture has:
• 64-bit general registers.
• New 64-bit integer instructions. Most ESA/390 architecture instructions with 32-bit operands have new 64-bit and 32- to 64-bit analogs.
• 64-bit addressing support for both operands and instructions, for both real addressing and virtual addressing.
• 64-bit address generation. z/Architecture provides 64-bit virtual addressing in an address space, and 64-bit real addressing.
• 64-bit control registers. z/Architecture control registers can specify regions, segments, or can force virtual addresses to be treated as real addresses.
• A prefix area expanded from 4K to 8K bytes.
• New instructions that provide quad-word storage consistency.
• A 64-bit I/O architecture that allows CCW indirect data addressing to designate data addresses above 2 GB for both format-0 and format-1 CCWs.
• IEEE Floating Point architecture with twelve new instructions for 64-bit integer conversion.
• A 64-bit SIE architecture that allows a z/Architecture server to support both ESA/390 (31-bit) and z/Architecture (64-bit) guests. Zone Relocation is expanded to 64-bit for LPAR and z/VM®.
• 64-bit operands and general registers for all Cryptographic instructions.
• An implementation of 64-bit z/Architecture that can help reduce problems associated with the lack of addressable memory by making the addressing capability virtually unlimited (16 Exabytes).
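As a quick sanity check on that last figure, 64-bit addressing spans 2^64 bytes, which works out to 16 exabytes (binary). A minimal sketch:

```python
# 64-bit addressing spans 2**64 bytes; one (binary) exabyte is 2**60 bytes.
ADDRESS_BITS = 64
addressable_bytes = 2 ** ADDRESS_BITS
exabytes = addressable_bytes // 2 ** 60
print(exabytes)  # 16
```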
z/Architecture Operating System Support
The z/Architecture is a tri-modal architecture capable of
executing in 24-bit, 31-bit, or 64-bit addressing modes.
Operating systems and middleware products have been modified to exploit the new capabilities of the z/Architecture. Immediate benefit can be realized through the elimination of the overhead of Central Storage to Expanded Storage page movement and the relief provided for those constrained by the 2 GB real storage limit of ESA/390. Application programs can run unmodified on the zSeries family of servers.
Expanded Storage (ES) is still supported for operating sys-
tems running in ESA/390 mode (31-bit). For z/Architecture
mode (64-bit), ES is supported by z/VM. ES is not supported
by z/OS in z/Architecture mode.
Although z/OS does not support Expanded Storage when running under the new architecture, all of the Hiperspace™ and VIO APIs, as well as the Move Page (MVPG) instruction, continue to operate in a compatible manner. There is no need to change products that use Hiperspaces.
Some of the exploiters of z/Architecture for z/OS include:
• DB2 Universal Database™ Server for z/OS
• IMS™
• Virtual Storage Access Method (VSAM)
• Remote Dual Copy (XRC)
• Tape and DASD access methods
IBM eServer zSeries 990

Operating System                     ESA/390   z/Arch    Compati-   Exploita-
                                     (31-bit)  (64-bit)  bility     tion
OS/390® Version 2 Release 10         Yes       Yes       Yes        No
z/OS Version 1 Release 2             No*       Yes       Yes        No
z/OS Version 1 Release 3             No*       Yes       Yes        No
z/OS Version 1 Release 4             No*       Yes       Yes        Yes
z/OS Version 1 Release 5, 6          No        Yes       Included   Included
Linux on S/390                       Yes       No        Yes        Yes
Linux on zSeries                     No        Yes       Yes        Yes
z/VM Version 3 Release 1             Yes       Yes       Yes        No
z/VM Version 4 Release 3             Yes       Yes       Yes        No
z/VM Version 4 Release 4             Yes       Yes       Included   Yes
z/VM Version 5 Release 1 (3Q04)      No        Yes       Included   Yes
VSE/ESA™ Version 2 Release 6, 7      Yes       No        Yes        Yes
z/VSE Version 3 Release 1            Yes       No        Yes        Yes
TPF Version 4 Release 1              Yes       No        Yes        No
(ESA mode only)

* Customers with the z/OS Bimodal Migration Accommodation Offering may run in 31-bit mode per the terms and conditions of the Offering. The Bimodal Offering is available for z/OS only.
IBM eServer zSeries is the enterprise class e-business server optimized for the integration, transactions and data of the next generation e-business world. In implementing the z/Architecture with new technology solutions, the zSeries models are designed to facilitate the IT business transformation and reduce the stress of business-to-business and business-to-customer growth pressure. The zSeries represents an advanced generation of servers that feature enhanced performance, support for zSeries Parallel Sysplex clustering, improved hardware management controls and innovative functions to address e-business processing.
The z990 server enhances performance by exploiting new technology through many design enhancements. With a new superscalar microprocessor and the CMOS 9S-SOI technology, the z990 is designed to further extend and integrate key platform characteristics such as dynamic flexible partitioning and resource management in mixed and unpredictable workload environments, providing scalability, high availability and Quality of Service to emerging e-business applications such as WebSphere®, Java and Linux.
The z990 has 4 models available as new build systems
and as upgrades from the z900.
The four z990 models are designed with a multi-book
system structure which provides up to 32 Processor Units
(PUs) that can be characterized prior to the shipment of the
machine as either Central Processors (CPs), Integrated
Facility for Linux (IFLs), or Internal Coupling Facilities (ICFs).
The new IBM eServer zSeries Application Assist Processor (zAAP), planned to be available on the IBM eServer zSeries 990 (z990) and zSeries 890 (z890) servers, is an attractively priced specialized processing unit that provides a strategic z/OS Java execution environment for customers who desire the powerful integration advantages and traditional Qualities of Service of the zSeries platform.

When configured with general purpose Central Processors (CPs) within logical partitions running z/OS, zAAPs can help you extend the value of your existing zSeries investments and strategically integrate and run e-business Java workloads on the same server as your database, helping to simplify and reduce the infrastructure required for Web applications while helping to lower your overall total cost of ownership.

zAAPs are designed to operate asynchronously with the general purpose CPs to execute Java programming under control of the IBM Java Virtual Machine (JVM). This can help reduce the demands and capacity requirements on general purpose CPs, which may then be available for reallocation to other zSeries workloads. The amount of general purpose CP savings may vary based on the amount of Java application code executed by the zAAP(s). And best of all, IBM JVM processing cycles can be executed on the configured zAAPs with no anticipated modifications to the Java application(s). Execution of the JVM processing cycles on a zAAP is a function of the IBM Software Developer's Kit (SDK) for z/OS Java 2 Technology Edition, z/OS 1.6 (or z/OS.e 1.6) and the innovative Processor Resource/Systems Manager™ (PR/SM™).

Notably, execution of the Java applications on zAAPs, within the same z/OS LPAR as their associated database subsystems, can also help simplify the server infrastructure and improve operational efficiencies. For example, use of zAAPs to strategically integrate Java Web applications with backend databases could reduce the number of TCP/IP programming stacks, firewalls, and physical interconnections (and their associated processing) that might otherwise be required when the application servers and their database servers are deployed on separate physical server platforms.

Essentially, zAAPs allow customers to purchase additional processing power exclusively for z/OS Java application execution without affecting the total MSU rating or machine model designation. Conceptually, zAAPs are very similar to a System Assist Processor (SAP); they cannot execute an Initial Program Load and only assist the general purpose CPs for the execution of Java programming. Moreover, IBM does not impose software charges on zAAP capacity. Additional IBM software charges will apply when additional general purpose CP capacity is used. Customers are encouraged to contact their specific ISVs/USVs directly to determine if their charges will be affected.
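The CP savings described above scale with the share of the workload that is zAAP-eligible Java. A rough back-of-envelope sketch; the function name and the workload percentages are illustrative, not IBM sizing figures:

```python
def cp_demand_after_zaap(total_mips: float, java_fraction: float,
                         zaap_eligible: float = 1.0) -> float:
    """Estimate general purpose CP demand once zAAP-eligible Java
    cycles are offloaded. java_fraction is the share of total work
    that is Java; zaap_eligible is the share of that Java work the
    JVM can actually run on a zAAP. All figures hypothetical."""
    offloaded = total_mips * java_fraction * zaap_eligible
    return total_mips - offloaded

# Hypothetical 1000 MIPS workload, 40% Java, 80% of it zAAP-eligible:
print(round(cp_demand_after_zaap(1000, 0.40, 0.80)))  # 680
```

Actual savings vary with the Java content of each workload, which is why the text hedges the benefit.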
With the introduction of the z990, customers can expect to
see the following performance improvements:
Number of CPs   Base        Ratio
1               z900 2C1    1.54 - 1.61
8               z900 2C8    1.52 - 1.56
16              z900 216    1.51 - 1.55
32              z900 216    2.46 - 2.98

Note: More than 16 CPs requires a minimum of two operating system images.
The Large System Performance Reference (LSPR) should
be referenced when considering performance on the z990.
Visit: ibm.com/servers/eserver/zseries/lspr/ for more infor-
mation on LSPR.
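These ratios multiply a base z900 capacity into an estimated z990 capacity range. A minimal sketch; the base capacity figure is hypothetical and the ratios are taken from the table above:

```python
# Ratio ranges by CP count, mapping a z900 base model to the
# estimated z990 capacity multiplier for the same number of CPs.
RATIOS = {1: (1.54, 1.61), 8: (1.52, 1.56), 16: (1.51, 1.55), 32: (2.46, 2.98)}

def z990_capacity_range(base_capacity: float, n_cps: int) -> tuple:
    lo, hi = RATIOS[n_cps]
    return base_capacity * lo, base_capacity * hi

# Hypothetical base of 250 capacity units on a one-way z900 2C1:
print([round(x, 2) for x in z990_capacity_range(250, 1)])  # [385.0, 402.5]
```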
To support the new scalability of the z990, a new improvement to the I/O Subsystem has been introduced to "break the barrier" of 256 channels per Central Electronic Complex (CEC). This provides "horizontal" growth by allowing the definition of up to four Logical Channel SubSystems, each capable of supporting up to 256 channels, giving a total of up to 1024 CHPIDs per CEC. The increased scalability is further supported by the increase in the number of Logical Partitions available, from the current 15 LPARs to 30 LPARs. There is still a 256-channel limit per operating system image.
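The limits stack multiplicatively: four LCSSs of 256 channels each give the 1024-CHPID ceiling, while a single operating system image, running under one LCSS, still sees at most 256. A sketch of that arithmetic:

```python
LCSS_PER_CEC = 4
CHPIDS_PER_LCSS = 256

max_chpids_per_cec = LCSS_PER_CEC * CHPIDS_PER_LCSS
print(max_chpids_per_cec)  # 1024

# An operating system image runs in one LPAR under one LCSS,
# so it remains bounded by a single LCSS's 256 channels.
max_channels_per_os_image = CHPIDS_PER_LCSS
print(max_channels_per_os_image)  # 256
```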
These are some of the significant enhancements in the zSeries 990 server that bring improved performance, availability and function to the platform. The following sections highlight the functions and features of the server.
z990 Design and Technology
The z990 is designed to provide balanced system performance. From processor storage to the system's I/O and network channels, end-to-end bandwidth is provided and designed to deliver data where and when it is needed.

The z990 provides a significant increase in system scalability and opportunity for server consolidation by providing four models, from one to four MultiChip Modules (MCMs), delivering up to a maximum 32-way configuration. The MCMs are configured in a book package, with each book comprised of a MultiChip Module (MCM), memory cards and Self-Timed Interconnects. The MCM, which measures approximately 93 x 93 millimeters (42% smaller than the z900's), contains the processor unit (PU) chips, the cache structure chips and the processor storage controller chips. The MCM contains 101 glass ceramic layers to provide interconnection between the chips and the off-module environment. In total, there is approximately 0.4 kilometer of internal copper wiring on this module.
This new MCM packaging delivers an MCM 42% smaller than the z900's, with 23% more I/O connections and a 133% improvement in I/O density. Each MCM provides support for 12 PUs and 32 MB of level 2 cache. Each PU contains 122 million transistors and measures 14.1 mm x 18.9 mm. The design of the MCM technology on the z990 provides the flexibility to configure the PUs for different uses; two of the PUs are reserved for use as System Assist Processors (SAPs), and two are reserved as spares. The remaining 8 inactive PUs on the MCM are available to be characterized as either CPs, ICF processors for Coupling Facility applications, IFLs for Linux applications, IBM eServer zSeries Application Assist Processors (zAAPs) for Java applications or as optional SAPs, providing the customer with tremendous flexibility in establishing the best system for running applications. Each model of the z990 must always be ordered with at least one CP, IFL or ICF.
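Per book, the 12 PUs therefore split into 2 SAPs, 2 spares, and 8 characterizable engines. A sketch of that bookkeeping; the function and its type names are illustrative, following the engine types named in the text:

```python
PUS_PER_BOOK = 12
RESERVED_SAPS = 2
RESERVED_SPARES = 2
CHARACTERIZABLE = PUS_PER_BOOK - RESERVED_SAPS - RESERVED_SPARES  # 8

def characterize(cps=0, ifls=0, icfs=0, zaaps=0, opt_saps=0):
    """Assign a book's 8 configurable PUs; every z990 model must be
    ordered with at least one CP, IFL or ICF."""
    assigned = cps + ifls + icfs + zaaps + opt_saps
    if assigned > CHARACTERIZABLE:
        raise ValueError("more PUs requested than the book provides")
    if cps + ifls + icfs == 0:
        raise ValueError("order at least one CP, IFL or ICF")
    return {"CP": cps, "IFL": ifls, "ICF": icfs,
            "zAAP": zaaps, "SAP": opt_saps,
            "inactive": CHARACTERIZABLE - assigned}

print(characterize(cps=5, ifls=2))
# {'CP': 5, 'IFL': 2, 'ICF': 0, 'zAAP': 0, 'SAP': 0, 'inactive': 1}
```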
The PU, which uses the latest chip technology from IBM
semiconductor laboratories, is built on CMOS 9S-SOI with
copper interconnections. The 14.1 mm x 18.9 mm chip has
a cycle time of 0.83 nanoseconds. Implemented on this
chip is the z/Architecture with its 64-bit capabilities, including instructions, 64-bit General Purpose Registers and translation facilities.
Each book can support up to 64 GB of memory, delivered on two memory cards, and 12 STIs, giving a total of 256 GB of memory and 48 STIs on the D32 model. The memory is delivered on 8 GB, 16 GB or 32 GB memory cards which can be purchased in 8 GB increments. The minimum memory is 16 GB. The two memory cards associated with each book must be the same size. Each book has 3 MBAs and each MBA supports 4 STIs.
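The per-book numbers above multiply out to the D32 maximums. A quick arithmetic sketch:

```python
BOOKS_D32 = 4
MEM_PER_BOOK_GB = 64   # two same-size memory cards per book
MBAS_PER_BOOK = 3
STIS_PER_MBA = 4

print(BOOKS_D32 * MEM_PER_BOOK_GB)                # 256 GB of memory
print(BOOKS_D32 * MBAS_PER_BOOK * STIS_PER_MBA)   # 48 STIs
```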
All books are interconnected with a super-fast bi-directional redundant ring structure which allows the system to be operated and controlled by PR/SM operating in LPAR mode as a symmetrical, memory coherent multiprocessor. PR/SM provides the ability to configure and operate as many as 30 Logical Partitions, which may be assigned processors, memory and I/O resources from any of the available books. The z990 supports LPAR mode only (i.e. basic mode is no longer supported).
The MultiChip Module is the technology cornerstone for flexible PU deployment in the z990 models. For most models, the ability of the MCM to have inactive PUs allows such features as Capacity Upgrade on Demand (CUoD), Customer Initiated Upgrades (CIU), and the ability to add CPs, ICFs, IFLs, and zAAPs dynamically, providing nondisruptive upgrade of processing capability. The ability to add CPs also lets a z990 with spare PU capacity become a backup for other systems in the enterprise, expanding the z990 system to meet an emergency outage situation. This is called Capacity BackUp (CBU). The greater capacity of the z990 offers customers even more flexibility for using this feature to back up critical systems in their enterprise.
In order to support the highly scalable multi-book system design, the I/O SubSystem has been enhanced by introducing a new Logical Channel SubSystem (LCSS) which provides the capability to install up to 1024 CHPIDs across three I/O cages (256 per operating system image). I/O improvements in the Parallel Sysplex Coupling Link architecture and technology support faster and more efficient transmission between the Coupling Facility and production systems. HiperSockets™ provides high-speed capability to communicate among virtual servers and Logical Partitions; this is based on high-speed TCP/IP memory speed transfers and provides value in allowing applications running in one partition to communicate with applications running in another without dependency on an external network. Industry standards and openness are design objectives for I/O in the z990. The improved I/O subsystem delivers new horizons in I/O capability and has eliminated the 256-channel limit to I/O attachments for a mainframe.
z990 Family Models
The z990 offers 4 models, the A08, B16, C24 and D32, which can be configured to give customers a highly scalable solution to meet the needs of both high transaction processing applications and the demands of e-business. The new model structure provides between 1 and 32 configurable Processor Units (PUs), which can be characterized as either CPs, IFLs, ICFs, or zAAPs. A new easy-to-enable ability to "turn off" CPs is available on the z990 (a similar offering was available via RPQ on the z900). The objective is to allow customers to purchase capacity for future use with minimal or no impact on software billing. An MES feature will enable the CPs for use when the customer requires the increased capacity. There is a wide range of upgrade options available, which are indicated in the z990 models chart.
Unlike other zSeries server offerings, it is no longer possible to tell from the hardware model (A08, B16, C24, D32) the number of PUs that are being used as CPs. For software billing purposes only, there will be a "software" model associated with the number of PUs that are characterized as CPs. This number will be reported by the Store System Information (STSI) instruction for software billing purposes only. There is no affinity between the hardware model and the number of CPs. For example, it is possible to have a Model B16 which has 5 PUs characterized as CPs, so for software billing purposes the STSI instruction would report 305. The more normal configuration for a 5-way would be an A08 with 5 PUs characterized as CPs; the STSI instruction would also report 305 for that configuration.
* S/W Model refers to the number of installed CPs, as reported by the STSI instruction. Model 300 does not have any CPs.
Note: For MSU values, refer to: ibm.com/servers/eserver/zseries/library/swpriceinfo/
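The software model reported by STSI is effectively "3" followed by the two-digit count of characterized CPs, independent of the hardware model. A sketch of that mapping; the function name is illustrative:

```python
def software_model(n_cps: int) -> str:
    """z990 software model as reported by Store System Information:
    '3' plus the zero-padded CP count (5 CPs -> '305'), regardless
    of whether the hardware is an A08, B16, C24 or D32."""
    if not 0 <= n_cps <= 32:
        raise ValueError("z990 supports 0 to 32 CPs")
    return f"3{n_cps:02d}"

print(software_model(5))   # 305 - same for a B16 or an A08 with 5 CPs
print(software_model(0))   # 300 - no CPs (e.g. an IFL- or ICF-only box)
```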
z990 and IBM eServer On/Off Capacity on Demand
IBM eServer On/Off Capacity on Demand (On/Off CoD)
is offered with z990 processors to provide a temporary
increase in capacity to meet customer's peak workload
requirements. The scope of On/Off Capacity on Demand
is to allow customers to temporarily turn on unassigned/
unowned PUs available within the current model for use as
CPs or IFLs. Temporary use of CFs, memory and channels
is not supported.
Before customers can order temporary capacity, they must have a signed agreement for the Customer Initiated Upgrade (CIU) facility. In addition to that agreement, they must agree to specific terms and conditions which govern the use of temporary capacity. Typically, On/Off Capacity on Demand will be ordered through CIU; however, an RPQ will be available if no RSF connection is present.
Although CBU and On/Off Capacity on Demand can both reside on the server, the activation of On/Off Capacity on Demand is mutually exclusive with Capacity BackUp (CBU), and no physical hardware upgrade will be supported while On/Off Capacity on Demand is active.
This important new function for zSeries gives customers greater control and the ability to add capacity to meet the requirements of an unpredictable on demand application environment, and takes the on demand offerings to the next level of flexibility. It is designed to help customers match cost with capacity utilization and manage periodic business spikes. On/Off Capacity on Demand is designed to provide a low-risk way to deploy pilot applications, and it is designed to enable a customer to grow capacity rationally and proportionately with market demand.
The z990 has also been designed to offer a high performance and efficient I/O structure. All z990 models ship with two frames, the A-Frame and the Z-Frame, which support the installation of up to three I/O cages. Each I/O cage has the capability of plugging up to 28 I/O cards. When used in conjunction with the software that supports Logical Channel SubSystems, it is possible to have up to 420 ESCON® channels in a single I/O cage and a maximum of 1024 channels across 3 I/O cages. Alternatively, three I/O cages will support up to 120 FICON™ channels and up to 360 ESCON channels. Each book will support up to 12 STIs for I/O connectivity. Seven STIs are required to support the 28 channel slots in each I/O cage, so in order to support a fully configured three I/O cage system, 21 STIs are required. Achieving this maximum I/O connectivity requires at least a B16 model, which provides 24 STIs.

The following chart shows the upgrades from z900 to z990. There are any-to-any upgrades from any of the z900 general purpose models. A z900 Coupling Facility Model 100 must first be upgraded to a z900 general purpose model before upgrading to a z990. There are no upgrades from 9672 G5/G6 or IBM eServer zSeries 800 (z800).

Model Upgrades (z900 to z990)
[Chart: z900 Model 100 upgrades via a z900 general purpose model; z900 Models 101 - 109, 1C1 - 116 and 2C1 - 216 upgrade to z990 Models A08, B16, C24 and D32.]

Customers can also take advantage of Capacity Upgrade on Demand (CUoD), Customer Initiated Upgrade (CIU), and Capacity BackUp (CBU), which are described later in the document.
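The STI budget explains why a fully configured three-cage system needs at least a B16. A quick arithmetic sketch:

```python
STIS_PER_BOOK = 12
STIS_PER_CAGE = 7    # seven STIs drive the 28 channel slots of one I/O cage
CAGES_MAX = 3

stis_needed = STIS_PER_CAGE * CAGES_MAX
print(stis_needed)                         # 21 STIs for three full cages

# An A08 (one book) has 12 STIs; a B16 (two books) has 24, which suffices:
print(1 * STIS_PER_BOOK >= stis_needed)    # False
print(2 * STIS_PER_BOOK >= stis_needed)    # True
```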
z990 and z900 Performance Comparison
The performance design of the z/Architecture enables the entire server to support a new standard of performance for applications through expanding upon a balanced system approach. As CMOS technology has been enhanced to support not only additional processing power, but also more engines, the entire server is modified to support the increase in processing power. The I/O subsystem supports a greater amount of bandwidth through internal changes, thus providing for larger and quicker data movement into and out of the server. Support of larger amounts of data within the server required improved management of storage configurations, made available through integration of the software operating system and hardware support of 64-bit addressing. The combined balanced system effect allows for increases in performance across a broad spectrum of work. However, due to the increased flexibility in the z990 model structure and resource management in the system, it is expected that there will be larger performance variability than has been previously seen by our traditional customer set. This variability may be observed in several ways. The range of performance ratings across the individual LSPR workloads is likely to have a larger spread than past processors. There will also be more performance variation of individual LPAR partitions, as the impact of fluctuating resource requirements of other partitions can be more pronounced with the increased number of partitions and additional CPs available on the z990. The customer impact of this increased variability will be seen as increased deviations of workloads from single-number-metric based factors such as MIPS, MSUs and CPU time chargeback algorithms. It is important to realize the z990 has been optimized to run many workloads at high utilization rates.
It is also important to note that the LSPR workloads for z990 have been updated to reflect more closely our customers' current and growth workloads. The traditional TSO LSPR workload is replaced by a new, heavy Java technology-based online workload referred to as Trade2-EJB (a stock trading application). The traditional CICS®/DB2® LSPR online workload has been updated to have a Web front-end which then connects to CICS. This updated workload is referred to as WEB/CICS/DB2 and is representative of customers who Web-enable access to their legacy applications. Continuing in the LSPR for z990 will be the legacy online workload, IMS, and two legacy batch workloads, CB84 and CBW2. The z990 LSPR will provide performance ratios for individual workloads as well as a "default mixed workload" which is used to establish single-number metrics such as MIPS, MSUs and SRM constants. The z990 default mixed workload will be composed of equal amounts of five workloads: Trade2-EJB, WEB/CICS/DB2, IMS, CB84 and CBW2. Additionally, the z990 LSPR will rate all z/Architecture processors running in LPAR mode and 64-bit mode. The existing z900 processors have all been re-measured using the new workloads, all running in LPAR mode and 64-bit mode.
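Because the default mix weights the five workloads equally, its single-number metric is a plain average of the per-workload ratios. A sketch with hypothetical ratios (illustrative values, not LSPR data):

```python
# Hypothetical per-workload capacity ratios for one z990 model
# relative to a base machine (not actual LSPR measurements).
ratios = {"Trade2-EJB": 1.58, "WEB/CICS/DB2": 1.55, "IMS": 1.50,
          "CB84": 1.53, "CBW2": 1.54}

default_mixed = sum(ratios.values()) / len(ratios)
print(round(default_mixed, 2))  # 1.54
```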
Using the new LSPR ‘default mixed workload’, and with all
processors executing in 64-bit and LPAR mode, the follow-
ing results have been estimated:
• Comparing a one-way z900 Model 2C1 to a z990 model
with one CP enabled, it is estimated that the z990 model
has 1.52 to 1.58 times the capacity of the 2C1.
• Comparing an 8-way z900 Model 2C8 to a z990 model
with eight CPs enabled, it is estimated that the z990
model has 1.48 to 1.55 times the capacity of the 2C8.
• Comparing a 16-way z900 Model 216 to a z990 model with sixteen CPs enabled, it is estimated that the z990 model has 1.45 to 1.53 times the capacity of the 216.
• Comparing a 16-way z900 Model 216 to a z990 model with thirty-two CPs enabled, with the workload on the z990 executing in two 16-way LPARs, it is estimated that the z990 model has 2.4 to 2.9 times the capacity of the 216.

Note: Expected performance improvements are based on hardware changes. Additional performance benefits may be obtained as the z/Architecture is fully exploited.

z990 I/O SubSystem
The z990 contains an I/O subsystem infrastructure which uses an I/O cage that provides 28 I/O slots and the ability to have one to three I/O cages, delivering a total of 84 I/O slots. ESCON, FICON Express™ and OSA-Express features plug into the z990 I/O cage along with any ISC-3s, STI-2 and STI-3 distribution cards, and PCICA and PCIXCC features. All I/O features and their support cards can be hot-plugged in the I/O cage. Installation of an I/O cage remains a disruptive MES, so the Plan Ahead feature remains an important consideration when ordering a z990 system. The A08 model has 12 available STIs and so has connectivity to a maximum of 12 I/O domains, i.e. 48 I/O slots; if more than 48 I/O slots are required, a Model B16 is required. Each model ships with one I/O cage as standard in the A-Frame (the A-Frame also contains the processor CEC cage); any additional I/O cages are installed in the Z-Frame. The z990 provides a 400 percent increase in I/O bandwidth provided by the STIs.
z990 Cage Layout
[Figure: the A-Frame houses the CEC cage and the 1st I/O cage; the Z-Frame houses the 2nd and 3rd I/O cages.]
z990 Logical Channel SubSystems (LCSSs) and support for
greater than 15 Logical Partitions (LP)
In order to provide the increased channel connectivity required to support the scalability of the z990, the z990 channel I/O SubSystem delivers a breakthrough in connectivity by providing up to 4 LCSSs per CEC, each of which can support up to 256 CHPIDs with exploitation software installed. This support is provided in a way that is transparent to the programs operating in the logical partition. Each Logical Channel SubSystem may have from 1 to 256 CHPIDs and may in turn be configured with 1 to 15 logical partitions. Each Logical Partition (LPAR) runs under a single LCSS. As with previous zSeries servers, Multiple Image Facility (MIF) channel sharing as well as all other channel subsystem features are available to each Logical Partition configured to each Logical Channel SubSystem.
Physical Channel IDs (PCHIDs)
In order to accommodate the new support for up to 1024 CHPIDs introduced with the Logical Channel SubSystem (LCSS), a new Physical Channel ID (PCHID) is being introduced. The PCHID represents the physical location of an I/O feature in the I/O cage. CHPID numbers are no longer pre-assigned; it is now a customer responsibility to make this assignment via IOCP/HCD. CHPID assignment is done by associating a CHPID number with a physical location, the PCHID. It is important to note that although it is possible to have multiple LCSSs, there is still a single IOCDS to define the I/O subsystem. A new CHPID Mapping Tool is available to aid in the mapping of CHPIDs to PCHIDs. The CHPID Mapping Tool is available from Resource Link™ at ibm.com/servers/resourcelink.
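As a concrete sketch of how CHPIDs are tied to LCSSs and PCHIDs, an abbreviated IOCP fragment might look as follows. This is illustrative only: the partition names, CHPID numbers and PCHID values are hypothetical, and IOCP column and continuation formatting is omitted.

```text
* Define two Logical Channel SubSystems and their logical partitions
RESOURCE PARTITION=((CSS(0),(LPAR01,1),(LPAR02,2)),(CSS(1),(LPAR11,1)))

* Associate CHPID 80 in CSS 0 with the physical channel at PCHID 140
* (a FICON Express channel in native FICON mode, shared via MIF)
CHPID PATH=(CSS(0),80),SHARED,PCHID=140,TYPE=FC

* An internal (HiperSockets) channel defined to both CSSs; internal
* channel types do not occupy a physical I/O slot, hence no PCHID
CHPID PATH=(CSS(0,1),F0),SHARED,TYPE=IQD
```

The key point the fragment illustrates is that the customer now chooses the CHPID number and binds it to a PCHID, rather than receiving a pre-assigned number.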
[Figure: z990 Logical Channel SubSystems – up to four LCSSs (LCSS0–LCSS3), each with up to 256 CHPIDs, supporting up to 30 logical partitions in total. A single IOCP/IOCDS defines the configuration, and HCD maps the CHPIDs in each LCSS to Physical Channels (PCHIDs). Note: Crypto no longer requires a CHPID.]
z990 Channels and I/O Connectivity
Logical Channel SubSystem (LCSS) Spanning
The concept of spanning channels provides the ability for a channel to be configured to multiple LCSSs, so that it may be transparently shared by any or all of the logical partitions in those LCSSs. Normal Multiple Image Facility (MIF) sharing of a channel is confined to a single LCSS. The z990 supports the spanning of the following channel types: IC,
2320) is the latest zSeries implementation of the Fibre Channel Architecture. The FICON Express card has two links and can achieve improved performance over the previous generation FICON channel card. For example, attached to a 100 MBps link (1 Gbps), a single FICON Express feature configured as a native FICON channel is capable of supporting up to 7,200 I/O operations per second (with the channel 100% utilized) and an aggregate total throughput of 120 MBps on z990.
With 2 Gbps links, customers may expect up to 170 MBps of total throughput. The 2 Gbps link data rates are applicable to native FICON and FCP channels on zSeries only and, for full benefit, require 2 Gbps-capable devices as well. Customers can leverage this additional bandwidth capacity to consolidate channels and reduce configuration complexity, infrastructure costs, and the number of channels that must be managed. Please note that no additional hardware or code is needed to obtain 2 Gbps links; the functionality was incorporated in all zSeries with the March 2002 LIC. The link data rate is auto-negotiated between server and devices.
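The channel-consolidation arithmetic behind the throughput figures above can be sketched as follows; the 600 MBps aggregate workload is an invented example, not a figure from this document.

```python
import math

def channels_needed(workload_mbps: float, per_channel_mbps: float) -> int:
    """Minimum number of channels needed to carry an aggregate workload."""
    return math.ceil(workload_mbps / per_channel_mbps)

# Figures quoted in the text: ~120 MBps aggregate per channel at 1 Gbps,
# up to ~170 MBps at 2 Gbps.
workload = 600  # MBps, hypothetical aggregate demand
print(channels_needed(workload, 120))  # channels at 1 Gbps
print(channels_needed(workload, 170))  # channels at 2 Gbps
```

For this hypothetical workload the 2 Gbps rate saves one channel, which is the kind of configuration-simplification the text describes.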
Flexibility - Three channel types supported
The FICON Express features support three different channel types: 1) FCV mode for FICON Bridge channels, 2) FC mode for native FICON channels (including the FICON CTC function), and 3) FCP mode for Fibre Channel Protocol (FCP) channels. Support for FCP devices means that zSeries servers are capable of attaching to select Fibre Channel switches/directors and FCP/SCSI disks and may access these devices from Linux on zSeries; new with z/VM Version 5 Release 1, z/VM itself can be installed on and operated from a SCSI disk.
Distance
All channels defined on FICON Express LX channel card features at 1 Gbps link data rates support a maximum unrepeated distance of up to 10 km (6.2 miles, or up to 20 km via RPQ, or up to 100 km with repeaters) over nine micron single mode fiber, and up to 550 meters (1,804 feet) over 50 or 62.5 micron multimode fiber through Mode Conditioning Patch (MCP) cables. At 2 Gbps link speeds, FICON Express LX channel card features support up to 10 km (6.2 miles, or up to 12 km via RPQ, or up to 100 km with repeaters) over nine micron single mode fiber. At 2 Gbps link speeds, Mode Conditioning Patch (MCP) cables on 50 or 62.5 micron multimode fiber are not supported. The maximum unrepeated distances for 1 Gbps links defined on the FICON Express SX channel cards are up to 500 meters (1,640 feet) and 250 meters (820 feet) for 50 and 62.5 micron multimode fiber, respectively. The maximum unrepeated distances for 2 Gbps links defined on the FICON Express SX channel cards are up to 300 meters and 120 meters for 50 and 62.5 micron multimode fiber, respectively. The FICON Express channel cards are designed to reduce the data droop effect that made long distances not viable for ESCON. This distance capability is becoming increasingly important as enterprises move toward remote I/O, vaulting for disaster recovery and Geographically Dispersed Parallel Sysplex™ for availability.
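The unrepeated-distance rules above can be condensed into a simple lookup; the figures are transcribed from the text, and the helper function is illustrative only.

```python
# Maximum unrepeated distance in meters for FICON Express links,
# keyed by (transceiver, link speed in Gbps, fiber type).
# "multimode-MCP" = 50/62.5 micron multimode fiber via Mode
# Conditioning Patch cables; RPQ/repeated distances are noted inline.
MAX_UNREPEATED_M = {
    ("LX", 1, "9um-single-mode"): 10_000,   # 20 km via RPQ, 100 km repeated
    ("LX", 1, "multimode-MCP"): 550,
    ("LX", 2, "9um-single-mode"): 10_000,   # 12 km via RPQ, 100 km repeated
    # MCP cables are not supported at 2 Gbps on LX, so no entry exists
    ("SX", 1, "50um-multimode"): 500,
    ("SX", 1, "62.5um-multimode"): 250,
    ("SX", 2, "50um-multimode"): 300,
    ("SX", 2, "62.5um-multimode"): 120,
}

def max_distance_m(transceiver, gbps, fiber):
    """Return the maximum unrepeated distance, or None if unsupported."""
    return MAX_UNREPEATED_M.get((transceiver, gbps, fiber))

print(max_distance_m("SX", 2, "50um-multimode"))
```

Note the general pattern: doubling the link speed shortens the supported multimode distances, while single mode LX distances are unchanged.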
Shared infrastructure
FICON (FC-SB-2, Fibre Channel Single-Byte Command Code Set-2) has been adopted by INCITS (International Committee for Information Technology Standards) as a standard within the Fibre Channel Architecture. Using open connectivity standards leads to shared I/O fiber cabling and switch infrastructures, facilitated data sharing, storage management and SAN implementation, and integration between the mainframe and UNIX®/Intel® technologies.
Native FICON Channels
Native FICON channels and devices can help to reduce bandwidth constraints and channel contention to enable easier server consolidation, new application growth, large business intelligence queries and exploitation of e-business. Currently, the IBM TotalStorage® Enterprise Storage Server® (ESS) Models F10, F20 and 800 have two host adapters to support native FICON. These host adapters each have one port per card and can be either FC 3021 for long wavelength or FC 3032 for short wavelength on the F10/F20, or FC 3024 for long wavelength and FC 3025 for short wavelength on the 800. All three models can support up to 16 FICON ports per ESS. The Model 800 is 2 Gb link capable. The IBM TotalStorage Enterprise Tape Controller 3590 Model A60 provides up to two FICON interfaces, which can coexist with ESCON on the same box. The Enterprise Tape Controller 3592-J70 provides up to four FICON interfaces, which can coexist with ESCON on the same box. The 3592-J70 is designed to provide up to 1.5 times the throughput of the Model A60. Customers can utilize IBM's highest capacity, highest performance tape drive to support their new business models.
Many Fibre Channel directors provide dynamic connectiv-
ity to native FICON control units. The IBM 2032 models
001, 064 and 140 (resell of the McDATA ED-5000, and
Intrepid 6000 Series Directors) are 32-, 64- and 140-port
high availability directors. The IBM 2042 Models 001, 128
and 256 (resell of the CNT FC/9000 Directors) are 64-,
128- and 256-port high availability directors. All have fea-
tures that provide interface support to allow the unit to be
managed by System Automation for OS/390. The McDATA
Intrepid 6000 Series Directors and CNT FC/9000 Directors
support 2 Gbps link data rates as well.
The FICON Express features now support attachment to
the IBM M12 Director (2109-M12). The IBM M12 Director
supports attachment of FICON Express channels on the
z990 via native FICON (FC CHPID type) and Fibre Channel
Protocol (FCP CHPID type) supporting attachment to SCSI
disks in Linux environments.
Wave Division Multiplexors and Optical Amplifiers that support 2 Gbps FICON Express links are: Cisco Systems ONS 15530 and 15540 ESP (LX, SX) and optical amplifier (LX, SX), Nortel Networks Optera Metro 5100, 5200 and 5300E and optical amplifier, the ADVA Fiber Service Platform (FSP) 2000 system and the IBM 2029 Fiber Saver.
The raw bandwidth and distance capabilities that native FICON end-to-end connectivity has to offer make it of interest for anyone with a need for high performance, large data transfers or enhanced multi-site solutions.
[Figure: FICON connectivity – FICON Bridge channels attach through the 9032 Model 5 ESCON Director (FICON Bridge cards) to ESCON control units; native FICON channels attach through 2032 directors (32, 64 or 140 ports) to the ESS Models F10/F20/800 and the 3590 A60 Enterprise Tape Controller.]
FICON CTC function
Native FICON channels support channel-to-channel (CTC) on the z990, z890, z900 and z800. G5 and G6 servers can connect to a zSeries FICON CTC as well. This FICON CTC connectivity will increase bandwidth between G5, G6, z990, z890, z900 and z800 systems.
Because the FICON CTC function is included as part of the native FICON (FC) mode of operation on zSeries, FICON CTC is not limited to intersystem connectivity (as is the case with ESCON), but will also support multiple device definitions. For example, ESCON channels that are dedicated as CTC cannot communicate with any other device, whereas native FICON (FC) channels are not dedicated to CTC only. A native FICON channel can support both device and CTC mode definitions concurrently, allowing for greater connectivity flexibility.
FICON Support for Cascaded Directors
Native FICON (FC) channels now support cascaded directors. This support is for a single-hop configuration only. Two-director cascading requires a single-vendor high integrity fabric; directors must be from the same vendor, since cascaded architecture implementations can be unique. This type of cascaded support is important for disaster recovery and business continuity solutions because it can help provide high availability and extended distance connectivity, and (particularly with the implementation of 2 Gbps Inter Switch Links) has the potential for fiber infrastructure cost savings by reducing the number of channels interconnecting the two sites.
FICON cascaded directors have the added value of high integrity connectivity. New integrity features introduced within the FICON Express channel and the FICON cascaded switch fabric aid in the detection and reporting of any miscabling actions occurring within the fabric, and can prevent data from being delivered to the wrong end point. FICON cascaded directors are offered in conjunction with IBM, CNT, and McDATA directors.
FCP Channels
zSeries supports FCP channels, switches and FCP/SCSI
disks with full fabric connectivity under Linux on zSeries
and z/VM Version 4 Release 3 and later for Linux as a
guest under z/VM. Support for FCP devices means that
zSeries servers will be capable of attaching to select FCP/
SCSI devices and may access these devices from Linux
on zSeries. This expanded attachability means that enter-
prises have more choices for new storage solutions, or
may have the ability to use existing storage devices, thus
leveraging existing investments and lowering total cost of
ownership for their Linux implementation.
[Figure: two-site director topologies – in a non-cascaded topology each CEC connects to directors in both sites; in a cascaded topology each CEC connects to local directors only, and with Inter Switch Links (ISLs) less fiber cabling may be needed for cross-site connectivity.]
FICON Bridge Channel
Introduced first on the 9672 G5 processors, the FICON Bridge (FCV) channel is still an effective way to use FICON bandwidth with existing ESCON control units. FICON Express LX channel cards in FCV (FICON Converted) mode of operation can attach to the 9032 Model 005 ESCON Director through the use of a director bridge card. Up to 16 bridge cards are supportable on a single 9032 Model 005, with each card capable of sustaining up to eight concurrent ESCON data transfers. 9032 Model 005 ESCON Directors can be field upgraded at no charge to support the bridge cards, and bridge cards and ESCON cards can coexist in the same director.
For details of supported FICON and FCP attachments, access Resource Link at ibm.com/servers/resourcelink and, in the Planning section, go to z890/z990 I/O Connection information.
The support for FCP channels is for Linux on zSeries and z/VM 4.3 and later for Linux as a guest under z/VM. Linux may be the native operating system on the zSeries server (note that the z990 runs in LPAR mode only), it can run in an LPAR, or it can operate as a guest under z/VM 4.3 or later. The z990 now provides support for IPL of Linux guest images from appropriate FCP-attached devices.
Now, z/VM V5.1 supports SCSI FCP disks, enabling the deployment of a Linux server farm running under z/VM configured only with SCSI disks. With this support you can install, IPL, and operate z/VM from SCSI disks.
The 2 Gbps capability on the FICON Express channel cards means that 2 Gbps link speeds are available for FCP channels as well.
FCP Full fabric connectivity
FCP full fabric support means that any number of (single vendor) FCP directors/switches can be placed between the server and the FCP/SCSI device, thereby allowing many "hops" through a storage network for I/O connectivity. This support, along with 2 Gbps link capability, is being delivered together with switch vendors IBM, CNT, and McDATA. FCP full fabric connectivity enables multiple FCP switches/directors on a fabric to share links and therefore provides improved utilization of inter-site connected resources and infrastructure. Further savings may be realized in the reduction of the number of fiber optic cables and director ports.
When configured as FCP CHPID type, the z990 FICON Express features support the industry standard interface for Storage Area Network (SAN) management tools.
[Figure: FCP full fabric connectivity – multiple Fibre Channel Directors cascaded between the server and its FCP/SCSI devices.]
Open Systems Adapter-Express Features (OSA-Express)
With the introduction of the z990, its increased processing capacity, and the availability of Logical Channel SubSystems, the OSA-Express family of Local Area Network (LAN) features is also expanding, offering a maximum of up to 24 features per server versus the maximum of 12 features per server on prior generations. This expands the z990 balanced solution to increase throughput and responsiveness in an on demand operating environment. These features, combined with z/OS, OS/390, z/VM, Linux on zSeries, TPF, and VSE/ESA, can deliver a balanced system solution to increase throughput and decrease host interrupts to help satisfy your business goals.
Each of the OSA-Express features offers two ports for connectivity delivered in a single I/O slot, with up to a maximum of 48 ports (24 features) per z990. Each port uses a single CHPID and can be separately configured. For a new z990 build, you can choose any combination of OSA-Express features: the new OSA-Express Gigabit Ethernet LX or SX, the new OSA-Express 1000BASE-T Ethernet, or OSA-Express Token-Ring. The prior OSA-Express Gigabit LX and SX, the OSA-Express Fast Ethernet, and the OSA-Express Token-Ring can be carried forward on an upgrade from z900.
z990 OSA-Express 1000BASE-T Ethernet
The new OSA-Express 1000BASE-T Ethernet feature replaces the current Fast Ethernet (10/100 Mbps) feature. This new feature is capable of operating at 10, 100 or 1000 Mbps (1 Gbps) using the same copper cabling infrastructure as Fast Ethernet, making transition to this higher speed Ethernet feature a straightforward process. It is designed to support auto-negotiation and both QDIO and non-QDIO environments on each port, allowing you to make the most of your TCP/IP, SNA/APPN® and HPR environments at up to gigabit speeds.
OSA-Express Integrated Console Controller
The new Open Systems Adapter-Express Integrated Console Controller function (OSA-ICC), which is exclusive to IBM and the IBM z890 and z990 servers since it is based on the OSA-Express feature, supports the attachment of non-SNA 3270 terminals for operator console applications. Now, 3270 emulation for console session connections is integrated in the zSeries and can help eliminate the requirement for external console controllers (2074, 3174), helping to reduce cost and complexity.
When this adapter is operating at gigabit Ethernet speed it runs full duplex only. It also supports standard (1492 or 1500 byte) and jumbo (8992 byte) frames.
The new Checksum offload support on the 1000BASE-T Ethernet feature, when operating in QDIO mode at gigabit speed, is designed to offload z/OS 1.5 and Linux TCP/IP stack processing of Checksum packet headers for TCP/IP and UDP.
[Figure: OSA-Express 1000BASE-T Ethernet connectivity – in non-QDIO mode (SNA passthru, TCP/IP passthru, HPDT MPC), 10/100/1000 Mbps copper Ethernet connects through switches, hubs and routers to IP WANs/intranets, DLSw routers, and remote offices on 4/16 Mbps Token-Ring, carrying SNA DLSw, TCP/IP and native SNA traffic; in QDIO mode (TCP/IP), the same ports connect servers such as IBM eServer pSeries®/RS/6000® and IBM eServer xSeries®/Netfinity® to the Internet or an extranet, supporting TCP/IP applications, TN3270 browser access to SNA applications, and Enterprise Extender for SNA end points.]
The OSA-ICC uses one or both ports on an OSA-Express 1000BASE-T Ethernet feature with the appropriate Licensed Internal Code (LIC). The OSA-ICC is enabled using CHPID type OSC.
The OSA-ICC supports up to 120 client console sessions, either locally or remotely.
Support for this new function will be available for z/VM Version 4 Release 4 and later, z/OS Version 1 Release 3 onwards, VSE/ESA Version 2 Release 6 onwards, and TPF.
Queued Direct Input/Output (QDIO)
The OSA-Express Gigabit Ethernet, 1000BASE-T Ethernet and Token-Ring features support QDIO (CHPID type OSD), which is unique to IBM. QDIO was first introduced on the z900, in Communications Server for OS/390 2.7. Queued Direct Input/Output (QDIO), a highly efficient data transfer architecture, breaks the barriers associated with the Channel Control Word (CCW) architecture, increasing data rates and reducing CPU cycle consumption. QDIO allows an OSA-Express feature to communicate directly with the server's communications program through the use of data queues in memory. QDIO helps eliminate the use of channel programs and channel control words (CCWs), helping to reduce host interrupts and accelerate TCP/IP packet transmission.
TCP/IP connectivity is increased with the capability to allow up to a maximum of 160 IP stacks per OSA-Express port and 480 devices. This support is applicable to all the OSA-Express features available on the z990 and is provided through the Licensed Internal Code (LIC).
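The two limits quoted above are consistent with the conventional QDIO device model, in which each data connection is built from a group of three subchannel devices (read, write and data); treating that triplet as an assumption, the arithmetic is:

```python
# Assumption: each QDIO connection uses the standard device triplet
# of one read, one write, and one data device.
DEVICES_PER_QDIO_STACK = 3

max_stacks_per_port = 160  # figure quoted in the text
max_devices_per_port = max_stacks_per_port * DEVICES_PER_QDIO_STACK
print(max_devices_per_port)  # matches the 480-device figure in the text
```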
Full Virtual Local Area Network (VLAN) support is available on z990 in z/OS V1.5 Communications Server (CS) for the OSA-Express 1000BASE-T Ethernet, Fast Ethernet and Gigabit Ethernet features when configured in QDIO mode. Full VLAN is also available with z/OS V1.2 on z900 and z800 using the appropriate LIC upgrade on Fast Ethernet and Gigabit Ethernet features. Full VLAN support in a Linux on zSeries environment was delivered for QDIO mode in April 2002 for z800 and z900. z/VM V4.4 and later also exploits the VLAN technology, offering one global VLAN ID for IPv4. z/VM V5.1 provides the support for one global VLAN ID for IPv6.
z990 OSA-Express Gigabit Ethernet
The new z990 OSA-Express Gigabit Ethernet LX and Gigabit Ethernet SX features replace the z900 Gigabit Ethernet features for new build z990. The new OSA-Express GbE features have a new connector type, LC Duplex, replacing the SC Duplex connectors used on the prior z900 Gigabit Ethernet features. The new Checksum offload support on these z990 features is designed to offload z/OS V1.5 and Linux on zSeries TCP/IP stack processing of Checksum packet headers for TCP/IP and UDP.
[Figure: OSA-Express Gigabit Ethernet connectivity in QDIO mode (TCP/IP) – Gigabit Ethernet (fiber or copper) switches and routers connect pSeries/RS/6000 and xSeries/Netfinity servers, IP WANs/intranets, remote Token-Ring offices and the Internet or an extranet, supporting TCP/IP applications, TN3270 browser access to SNA applications, and Enterprise Extender for SNA end points.]
Non-QDIO operational mode
The OSA-Express 1000BASE-T Ethernet, Fast Ethernet and Token-Ring features also support the non-QDIO mode of operation (CHPID type OSE). The adapter can only be set (via the CHPID type parameter) to one mode at a time. The non-QDIO mode does not provide the benefits of QDIO; however, this support includes native SNA/APPN, High Performance Routing, TCP/IP passthrough, and HPDT MPC. The new OSA-Express 1000BASE-T Ethernet provides support for TCP/IP and SNA/APPN/HPR at up to 1 gigabit per second over the copper wiring infrastructure.
z990 OSA-Express Token-Ring
The same OSA-Express Token-Ring feature is supported on z990 and z900. This Token-Ring feature supports a range of speeds, including 4, 16 and 100 Mbps, and can operate in both QDIO and non-QDIO modes.
Note: OSA-Express 155 ATM and OSA-2 FDDI are no longer supported. If ATM or FDDI support is still required, a multiprotocol switch or router with the appropriate network interface (for example, 1000BASE-T Ethernet, GbE LX or GbE SX) can be used to provide connectivity between the LAN and the ATM network or FDDI LAN.
Server to User connections
A key strength of OSA-Express and the associated Communications Server protocol support is the ability to accommodate the customer's attachment requirements, spanning combinations of TCP/IP and SNA applications and devices. Customers can use TCP/IP connections from the
remote site to either their TCP/IP or SNA applications on zSeries and S/390 by configuring OSA-Express with QDIO and using either direct TCP/IP access or appropriate SNA-to-IP integration technologies, such as TN3270 Server and Enterprise Extender, for access to SNA applications. Customers who require the use of native SNA-based connections from the remote site can use a TCP/IP or SNA transport to the data center and then connect into zSeries and S/390 using appropriate SNA support on OSA-Express features configured in non-QDIO mode.
LPAR Support of OSA-Express
For z990 customers, or customers who use the Processor Resource/Systems Manager (PR/SM) capabilities, IBM offers the Multiple Image Facility (MIF), allowing the sharing of physical channels by any number of LPARs. Since a port on an OSA-Express feature operates as a channel, sharing of an OSA-Express port is done using MIF. The LPARs are defined in the Hardware Configuration Definition (HCD). Depending upon the feature, and how it is defined, SNA/APPN/HPR and TCP/IP traffic can flow simultaneously through any given port.
IPv6 Support
IPv6 requires the use of an OSA-Express adapter running
in QDIO mode and is supported only on OSA-Express
features on zSeries at driver level 3G or above. IPv6 is
supported on OSA-Express for zSeries Fast Ethernet,
1000BASE-T Ethernet and Gigabit Ethernet when running
with Linux on zSeries, z/VM V5.1, and z/OS V1.4 and later.
z/VM V4.4 provided IPv6 support for guest LANs.
Performance enhancements for virtual servers
Two important networking technology advancements are announced in z/VM V4.4 and Linux on z990:
• The high performance adapter interrupt handling first introduced with HiperSockets is now available for both OSA-Express in QDIO mode (CHPID=OSD) and FICON Express (CHPID=FCP). This advancement provides a more efficient technique for I/O interruptions designed to reduce path lengths and overhead in both the host operating system and in the adapter. This benefits OSA-Express TCP/IP support in both Linux for zSeries and z/VM.
• The z990's support of virtual machine technology has been enhanced to include a new performance assist for virtualization of adapter interruptions. This new z990 performance assist is available to V=V guests (pageable guests) that support QDIO on z/VM V4.4 and later. The deployment of adapter interruptions improves efficiency and performance by reducing z/VM Control Program overhead.
HiperSockets
HiperSockets, a function unique to the zSeries, provides a "TCP/IP network in the server" that allows high-speed any-to-any connectivity among virtual servers (TCP/IP images) and LPARs within a zSeries system without any physical cabling.
• Nondisruptive CP, ICF, IFL, and zAAP upgrades within
minutes
• Dynamic upgrade of all I/O cards in the I/O Cage
• Dynamic upgrade of spare installed memory
Plan Ahead and Concurrent Conditioning
Concurrent Conditioning configures a system for hot plugging of I/O based on a future specified target configuration. Concurrent Conditioning of the zSeries I/O is minimized by the fact that all I/O cards plugging into the zSeries I/O cage are hot pluggable. This means that the only I/O to be conditioned is the I/O cage itself. The question of whether or not to concurrently condition a cage is a very important consideration, especially with the rapid change in the IT environment (e-business) as well as the technology. Migration to FICON Express or additional OSA-Express networking is exceptionally easy and nondisruptive with the appropriate microcode load and if the cage space is available.
The z990 supports concurrent memory upgrade. This capability allows a processor's memory to be increased without disrupting the processor operation. To take advantage of this capability, a customer should not plan processor storage on the 16 or 32 GB increments. A customer with a Model A08, for example, with 24 GB of storage will be able to concurrently upgrade to 32 GB, but will not be able to get to the next increment of 40 GB without a disruption.
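The concurrency rule in the Model A08 example can be sketched as a small check; the 32 GB "physically installed" figure is inferred from the example, since an upgrade is concurrent only while it stays within the memory cards already present.

```python
def upgrade_is_concurrent(current_gb: int, target_gb: int,
                          physically_installed_gb: int) -> bool:
    """A memory upgrade is concurrent only while the target stays within
    the memory already physically installed; growing beyond that means
    adding cards, which is disruptive."""
    return current_gb <= target_gb <= physically_installed_gb

# The example from the text: 24 GB enabled on (assumed) 32 GB installed.
print(upgrade_is_concurrent(24, 32, 32))  # concurrent upgrade to 32 GB
print(upgrade_is_concurrent(24, 40, 32))  # 40 GB requires a disruption
```

This is also why the text advises against planning storage exactly on the physical increments: it leaves no installed-but-unused memory to grow into concurrently.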
Plan Ahead for PUs is done by ordering a "more book" model. For example, if a customer needs 5 PUs initially but plans to grow to 10 PUs, he should order not an A08, but a Model B16 with only 5 PUs initially active.
The Plan Ahead process can easily identify the customer configuration that is required to meet future needs. The result of Concurrent Conditioning is a flexible IT infrastructure that can accommodate unpredictable growth in a low risk, nondisruptive way. Depending on the required Concurrent Conditioning, there should be minimal cost associated with dormant z990 capacity. This creates an attractive option for businesses to quickly respond to changing environments, bringing new applications online or growing existing applications without disrupting users.
z990 Server Capacity BackUp (CBU)
Capacity BackUp (CBU) is offered with the z990 servers to provide reserved emergency backup CPU capacity for situations where customers have lost capacity in another part of their establishment and want to recover by adding reserved capacity on a designated z990 server. A CBU system normally operates with a "base" CP configuration and with a preconfigured number of additional Processor Units (PUs) reserved for activation in case of an emergency.
The z990 technology is well suited to providing capacity backup. The reserved CBU processing units are on the same technology building block, the MCM, as the regular CPs. Therefore, a single server can support two diverse configurations with the same MCM. For CBU purposes, the Models A08, B16, C24 and D32 can scale from a 1-way to a 32-way, with the purpose of providing capacity backup.
The "base" CBU configuration must have sufficient memory and channels to accommodate the potential needs of the larger CBU target machine. When capacity is needed in an emergency, the primary operation performed is activating the emergency CBU configuration, with the reserved PUs added into the configuration as CPs.
Customers who have an active Remote Support Facility connection can perform a CBU upgrade automatically, within a matter of minutes, right from their CBU machine's Hardware Management Console (HMC). For more information on how a CBU upgrade can be activated automatically, please refer to the z990 Capacity BackUp User's Guide found on IBM Resource Link.
The z990 supports concurrent CBU downgrade. This function enables a Capacity BackUp server to be returned to its normal configuration without an outage (i.e. Power-on Reset).
Automatic Enablement of CBU for Geographically
Dispersed Parallel Sysplex (GDPS)
The intent of GDPS support for CBU is to enable auto-
matic management of the reserved PUs provided by the
CBU feature in the event of a processor failure and/or a
site failure. Upon detection of a processor failure or site
failure, GDPS will activate CBU to dynamically add PUs to
the processors in the takeover site to acquire processing
power required to restart mission-critical production work-
loads. GDPS-CBU management helps to minimize manual
customer intervention and the potential for errors, which
can help reduce the outage time for critical workloads from
hours to minutes. Similarly, GDPS-CBU management can
also automate the process of dynamically returning the
reserved CPs when the temporary period has expired.
z990 Server Customer Initiated Upgrade (CIU)
Customer Initiated Upgrade (CIU) is the capability to initiate a processor and/or memory upgrade, when spare PUs or installed unused memory are available, via the Web using IBM Resource Link. Customers are able to download and apply the upgrade using functions on the Hardware Management Console via the Remote Support Facility. This unique and important function for zSeries gives the customer greater control and ability in adding capacity to the system to meet resource requirements for unpredictable e-business workloads and for applications which are difficult to size. CIU is a low-risk, tried-and-tested function.
Transparent Sparing
z990 offers a 12 PU MCM with 2 PUs reserved as spares.
In the case of processor failure, these spares are used for
transparent sparing.
Enhanced Dynamic Memory Sparing
The z990 has enhanced this robust recovery design with 16 times more chips available for sparing. This will virtually eliminate the need to replace a memory card due to DRAM failure.
Enhanced Storage Protect Keys: z990 has enhanced the
memory storage protect key design by adding a third key
array to each memory card. The arrays are parity checked
and employ a Triple Voting strategy to assure accuracy.
This will reduce the need for memory card replacement
due to key array failure.
ESCON Port Sparing: The ESCON 16-port I/O card is
delivered with one unused port dedicated for sparing in
the event of a port failure on that card. Other unused ports
are available for nondisruptive growth of ESCON channels.
Concurrent Maintenance
• Concurrent Service for I/O Cards: All the cards which plug into the new I/O Cage can be added and replaced concurrently with system operation. This virtually eliminates any need to schedule an outage to service or upgrade the I/O subsystem on this cage.
• Upgrade for Coupling Links: z990 has concurrent maintenance for the ISC-3 adapter card. Also, Coupling Links can be added concurrently. This eliminates a need for scheduled downtime in the demanding sysplex environment.
• Cryptographic Cards: The PCIXCC and PCICA cards plug into the I/O cage and can be added or replaced concurrently with system operation.
• Redundant Cage Controllers: The Power and Service Control Network features redundant Cage Controllers for logic and power control. This design enables nondisruptive service to the controllers and virtually eliminates customer scheduled outages.
• Auto-Switchover for Service Element: The z990 has two Service Elements. In the event of a failure on the primary SE, the switchover to the backup is handled automatically. There is no need for any intervention by the customer or Service Representative.

Concurrent Capacity Backup Downgrade (CBU Undo)
This function allows the customer to downgrade the disaster backup machine to its normal configuration without requiring a Power-on Reset (POR).

Concurrent Memory Upgrade
This function allows adding memory concurrently, up to the maximum amount physically installed.

Parallel Sysplex Cluster Technology
Parallel Sysplex clustering was designed to bring the power of parallel processing to business-critical zSeries and S/390 applications. A Parallel Sysplex cluster consists of up to 32 z/OS and/or OS/390 images coupled to one or more Coupling Facilities (CFs or ICFs) using high-speed specialized links for communication. The Coupling Facilities, at the heart of the Parallel Sysplex cluster, enable high-speed, read/write data sharing and resource sharing among all the z/OS and OS/390 images in a cluster. All images are also connected to a Sysplex Timer® to address time synchronization.
Parallel Sysplex Resource Sharing enables multiple system
resources to be managed as a single logical resource
shared among all of the images. Some examples of
resource sharing include Automatic Tape Switching (ATS
star), GRS “star,” and Enhanced Catalog Sharing; all of
which provide simplified systems management, increased performance and/or scalability. For more detail, please see the S/390 Value of Resource Sharing white paper, GF22-5115, on the Parallel Sysplex home page at ibm.com/servers/eserver/zseries/pso.
Although there is significant value in a single footprint and multi-footprint environment with resource sharing, those customers looking for high availability must move on to a database data sharing configuration. With the Parallel Sysplex environment, combined with the Workload Manager and CICS TS or IMS TM, incoming work can be dynamically routed to the z/OS or OS/390 image most capable of handling the work. This dynamic workload balancing, along with the capability of read/write access to data from anywhere in the Parallel Sysplex cluster, provides the scalability and availability that businesses demand today. When configured properly, a Parallel Sysplex cluster has no single point of failure and can provide customers with near continuous application availability over planned and unplanned outages. For detailed information on IBM’s Parallel Sysplex technology, visit our Parallel Sysplex home page at ibm.com/servers/eserver/zseries/pso.
Coupling Facility Configuration Alternatives
IBM offers different options for configuring a functioning Coupling Facility:
• Standalone Coupling Facility: z900 Model 100, z800 Model 0CF and 9672-R06 models provide a physically isolated, totally independent CF environment. There is no unique standalone coupling facility model offered with the z990. Customers can achieve the same physically isolated environment as on prior mainframe families by ordering a z990 with PUs characterized as ICFs. There are no software charges associated with such a configuration. An ICF or CF partition sharing a server with any operating system images not in the sysplex acts like a logical standalone CF.
• Internal Coupling Facility (ICF): Customers considering clustering technology can get started with Parallel
Sysplex technology at a lower cost by using an ICF
instead of purchasing a standalone Coupling Facility.
An ICF feature is a processor that can only run Coupling
Facility Control Code (CFCC) in a partition. Since CF
LPARs on ICFs are restricted to running only CFCC,
there are no IBM software charges associated with
ICFs. ICFs are ideal for Intelligent Resource Director and
resource sharing environments as well as for data sharing environments where System Managed CF Structure
Duplexing is exploited.
• Coupling Facility partition on a z990, z900, z800 or 9672 server using standard LPAR: A CF can be configured to run in either a dedicated or shared CP partition. IBM software charges apply. This may be a good alternative for test configurations that require very little CF processing resource or for providing hot-standby CF backup using the Dynamic Coupling Facility Dispatching function.
A Coupling Facility can be configured to take advantage of a combination of different Parallel Sysplex capabilities:
• Dynamic CF Dispatch: Prior to the availability of the Dynamic CF Dispatch algorithm, shared CF partitions could only use the “active wait” algorithm. With active wait, a CF partition uses all of its allotted time slice, whether it has any requests to service or not. The optional Dynamic CF Dispatch algorithm puts a CF partition to “sleep” when there are no requests to service, and the longer there are no requests, the longer the partition sleeps. Although less responsive than the active wait algorithm, Dynamic CF Dispatch will conserve CP or ICF resources when a CF partition has no work to process and will make the resources available to other partitions sharing the resource. Dynamic CF Dispatch can be used for test CFs and also for creating a hot-standby partition to back up an active CF.
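The contrast between the two dispatch strategies can be sketched as follows (an illustrative Python model; the backoff constants and data shapes are invented for this sketch and are not CFCC internals):

```python
def run_cf_partition(intervals, initial_sleep=1, max_sleep=128):
    """Simulate Dynamic CF Dispatch over a timeline.

    `intervals` is a list where each entry is the number of requests
    arriving in that interval.  With active wait the partition would
    poll through every interval regardless; here it backs off while
    idle.  Returns the sleep interval (ms) chosen after each interval.
    """
    sleep = initial_sleep
    chosen = []
    for arriving in intervals:
        if arriving:
            # work arrived: service it and become responsive again
            sleep = initial_sleep
        else:
            # idle: sleep longer, yielding the shared CP or ICF
            sleep = min(sleep * 2, max_sleep)
        chosen.append(sleep)
    return chosen
```

An idle run backs off geometrically (2, 4, 8 … ms) until a request resets the interval, which is how the partition trades a little responsiveness for far less processor consumption.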
33
• Dynamic ICF Expansion: Dynamic ICF Expansion provides extra CF capacity when there are unexpected peaks in the workload or in case of loss of CF capacity in the cluster.
– ICF Expansion into shared CPs: A CF partition running with dedicated ICFs that needs processing capacity beyond what is available from its dedicated ICFs can “grow” into the shared pool of application CPs being used to execute S/390 applications on the same server.
– ICF Expansion into shared ICFs: A CF partition running with dedicated ICFs can “grow” into the shared pool of ICFs in case the dedicated ICF capacity is not sufficient. The resulting partition, an “L-shaped” LPAR, will be composed of both shared ICF and dedicated ICF processors, enabling more efficient utilization of the installed ICF capacity.
System-Managed CF Structure Duplexing
System-Managed Coupling Facility Structure Duplexing provides a general purpose, hardware-assisted, easy-to-exploit mechanism for duplexing CF structure data. This provides a robust recovery mechanism for failures such as loss of a single structure or CF or loss of connectivity to a single CF, through rapid failover to the backup instance of the duplexed structure pair.
Benefits of System-Managed CF Structure Duplexing include:
• Availability
Faster recovery of structures by having the data already there in the second CF, dramatically reducing the time and processing required for structure rebuilds. System-Managed Duplexing also provides basic recovery for many structures that have no simple means to recover data for failed structures, failed CFs, and losses of CF connectivity.
• Manageability and Usability
A consistent procedure to set up and manage duplexed structures across multiple exploiters.
• Reliability
A common framework requires less effort on behalf of the exploiters, resulting in more reliable subsystem code.
• Cost Benefits
Facilitates the use of non-standalone CFs (e.g. ICFs) for data sharing environments in addition to resource sharing environments.
• Flexibility
The diagram below represents creation of a duplexed copy of the structure within a System-Managed CF Duplexing configuration.
[Diagram: two z800/z900/z990/G5/G6 servers, each with a z/OS image and an ICF, duplexing a structure between the two Coupling Facilities – a robust failure recovery capability.]
Note: An example of two systems in a Parallel Sysplex with CF Duplexing.
Customers who are interested in testing and/or deploying System-Managed CF Structure Duplexing in their environment should refer to the System-Managed CF Structure Duplexing technical paper at ibm.com/server/eserver/zseries/gm130103.html to understand the performance and other considerations of using this feature.
Parallel Sysplex Coupling Connectivity
The Coupling Facilities communicate with z/OS and OS/390 images in the Parallel Sysplex environment over specialized high-speed links. For availability purposes, it is recommended that there be at least two links connecting each z/OS or OS/390 image to each CF in a Parallel Sysplex cluster. As processor performance increases, it is important to also use faster links so that link performance does not become constrained. The performance, availability and distance requirements of a Parallel Sysplex environment are the key factors that will identify the appropriate connectivity option for a given configuration.
Parallel Sysplex coupling links on the zSeries have been enhanced with the introduction of Peer Mode. When connecting a zSeries server (z990/z900/z800) to a z800 Model 0CF, a z900 Model 100 or a zSeries ICF, the links can be configured to operate in Peer Mode. This allows for higher data transfer rates to and from the Coupling Facilities. In Peer Mode, the fiber-optic single-mode coupling link (ISC-3) provides 200 MBps capacity up to 10 km and 100 MBps up to 20 km, the ICB-3 link provides 1 GBps peak capacity, the ICB-4 for z990-to-z990 connection provides 2.0 GBps, and the IC-3 link provides 1.25 GBps capacity. Additional Peer Mode
benefits are obtained by enabling the link to be MIFed between z/OS (or OS/390) and CF LPARs. The peer link acts simultaneously as both a CF Sender and CF Receiver link, reducing the number of links required. Larger and more numerous data buffers and improved protocols also improve long distance performance. For connectivity to 9672s, zSeries ISC-3 CF links can be configured to run in Compatibility Mode with the same characteristics as links on the 9672, at 100 MBps. The z900 and z990 also support ICB-2 links for connectivity to 9672s. The ICB coupling link speeds described above are theoretical maximums.
GDPS/PPRC Cross-Site Parallel Sysplex Distance Extended to 100 km
When using a Dense Wave Division Multiplexor (DWDM), it will be possible via an RPQ to configure GDPS/PPRC or a multi-site Parallel Sysplex with up to 100 km between the two sites. The immediate advantage of this extended distance is to potentially decrease the risk that the same disaster will affect both sites, thus providing the ability for customers to recover their production applications at another site. Support for the External Time Reference (ETR) links and the InterSystem Channel (ISC-3) links has been increased from the current capability of 50 km to an extended capability of up to 100 km. The extended distance support for ETR and ISC-3 links is now consistent with other cross-site technologies that already support 100 km, such as FICON, Peer-to-Peer Remote Copy (PPRC), and Peer-to-Peer Virtual Tape Server (PtP VTS). It should be noted that the maximum distance between a pair of 9037 Sysplex Timers in an Expanded Availability configuration remains at 40 km. Therefore, to achieve the extended distance of 100 km between the two sites, one of the options to be considered is locating one of the Sysplex Timers in an intermediary site that is less than 40 km from one of the two sites (as can be seen in the diagram below). Other potential options can be evaluated when the RPQ is ordered.
[Diagram: GDPS/PPRC two-site ETR configuration – one ETR link per CPC routed up to 100 km over DWDMs on Route A and Route B (with an ETR hut with amplifiers), and a pair of 9037 Sysplex Timers connected by 2 CLO links, the midspan 9037 located within 40 km of Site 1.]
Note: Midspan 9037 can also be located within 40 km of site 2 or on Route B. All ETR and CLO links are provisioned as 1 channel per wavelength.
z990 Theoretical Maximum Coupling Link Speed

Connectivity Options | z990 ISC-3           | z990 ICB-2             | z990 ICB-3                                    | z990 ICB-4
G5/G6 ISC            | 1 Gbps Compatibility | n/a                    | n/a                                           | n/a
z800/z900 ISC-3      | 2 Gbps Peer Mode     | n/a                    | n/a                                           | n/a
z890/z990 ISC-3      | 2 Gbps Peer Mode     | n/a                    | n/a                                           | n/a
G5/G6 ICB            | n/a                  | 333 MBps Compatibility | n/a                                           | n/a
z900 ICB-2           | n/a                  | Not Supported          | n/a                                           | n/a
z990 ICB-2           | n/a                  | Not Supported          | n/a                                           | n/a
z800/z900 ICB-3      | n/a                  | n/a                    | 1 GBps Peer Mode                              | n/a
z890/z990 ICB-3      | n/a                  | n/a                    | 1 GBps Peer Mode (recommendation: use ICB-4s) | n/a
z890/z990 ICB-4      | n/a                  | n/a                    | n/a                                           | 2 GBps Peer Mode
• ISC-3. InterSystem Channel-3 provides the connectivity required for resource or data sharing between the Coupling Facility and the systems directly attached to it. ISC-3s are point-to-point connections that require a unique channel definition at each end of the channel. ISC-3 channels operating in Peer Mode provide connection between zSeries (z800/z900/z990) general purpose models and zSeries Coupling Facilities. ISC-3 channels operating in Compatibility Mode provide connection between zSeries and HiPerLink (ISC-2) channels on 9672 G5 and G6 and the 9674 R06 models. A four-port ISC-3 card structure is provided on the z990 processors. It consists of a mother card with two daughter cards, each with two ports. Each daughter card is capable of operation at 1 Gbps in Compatibility Mode or 2 Gbps in Peer Mode up to a distance of 10 km. From 10 to 20 km, an RPQ card, which comes in two-port increments, is available and runs at 1 Gbps in both Peer and Compatibility Modes. The mode is selected for each port via CHPID type in the IOCDS. The ports are activated in one-port increments.
• ISC-2. HiPerLinks. HiPerLinks, based on single-mode
CF links, are available on 9672s (G3 - G6) and 9674s
only. ISC-3s replace HiPerLinks on zSeries 900 and z990
models.
• ICB-2. The Integrated Cluster Bus-2 is used to provide high-speed coupling communication between a zSeries server or CF and a 9672 G5/G6 server or CF over short distances (~7 meters). For longer distances, ISC links must be used. The z990 features the STI-2 card, which resides in an I/O cage and provides 2 ICB-2 ports, each capable of up to 333 MBps. The ports are activated in one-port increments. Up to 4 STI-2 cards (8 ICB-2 links) are available on the z990.
• ICB-3. The Integrated Cluster Bus-3 is used to provide high-speed coupling communication between a z990 server or CF and a z800/z900 server or CF, or between two z800/z900s, over short distances (~7 meters). For longer distances, ISC-3 links must be used. The z990 features the STI-3 card, which resides in an I/O cage and provides 2 ICB-3 ports, each capable of up to 1 GBps. The ports are activated in one-port increments. Up to 8 STI-3 cards (16 ICB-3 links) are available on the z990. ICB-3 links operate in “Peer Mode.”
• ICB-4. The Integrated Cluster Bus-4 is a “native” coupling connection available for connecting a z990 server or CF to another z990 server or CF over short distances. Capable of up to 2.0 GBps, the ICB-4 is the fastest external coupling connection available for the z990. The ICB-4 connection consists of one link that directly attaches to an STI port on the system and does not require connectivity to a card in the I/O cage. One feature is required for each end of the link. Up to 16 ICB-4 features can be configured on a z990, depending on the model selected.
• IC. The Internal Coupling channel emulates the Coupling Links, providing connectivity between images within a single server. No hardware is required; however, a minimum of 2 CHPID numbers must be defined in the IOCDS. IC links provide the fastest Parallel Sysplex connectivity.
Intelligent Resource Director
Exclusive to IBM’s z/Architecture is Intelligent Resource
Director (IRD), a function that optimizes processor and
channel resource utilization across Logical Partitions
(LPARs) based on workload priorities. IRD combines the
strengths of the zSeries LPARs, Parallel Sysplex clustering,
and z/OS Workload Manager.
Intelligent Resource Director uses the concept of an
LPAR cluster, the subset of z/OS systems in a Parallel
Sysplex cluster that are running as LPARs on the same
zSeries server. On a z990, systems that are part of the
same LPAR cluster may be in different LCSSs. In a Parallel
Sysplex environment, Workload Manager directs work to
the appropriate resources based on business policy. With
IRD, resources are directed to the priority work. Together,
Parallel Sysplex technology and IRD provide flexibility and
responsiveness to on demand e-business workloads that
is unrivaled in the industry.
IRD has three major functions: LPAR CPU Management, Dynamic Channel Path Management, and Channel Subsystem Priority Queuing.
[Diagram: IRD scope – a z990 LPAR cluster containing z/OS images, alongside Linux, OS/390 and ICF partitions on the same server that sit outside the cluster. A companion diagram shows 4 x 2.0 GBps STIs feeding the I/O cage: ICB-4 (2.0 GBps) links attach directly to STI ports for z990-to-z990 connections, an STI-3 MUX provides ICB-3 (1 GBps) links to z800/z900, and an STI-2 MUX provides ICB-2 (333 MBps) links to G5/G6.]
LPAR CPU Management
LPAR CPU Management allows WLM working in goal mode to manage the processor weighting and logical processors across an LPAR cluster. CPU resources are automatically moved toward LPARs with the greatest need by adjusting the partition’s weight. WLM also manages the available processors by adjusting the number of logical CPs in each LPAR. This helps optimize the processor speed and multiprogramming level for each workload, helps reduce MP overhead, and helps give z/OS more control over how CP resources are distributed to help meet your business goals.
z/OS 1.2 enhances the LPAR CPU management capabilities and allows the dynamic assignment of CPU resources to non-z/OS partitions outside the z/OS LPAR cluster, such as Linux or z/VM.
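As a rough illustration of the weight-adjustment idea (the function, step size and “need” scores below are invented for this sketch and are not WLM’s actual algorithm):

```python
def rebalance_weights(weights, need, step=5):
    """Shift a slice of LPAR weight from the partition with the least
    unmet need to the partition with the greatest, the way goal-mode
    CPU management nudges resources toward priority work.

    weights: {lpar: current weight}; need: {lpar: unmet-demand score}
    """
    donor = min(weights, key=lambda lpar: need[lpar])
    receiver = max(weights, key=lambda lpar: need[lpar])
    if donor != receiver and weights[donor] > step:
        weights = dict(weights)        # leave the caller's dict intact
        weights[donor] -= step
        weights[receiver] += step
    return weights
```

Repeated small adjustments like this, rather than one large reconfiguration, are what let resources track a constantly shifting workload.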
Dynamic Channel Path Management
In the past, and on other architectures, I/O paths were defined with a fixed relationship between processors and devices. With z/OS and the zSeries, paths may be dynamically assigned to control units to reflect the I/O load. For example, in an environment where an installation normally requires four channels to several control units, but occasionally needs as many as six, system programmers would otherwise have to define all six channels to each control unit that may require them. With Dynamic Channel Path Management (DCM), the system programmer need only define the four channels to the control units and indicate that DCM may add an additional two. As a control unit becomes more heavily used, DCM may assign channels to it from a pool of managed channels identified by the system programmer. If the work shifts to other control units, DCM will unassign channels from less utilized control units and assign them to what are now the more heavily used ones. DCM applies to ESCON and FICON Bridge channels and can help reduce the number of channels required to effectively run a workload. DCM can also help reduce the cost of the fiber infrastructure required for connectivity between multiple data centers. On a z990 with Logical Channel SubSystems (LCSSs), the scope of DCM management is within a Logical Channel SubSystem. Although an LPAR cluster can span LCSSs, when DCM is used it will only consider systems in the same LPAR cluster and the same LCSS.
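The pooling idea behind DCM can be sketched as follows (an illustrative model only; the threshold, names and data shapes are invented, not DCM’s real decision logic):

```python
def assign_managed_channels(defined, pool, utilization, threshold=0.7):
    """Hand channels from a managed pool to the busiest control units,
    leaving each control unit's explicitly defined channels in place.

    defined: {cu: [channel, ...]}  channels defined by the programmer
    pool: spare channels the manager may assign
    utilization: {cu: load in 0..1}
    """
    assignments = {cu: list(chs) for cu, chs in defined.items()}
    spares = list(pool)
    # busiest control units get first claim on the managed pool
    for cu in sorted(utilization, key=utilization.get, reverse=True):
        if spares and utilization[cu] > threshold:
            assignments[cu].append(spares.pop())
    return assignments, spares
```

Run periodically, a policy of this shape lets a small shared pool cover occasional peaks that would otherwise require extra channels defined to every control unit.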
Channel Subsystem Priority Queuing
The notion of I/O Priority Queuing is not new; it has been in
place in OS/390 for many years. With IRD, this capability
is extended into the I/O channel subsystem. Now, when
higher priority workloads are running in an LPAR cluster,
their I/Os will be given higher priority and will be sent to
the attached I/O devices (normally disk but also tape and
network devices) ahead of I/O for lower priority workloads.
LPAR priorities are managed by WLM in goal mode.
Channel Subsystem Priority Queuing provides two advantages. First, customers who did not share I/O connectivity via MIF (Multiple Image Facility), out of concern that a lower priority I/O-intensive workload might preempt the I/O of higher priority workloads, can now share the channels and reduce costs. Second, high priority workloads may even benefit from improved performance if there was I/O contention with lower priority workloads. Initially, Channel Subsystem Priority Queuing is implemented for Parallel OEMI, ESCON, FICON Bridge and native FICON channels.
On a z990, the scope of Channel Subsystem I/O Priority
Queuing is a Logical Channel SubSystem.
Channel Subsystem Priority Queuing complements the
IBM Enterprise Storage Server capability to manage I/O
priority across CECs.
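The queuing discipline can be illustrated with a small priority queue (the class and its interface are invented for this sketch; in the real system the priorities come from WLM):

```python
import heapq

class PriorityIOQueue:
    """Pending I/O requests are started in priority order rather than
    first-in first-out; equal priorities keep their arrival order."""

    def __init__(self):
        self._heap = []
        self._arrival = 0          # tie-breaker preserving FIFO order

    def queue_io(self, priority, request):
        # heapq pops the smallest tuple, so negate: big priority first
        heapq.heappush(self._heap, (-priority, self._arrival, request))
        self._arrival += 1

    def start_next_io(self):
        return heapq.heappop(self._heap)[2]
```

A low-priority batch write queued first no longer delays a later, higher-priority transaction read, which is exactly why sharing channels becomes safe.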
With IRD, the combination of z/OS and zSeries working in synergy extends the world-class workload management tradition of S/390 and OS/390 to ensure that the most important work on a server meets its goals, to increase the efficiency of existing hardware, and to reduce the amount of intervention in a constantly changing environment.
Parallel Sysplex Professional Services
IBM provides extensive services to assist customers
with migrating their environments and applications to
benefit from Parallel Sysplex clustering. A basic set of
IBM services is designed to help address planning and
early implementation requirements. These services can
help you reduce the time and costs of planning a Parallel
Sysplex environment and moving it into production. An
advanced optional package of services is also available
and includes data sharing application enablement, project
management and business consultation through advanced
capacity planning and application stress testing. For more
information on Parallel Sysplex Professional Services, visit
IBM’s Web site at ibm.com/servers/eserver/zseries/pso/
services.html.
Geographically Dispersed Parallel Sysplex
The GDPS solution based on Peer-to-Peer Remote Copy (PPRC), referred to as GDPS/PPRC, is designed with the attributes of a continuous availability solution. PPRC is a hardware solution that is designed to synchronously mirror data residing on a set of disk volumes, called the primary volumes, in site 1 to secondary disk volumes on a second system in site 2. Only when the primary storage subsystem receives “write complete” from the secondary storage subsystem is the application I/O signaled completed. GDPS/PPRC complements a multisite Parallel Sysplex environment by providing a single, automated solution to dynamically manage disk and tape storage subsystem mirroring, processors, and network resources, allowing a business to attain “continuous availability” and near transparent business continuity/disaster recovery without data loss. GDPS/PPRC provides the ability to perform a controlled site switch for both planned and unplanned site outages, while maintaining full data integrity across multiple storage subsystems. GDPS/PPRC is designed to be application independent and therefore is expected to be able to cover the customer’s complete application environment. GDPS supports both the synchronous Peer-to-Peer Remote Copy (PPRC) and the asynchronous Extended Remote Copy (XRC) forms of remote copy. GDPS/PPRC is a continuous availability and near transparent business continuity/disaster recovery solution that is designed to allow a customer to meet a Recovery Time Objective (RTO) of less than an hour and a Recovery Point Objective (RPO) of no data loss, and it protects against metropolitan area disasters (up to 40 km between sites). GDPS/XRC is a business continuity/disaster recovery solution that allows a customer to meet an RTO of one to two hours and an RPO of less than a minute, and it helps protect against metropolitan as well as regional disasters, since the distance between sites is unlimited. XRC can use either common communication links and channel extender technology or dark fiber as the connectivity between sites.
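The synchronous mirroring protocol described above can be sketched in a few lines (dicts stand in for the two storage subsystems; the function name and shapes are invented for illustration):

```python
def pprc_synchronous_write(primary, secondary, volume, record):
    """Mirror a write: the application's I/O is signaled complete only
    after the secondary subsystem acknowledges its copy of the data."""
    primary[volume] = record                  # write reaches site 1
    secondary[volume] = record                # synchronously mirrored
    if secondary.get(volume) != record:       # "write complete" check
        raise IOError("no write complete from secondary")
    return "I/O complete"                     # only now told to the app
```

The cost of this guarantee is that every write waits on the cross-site round trip, which is why synchronous PPRC is bounded to metropolitan distances while asynchronous XRC is not.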
On the other hand, the GDPS solution based on Extended Remote Copy (XRC), referred to as GDPS/XRC, has the attributes of a disaster recovery solution. XRC is a combined hardware and software asynchronous remote copy solution. The application I/O is signaled completed when the data update to the primary storage is completed. Subsequently, a DFSMSdfp™ component called System Data Mover (SDM), typically running in site 2, is designed to asynchronously offload data from the primary storage subsystem’s cache and update the secondary disk volumes. GDPS is designed to manage the remote copy configuration and perform failure recovery from a single point of control, thereby helping to improve application availability. GDPS
supports both the synchronous Peer-to-Peer Remote Copy
(PPRC), as well as the asynchronous Extended Remote
Copy (XRC) forms of remote copy. Depending on the form
of remote copy, the solution is referred to as GDPS/PPRC
or GDPS/XRC.
GDPS/PPRC and GDPS/XRC have been enhanced to
include new functions.
GDPS/PPRC HyperSwap function: The GDPS/PPRC HyperSwap function is designed to broaden the continuous availability attributes of GDPS/PPRC by extending the Parallel Sysplex redundancy to disk subsystems.
Planned HyperSwap function provides the ability to:
• Transparently switch all primary PPRC disk subsystems with the secondary PPRC disk subsystems for a planned reconfiguration
• Perform disk configuration maintenance and planned site maintenance without requiring any applications to be quiesced.
Planned HyperSwap function became generally available December 2002.
Unplanned HyperSwap function adds the ability to transparently switch to the secondary PPRC disk subsystems in the event of unplanned outages of the primary PPRC disk subsystems or a failure of the site containing the primary PPRC disk subsystems. Unplanned HyperSwap support can allow:
• Production systems to remain active during a disk subsystem failure. Disk subsystem failures will no longer constitute a single point of failure for an entire Parallel Sysplex.
• Production servers to remain active during a failure of the site containing the primary PPRC disk subsystems if applications are cloned and exploiting data sharing across the two sites. Even though the workload in the second site will need to be restarted, an improvement in the Recovery Time Objective (RTO) will be accomplished.
Unplanned HyperSwap function became generally available February 2004.
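At its core, HyperSwap exchanges the roles of the mirrored device pairs while applications keep addressing the same devices. A toy model of that idea (the data structure is invented for illustration, not GDPS’s actual mechanism):

```python
def hyperswap(device_map):
    """Swap every primary/secondary pair in place: after the swap, I/O
    issued to the same device address lands on the former secondary.

    device_map: {device_address: (primary_volume, secondary_volume)}
    """
    return {addr: (secondary, primary)
            for addr, (primary, secondary) in device_map.items()}
```

Because the device addresses never change, applications need not be quiesced while the underlying volumes trade roles.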
GDPS/PPRC management for open systems LUNs (Logical Unit Numbers): GDPS/PPRC technology has been extended to manage a heterogeneous environment of z/OS and open systems data. If installations share their disk subsystems between the z/OS and open systems platforms, GDPS/PPRC, running in a z/OS system, can manage the PPRC status of devices that belong to the other platforms and are not even defined to the z/OS platform. GDPS/PPRC can also provide data consistency across both z/OS and open systems data.
GDPS/PPRC management of open systems LUNs became generally available February 2004.
GDPS supports PPRC over FCP links: In 2003, IBM TotalStorage Enterprise Storage Server (ESS) announced support of PPRC over Fibre Channel for the ESS Model 800. (Refer to Hardware Announcement 103-298, RFA38991, dated October 14, 2003.) This support is designed to provide improved throughput (compared to ESCON) and a reduction in cross-site connectivity (two PPRC Fibre Channel links per ESS are considered sufficient for most customer workloads). One of the potential benefits of this support is the ability for customers to increase the distance between sites while maintaining acceptable performance.
GDPS/PPRC support for PPRC over Fibre Channel became generally available February 2004.
GDPS supports FlashCopy® V2 elimination of the Logical SubSystem (LSS) constraint: In 2003, IBM TotalStorage Enterprise Storage Server announced support of FlashCopy V2. (Refer to Hardware Announcement 103-141, dated May 13, 2003.) Prior to this announcement, both source and target volumes had to reside on the same LSS within the disk subsystem. Since this constraint has been removed with FlashCopy V2, GDPS can now allow a FlashCopy from a source in one LSS to a target in a different LSS within the same disk subsystem. This new flexibility can help simplify administration and capacity planning for FlashCopy.
GDPS/PPRC support for FlashCopy V2 became generally available February 2004.
GDPS/PPRC and Cross-site Parallel Sysplex distance extended to up to 100 km: On October 31, 2003, IBM delivered, via a Request for Price Quote (RPQ), the capability to configure GDPS/PPRC or a multi-site Parallel Sysplex up to a distance of up to 100 kilometers (62 miles) between two sites. This extended distance can potentially decrease the risk that the same disaster will affect both sites, thus permitting enterprises to recover production applications at another site. Support has been extended up to a distance of up to 100 km from the current capability of up to 50 km (31 miles) for:
• External Time Reference (ETR) links – an ETR link on a zSeries or S/390 server provides attachment to the Sysplex Timer
• InterSystem Channel-3 (ISC-3) links operating in Peer Mode – ISC-3 links, supported on all zSeries servers, connect z/OS and OS/390 systems to Coupling Facilities in a Parallel Sysplex environment.
The extended distance support for ETR and ISC-3 links is now consistent with other cross-site link technologies that currently support up to 100 km between two sites (such as FICON, Peer-to-Peer Remote Copy (PPRC) and Peer-to-Peer Virtual Tape Server (PtP VTS)). It should be noted that the maximum fiber optic cable distance between a pair of Sysplex Timers in an Expanded Availability configuration remains at 40 km (25 miles). Therefore, to achieve the extended distance of up to 100 km between sites, one of the options to be considered is locating one of the Sysplex Timers in an intermediary site that is less than 40 km from one of the two sites. Other potential options can be evaluated when the RPQ request is submitted to IBM for review.
Coordinated near continuous availability and disaster recovery for Linux guests
z/VM 5.1 provides a new HyperSwap function so that the virtual device associated with one real disk can be swapped transparently to another. HyperSwap can be used to switch to secondary disk storage subsystems mirrored by Peer-to-Peer Remote Copy (PPRC).
HyperSwap can also be helpful in data migration scenarios, allowing applications to use new disk volumes.
GDPS plans to exploit the new z/VM HyperSwap function to provide a coordinated near continuous availability and disaster recovery solution for z/OS and Linux guests running under z/VM. This innovative disaster recovery solution requires GDPS, IBM Tivoli System Automation for Linux, Linux on zSeries, and z/VM V5.1, and is designed to help anticipate and rapidly respond to business objectives and technical requirements while maintaining unsurpassed system availability. This solution may be especially valuable for customers who share data and storage subsystems between z/OS and Linux on zSeries.
To support planned and unplanned outages, GDPS is designed to provide the following recovery actions:
• Re-IPL in place of failing operating system images
• Site takeover/failover of a complete production site
• Coordinated planned and unplanned HyperSwap of storage subsystems, transparent to the operating system images and applications using the storage
Performance enhancements for GDPS/PPRC and GDPS/XRC configurations
• Concurrent activation of Capacity BackUp (CBU) can now be performed in parallel across multiple servers, which may result in an improved RTO. This improvement may apply to both the GDPS/PPRC and GDPS/XRC configurations.
• In a GDPS/XRC configuration, it is often necessary to have multiple System Data Movers (SDMs). The number of SDMs is based on many factors, such as the number of volumes being copied and the I/O rate. Functions are now capable of being executed in parallel across multiple SDMs, thus helping to provide improved scalability for a coupled SDM configuration.
• Analysis has shown that PPRC commands issued by GDPS can generate a large number of Write to Operator messages (WTOs) that may cause WTO buffer shortages and temporarily degrade system performance. The Message Flooding Automation function is expected to substantially reduce the WTO message traffic and improve system performance by suppressing redundant WTOs.
Performance enhancements for GDPS/PPRC and GDPS/XRC became generally available March 2003.
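The suppression idea in the last bullet can be sketched as a sliding-window filter (the policy and window size are invented for the example and are not the actual Message Flooding Automation logic):

```python
def suppress_redundant_wtos(messages, window=3):
    """Issue a WTO only if the same message text has not appeared
    among the last `window` messages, thinning a flood of repeats."""
    issued, recent = [], []
    for text in messages:
        if text not in recent:
            issued.append(text)
        recent.append(text)
        recent = recent[-window:]       # keep only the sliding window
    return issued
```

Distinct messages always get through; only back-to-back repeats of the same text are dropped, which is what relieves the WTO buffers.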
These GDPS enhancements are applicable to z800, z900,
z890, and z990. For a complete list of other supported
hardware platforms and software prerequisites, refer to
the GDPS executive summary white paper, available at:
ibm.com/server/eserver/zseries/pso
Automatic Enablement of CBU for Geographically Dispersed
Parallel Sysplex
GDPS CBU is intended to enable automatic management
of the reserved PUs provided by the CBU feature
in the event of a processor failure and/or a site failure.
Upon detection of a site failure, GDPS will dynamically
add PUs to the configuration in the takeover site to restore
processing power for mission-critical production workloads.
GDPS-CBU management helps to minimize manual
customer intervention and the potential for errors, thereby
helping to reduce the outage time for critical workloads
from hours to minutes. Similarly, GDPS-CBU management
can also automate the process of dynamically returning
the reserved CPs when the temporary period has expired.
GDPS is discussed in a white paper available at ibm.com/
server/eserver/zseries/pso/library.html. GDPS is a service
offering of IBM Global Services. For IBM Installation Ser-
vices for GDPS, refer to the IBM Web site.
Message Time Ordering (Sysplex Timer Connectivity to Coupling
Facilities)
As processor and Coupling Facility link technologies have
improved over the years, the requirement for time synchro-
nization tolerance between systems in a Parallel Sysplex
environment has become ever more rigorous. In order
to ensure that any exchanges of timestamped informa-
tion between systems in a sysplex involving the Coupling
Facility observe the correct time ordering, time stamps are
now included in the message-transfer protocol between
the systems and the Coupling Facility. Therefore, when a
Coupling Facility is configured as an ICF on any z990 or
z900 Models 2C1 through 216, the Coupling Facility will
require connectivity to the same 9037 Sysplex Timer that
the systems in its Parallel Sysplex cluster are using for
time synchronization. If the ICF is on the same server as a
member of its Parallel Sysplex environment, no additional
connectivity is required, since the server already has
connectivity to the Sysplex Timer. However, when an ICF
is configured on any z990 or z900 Models 2C1 through
216 that does not host any systems in the same Parallel
Sysplex cluster, it is necessary to attach the server to the
9037 Sysplex Timer.
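The role of the shared timer can be sketched abstractly: when every participant stamps its messages from one common clock, a receiver can detect exchanges that would violate time ordering. The following toy model is illustrative only, not the actual Coupling Facility message-transfer protocol:

```python
class TimeOrderedReceiver:
    """Toy model of message time ordering: a receiver tracks the
    newest timestamp seen and flags any message stamped earlier,
    which can only happen if senders' clocks disagree."""

    def __init__(self) -> None:
        self.last_seen = 0.0

    def accept(self, timestamp: float) -> bool:
        # Reject messages that appear to travel backward in time
        # relative to the common (Sysplex Timer) clock.
        if timestamp < self.last_seen:
            return False
        self.last_seen = timestamp
        return True
```

With a single shared clock, such an out-of-order condition never arises, which is why the ICF must connect to the same 9037 Sysplex Timer as its sysplex members.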
z900 Turbo or z890/z990 Model with ICF
and non-Parallel Sysplex LPARs
(Diagram: an IBM ^ z900 or z890/z990 server hosting Coupling Facility CF01 (ICF) and non-sysplex LPARs, connected over ICB-3/ICB-4/ISC-3 links to a second IBM ^ z900 or z890/z990 server hosting sysplex z/OS LPARs and CF02 (ICF) with an IC link. Each server whose ICF serves a Parallel Sysplex it does not otherwise participate in requires a new connection to the Sysplex Timer.)
Continuous Availability Recommended Configuration for
Parallel Sysplex
(Diagram: two z/OS servers with ESCON/FICON Express channels, two Sysplex Timers, a dedicated external Coupling Facility — z990 with CFs only, z900 Model 100, 9674, 9672 R06, or z800 Model 0CF — and an Internal Coupling Facility connected via IC/ICF links.)
Note: z990 will attach to 9037-001 or 9037-002. Service for 9037-001
will be discontinued at the end of 2003.
Components and assumptions
• Two Coupling Facilities; at least one external or else
using System-Managed CF Structure Duplexing
• Two Sysplex Timers
• Two z/OS or OS/390 servers with redundant backup
capacity
• Two links from each CF to each image
• Two hardware management consoles
• Two ESCON or FICON Directors with cross-connected
disks
• Dual electrical power grids
• Cloned OS/390 images, latest available software levels
• Automation capabilities for recovery/restart
• Critical data on RAID and/or mirrored disks
Key attributes can include
• No single point of failure
• Fast, automatic recovery
– CF: rebuild in surviving CF
– CEC, z/OS, OS/390: restart subsystems on surviving
image
– TM/DBMS: restart in place
• Surviving components absorb new work
• No service loss for planned or unplanned outages
• Near unlimited, plug-and-play growth capacity
z990 Support for Linux
Linux on zSeries
Linux and zSeries are a great team. Linux has the same
appearance and application programming interfaces no
matter what platform it is running on, since it is designed to
be platform-independent. When Linux is run on a zSeries
server it can inherit the legendary qualities of service that
businesses worldwide rely on for hosting their most impor-
tant workloads. Linux is open standards-based, supporting
rapid application portability and can be adapted to suit
changing business needs. The flexibility and openness of
Linux make it very popular with developers, whose contri-
butions endow Linux with an extensive and diverse appli-
cation portfolio. Linux on zSeries can scale within a single
server, either horizontally or vertically. Hundreds of Linux
images can run simultaneously, providing server consoli-
dation capabilities while helping to reduce both cost and
complexity.
Of course, no matter which Linux applications are brought
to the zSeries platform, they can all benefit from high-
speed access to the applications and corporate data that
reside on zSeries.
IBM developed the code that enables Linux to run on
zSeries servers, and made it available to the Open Source
community. The term used to describe this enabling code
is “patches.”
To eliminate the need for an external 2074 Console control-
ler and associated consoles, an administrator may utilize
the Hardware Management Console (HMC) functions "Inte-
grated 3270 Console Support" for operating z/VM images,
and "Integrated ASCII Console Support" to operate Linux
images.
The support is implemented using an internal communi-
cations method, SCLP, which enables the operating
system to communicate with the HMC. The software
port was made available in z/VM Version 4 Release 4.
An update for Linux will be made available to IBM Linux
Distribution Partners.
Linux on zSeries supports the 64-bit architecture avail-
able on zSeries processors. This architecture eliminates
the existing main storage limitation of 2 GB. Linux on
zSeries provides full exploitation of the architecture in both
real and virtual modes. Linux on zSeries is based on the
Linux 2.4 kernel. Linux on S/390 is also able to execute on
zSeries and S/390 in 32-bit mode.
IBM Middleware
• Connectors
– DB2 Connect™ Version 8.1
– DB2 Connect Enterprise Edition Version 7.2
– DB2 Connect Unlimited Edition Version 7.2
– CICS Transaction Gateway Version 5.0
– IMS Connect Version 1.1 and 1.2
• WebSphere Family
– WebSphere Application Server Version 5.0
– WebSphere Application Server for Developers Version 5.0
– WebSphere Application Server Network Deployment Version 5.0
– WebSphere Application Server Advanced Edition 4.0
– WebSphere Application Server Advanced Single Server Edition Version 4.0
– WebSphere Application Server Advanced Developer Edition Version 4.0
– WebSphere Application Server Advanced Edition Version 3.5
– WebSphere Commerce Business Edition Version 5.4
– WebSphere Host On-Demand Version 7.0 and 6.0
– WebSphere MQ Everyplace Version 2.0 and 1.2
– WebSphere MQ Version 5.3
– WebSphere Personalization Server for Multiplatforms Version 4.0
– WebSphere Personalization Server Version 3.5
– WebSphere Portal Server for Multiplatforms Version 4.1 and 4.2
• Data Management
– DB2 Universal Database Enterprise Server Edition Version 8.1
– DB2 Universal Developers Edition Version 8.1
– DB2 Personal Developers Edition Version 8.1
– DB2 Net.Data® Version 8.1
– DB2 Runtime Client Version 8.1
– DB2 Spatial Extender Version 8.1
– DB2 Intelligent Miner™ Modeling Version 8.1
– DB2 Intelligent Miner Scoring Version 8.1
– DB2 Intelligent Miner Visualization Version 8.1
– DB2 Net Search Extender Version 8.1
– DB2 Universal Database Enterprise Edition Version 7.2
– DB2 Universal Database Developers Edition Version 7.2
– DB2 Intelligent Miner Scoring Version 7.1
– DB2 Net Search Extender Version 7.2
• Tivoli
– Tivoli Access Manager for e-business Versions 3.9 and 4.1
– Tivoli Access Manager for Operating Systems Version 4.1
– Tivoli Configuration Manager Version 4.2
– Tivoli Decision Support for OS/390 Version 1.5.1
– Tivoli Distributed Monitoring Version 4.1
– Tivoli Enterprise Console Version 3.8 and 3.7.1
– Tivoli Identity Manager Version 1.1
– Tivoli Monitoring for Transaction Performance Version 5.1
– Tivoli Monitoring Version 5.1.1 and 5.1
– Tivoli NetView for z/OS Version 5.1
– Tivoli Remote Control Version 3.8
– Tivoli Risk Manager Version 4.1 and 3.8
– Tivoli Software Distribution Version 4.0
– Tivoli Storage Manager™ Versions 5.1.5 and 5.1
– Tivoli Storage Manager Client Version 4.2
– Tivoli Switch Analyzer Version 1.2
– Tivoli User Admin Version 3.8
– Tivoli Workload Scheduler Version 8.1
• Informix
– Informix C-ISAM
• U2
– IBM UniData Version 5.2x
• Other IBM Software Products
– IBM Application Workload Modeler Version 1.1 and Release 1
– IBM Developer Kit Versions 1.4 and 1.3.1
– IBM Directory Integrator Version 5.1
– IBM Directory Server Versions 5.1 and 4.1
– IBM HTTP Server Version 1.3.19.3
– IBM Object REXX Version 2.2
– IBM Screen Customizer Versions 2.0.7 and 2.0.6
Linux Distribution Partners
• SUSE LINUX
Product Information at suse.de/en/produkte/susesoft/S390/
• Turbolinux
Product Information at turbolinux.com/products/s390
• Red Hat Linux
Product Information at redhat.com/software/S390
z/VM Version 4 and Version 5
z/VM supports Linux on the mainframe. Within the VM
environment, Linux images benefit from the ability to
share hardware and software resources and use internal
high-speed communications. While benefiting from the
reliability, availability and serviceability of IBM ^
zSeries servers, both z/VM V4 and V5 offer an ideal plat-
form for consolidating Linux workloads on a single physi-
cal server which allows you to run tens to hundreds of
Linux images. z/VM V4 is priced on a per-engine basis
(one-time charge) and supports IBM Integrated Facility for
Linux (IFL) processor features for Linux based workloads,
as well as standard engines for all other zSeries and S/390
workloads. Engine-based Value Unit pricing for z/VM V5.1
is replacing the pricing model available with z/VM V4.
Engine-based Value Unit pricing is designed to provide a
lower entry point and a decreasing price curve which will
help provide improved price/performance as hardware
capacities and workload grow. Value Unit pricing helps
you to add capacity and workload with an incremental
and improved price and the ability to aggregate licenses
acquired across machines that are part of your enterprise.
Integrated Facility for Linux (IFL)
The Integrated Facility for Linux feature of the zSeries serv-
ers provides a way to add processing capacity, exclusively
for Linux workloads, with minimal effect on the model des-
ignation. Operating systems like z/OS, TPF, and VSE/ESA
will not execute on Integrated Facility for Linux engines.
Consequently, these engines will not necessarily affect
the software charges for workload running on the other
engines in the system.
OSA-Express Ethernet for Linux
Driver support is provided for the functions of the new
OSA-Express Gigabit Ethernet and 1000BASE-T Ethernet
features.
OSA-Express Enhancements for Linux
A new function, Checksum Offload, offered for the new
OSA-Express GbE and 1000BASE-T Ethernet features,
is available for the Linux on zSeries and z/OS environ-
ments. Checksum Offload provides the capability of
calculating the Transmission Control Protocol (TCP),
User Datagram Protocol (UDP), and Internet Protocol (IP)
header checksums on the adapter. A checksum verifies
the integrity of transmitted data. By moving the checksum
calculations to a Gigabit or 1000BASE-T Ethernet feature,
host CPU cycles are reduced and performance is improved.
When checksum is offloaded, the OSA-Express feature
performs the checksum calculations for Internet Protocol
Version 4 (IPv4) packets.
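The header checksums being offloaded are the standard Internet ones'-complement checksum defined in RFC 1071. A minimal Python rendering of the calculation the adapter performs (illustrative only; the feature implements this in hardware):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement sum over 16-bit words, as used
    for IPv4, TCP, and UDP header checksums."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF
```

Verification on receipt is the same computation: summing an IPv4 header that already contains a correct checksum field yields zero.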
Two important networking technology advancements are
announced in z/VM 4.4 and Linux on z990:
• The high performance adapter interrupt handling first
introduced with HiperSockets is now available for both
OSA-Express in QDIO mode (CHPID=OSD) and FICON
Express (CHPID=FCP). This advancement provides a
more efficient technique for I/O interruptions designed
to reduce path lengths and overhead in both the host
operating system and in the adapter. This benefits OSA-
Express TCP/IP support in both Linux for zSeries and
z/VM.
• The z990’s support of virtual machine technology has
been enhanced to include a new performance assist
for virtualization of adapter interruptions. This new z990
performance assist is available to V=V guests (pageable
guests) that support QDIO on z/VM 4.4. The deployment
of adapter interruptions improves efficiency and perfor-
mance by reducing z/VM Control Program overhead
when handling Linux guest virtual servers.
HiperSockets
HiperSockets can be used for communication between
Linux images and Linux or z/OS images, whether Linux is
running in an IFL LPAR, natively or under z/VM.
Virtual Local Area Network (VLAN) support, IEEE standard
802.1q, is offered for HiperSockets in a Linux on zSeries
environment. VLANs can reduce overhead by allowing
networks to be organized for optimum traffic flow; the
network is organized by traffic patterns rather than physi-
cal location. This enhancement permits traffic to flow on
a VLAN connection both over HiperSockets and between
HiperSockets and an OSA-Express GbE, 1000BASE-T Eth-
ernet, or Fast Ethernet feature.
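An IEEE 802.1q tag is four bytes inserted into the Ethernet header: the 0x8100 tag protocol identifier, followed by a 3-bit priority, a drop-eligible bit, and the 12-bit VLAN ID. A sketch of building one (standard 802.1q layout, not zSeries-specific code):

```python
import struct

def vlan_tag(vid: int, priority: int = 0) -> bytes:
    """Build a 4-byte IEEE 802.1q tag: TPID 0x8100, then the tag
    control information (priority, DEI=0, VLAN ID)."""
    if not (0 <= vid < 4096 and 0 <= priority < 8):
        raise ValueError("VID is 12 bits, priority is 3 bits")
    tci = (priority << 13) | vid  # DEI bit left at 0
    return struct.pack("!HH", 0x8100, tci)
```

It is this VLAN ID that lets a HiperSockets or OSA-Express connection segregate traffic by pattern rather than by physical location.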
Internet Protocol Version 4 (IPv4) broadcast packets
are now supported over HiperSockets. TCP/IP applica-
tions that support IPv4 broadcast, such as OMPROUTE
when running Routing Information Protocol Version 1
(RIPv1), can send and receive broadcast packets over
HiperSockets interfaces. This support is exclusive to z990.
You can transparently bridge traffic between a HiperSockets
network and an external OSA-Express network attachment.
New Linux Layer 2 Switch (Linux L2S) support can help
simplify network addressing between HiperSockets and
OSA-Express. You can now seamlessly integrate Hiper-
Sockets-connected operating systems into external networks
without requiring intervening network routing overhead,
thus increasing performance and simplifying configuration.
The currently available distributions (SUSE SLES 7, SUSE
SLES 8, Red Hat 7.1, and Red Hat 7.2) support z990
compatibility and exploitation of 30 LPARs and 2 Logical
Channel SubSystems. Support to further exploit z990 func-
tions will be delivered as an Open Source Contribution
via www.software.ibm.com/developerworks/opensource/
linux390/index.shtm. IBM is working with its distribution
partners to provide these functions in future distribution
releases.
Fibre Channel Protocol (FCP channel) Support for Linux
Support for FCP channels enables zSeries servers to
connect to select Fibre Channel Switches and FCP/SCSI
devices under Linux on zSeries. This expanded attachabil-
ity provides a larger selection of storage solutions for Linux
implementations.
zSeries 990 Family Configuration Detail
Cryptographic Support for Linux
Linux on zSeries running on standard z990, z900, and
z800 engines is capable of exploiting the hardware cryp-
tographic feature provided by the PCICA feature (PCI
Cryptographic Accelerator). This enables customers
implementing e-business applications on Linux on zSeries
to utilize this enhanced hardware security.
Linux Support
Environment
• z990, z900, z800 or S/390 single image
• zSeries or S/390 LPAR
• VM/ESA® or z/VM guest
Block devices
• VM minidisks
• ECKD™ 3380 or 3390 DASDs
• VM virtual disk in storage
Network devices
• Virtual CTC
• ESCON CTC
• OSA-Express (Gigabit Ethernet, 1000BASE-T Ethernet,
Fast Ethernet, Token-Ring) up to 24 features/48 ports on
z990
• HiperSockets (up to 4,096 TCP/IP stacks on up to 16
HiperSockets on z990)
• 3172
• IUCV
Character devices
• 3215 console
• Integrated console
Additional information is available at ibm.com/linux/ and at
ibm.com/zseries/linux/.
Maximum of 1024 CHPIDs; 3 I/O cages
(28 slots each) = 84 I/O slots.
Per System (1)
Feature | Minimum Features | Maximum Features/I/O Slots Used | Maximum Connections | Channels/Ports per Feature | Purchase Increment
16-port ESCON | 0 | 69 (2) | 1024 channels (3) | 16 channels | 4 channels (4)
FICON Express | 0 | 60 (2, 5) | 120 channels (2) | 2 channels | 1 feature
STI-2 (6) | 0 | 4 | N/A | 2 outputs | N/A
ICB-2 link | 0 | N/A | 8 links (7) | N/A | 1 link
STI-3 (6) | 0 | 8 | N/A | 2 outputs | N/A
ICB-3 link | 0 | N/A | 16 links (7) | N/A | 1 link
ICB-4 link | 0 | N/A (8) | 16 links (7) | N/A | 1 link
ISC-3 | 0 | 12 | 48 links (7, 9) | 4 links | 1 link (10)
OSA-Express | 0 | 24 (5) | 48 ports | 2 ports | 1 feature
PCICA | 0 | 6 (5, 11, 12) | 12 accelerator cards | 2 accelerator cards | 1 feature
PCIXCC | 0 | 4 (5, 11) | 4 coprocessors | 1 coprocessor | 1 feature (13)
1) A minimum of one I/O feature (ESCON, FICON Express) or one Coupling
Link (ICB, ISC-3) is required.
2) Maximum of 48 ESCON features/720 active channels on Model A08.
Maximum of 48 FICON features/96 channels on A08.
3) Each ESCON feature has 16 channels of which 15 channels may be acti-
vated. One channel is always reserved as a spare.
4) ESCON channels are purchased in increments of four and are activated
via Licensed Internal Code, Configuration Control (LIC CC). Channels
are activated equally across all installed 16-port ESCON features for
high availability.
5) The maximum quantity of FICON Express, OSA-Express, PCICA, and
PCIXCC in combination cannot exceed 20 features per I/O cage and 60
features per server.
6) The STI distribution cards, which support ICB-2 and ICB-3, reside in the
I/O cage. Each STI distribution card occupies one I/O slot.
7) The maximum number of Coupling Links combined (ICs, ICB-2s, ICB-3s,
ICB-4s, and active ISC-3 links) cannot exceed 64 per server.
8) ICB-4s do not require connectivity to a card in the I/O cage. ICB-4s are
not included in the maximum feature count for I/O slots.
9) A maximum of 32 ISC-3s can be defined in Compatibility Mode (operating
at 1 Gbps, instead of 2 Gbps).
10) It is recommended that an initial order for ISC-3 include two links. When
two links are purchased, two ISC-3 features are shipped and activated
links are balanced across ISC-3 features for high availability.
11) The total number of PCICAs and PCIXCCs cannot exceed eight features
per server.
12) The total number of PCICAs cannot exceed two features per I/O cage.
13) PCIXCC feature increments are 0, 2, 3, or 4.
Processor Unit Assignments
Model | Min. PU* | Max. PU | SAPs Standard | Spares Standard
A08** | 1 | 8 | 2 | 2
B16** | 1 | 16 | 4 | 4
C24** | 1 | 24 | 6 | 6
D32** | 1 | 32 | 8 | 8
*PU can be characterized as CP, IFL, ICF, Optional SAPs, unassigned
CPs, and/or unassigned IFLs up to the maximum number of PUs for the model
**Customer will be required to purchase at least one CP, IFL or ICF feature
for any model.
Processor Memory
z990 Model Minimum Maximum
A08 16 GB 64 GB
B16 16 GB 128 GB
C24 16 GB 192 GB
D32 16 GB 256 GB
Max two memory cards per z990 book. Memory cards are 8 GB, 16 GB,
or 32 GB.
Channels
Model | A08 | B16 | C24 | D32
ESCON Min | 0 | 0 | 0 | 0
ESCON Max** | 720 | 1024 | 1024 | 1024
FICON Min* | 0 | 0 | 0 | 0
FICON Max* | 96 | 120 | 120 | 120
*FICON Express and FCP are configured on the same FICON Express
features. Max channels total 120.
**ESCON increments of 4 channels
Links
IC | ICB-2* | ICB-3** | ICB-4 | ISC-3
0-32 | 0-8 | 0-16 | 0-16 | 0-48
Total external and internal links = 64 maximum.
*requires STI-2 card
**requires STI-3 card
Note: At least one I/O channel (FICON, ESCON) or one coupling link (ISC,
ICB) must be present.
Cryptographic Features
 | PCICA (1, 2, 3) | PCIXCC (2, 4)
Minimum | 0 | 0
Maximum | 6 | 4
1. Max two PCICA features per I/O cage
2. Max eight PCICA and PCIXCC features per system
3. Two accelerator cards per PCICA feature
4. One coprocessor per PCIXCC feature
OSA-Express Features
OSA-Express* | Minimum: 0 | Maximum: 24
*Any combination of GbE LX, GbE SX, 1000BASE-T Ethernet, Token-Ring
z990 Frame and I/O Configuration Content: Planning for I/O
The following diagrams show the capability and flexibility
built into the I/O subsystem. All machines are shipped with
two frames, the A-Frame and the Z-Frame, and can have
between one and three I/O cages. Each I/O cage has 28
I/O slots.
Single I/O cage (CEC and 1st I/O cage in the A-Frame):
I/O Feature Type | Features | Maximum
ESCON | 28 cards | 420 channels
FICON Express | 20 | 40 channels
OSA-Express | 20 | 40 ports
PCIXCC | 4 | 4
PCICA | 2 | 4 cards
Maximum combined FICON Express, OSA-Express, PCICA/PCIXCC
features is 20.
Two I/O cages (2nd I/O cage in the Z-Frame):
I/O Feature Type | Features | Maximum
ESCON | 35 cards | 512 channels
FICON Express | 40 | 80 channels
OSA-Express | 24 | 48 ports
PCIXCC | 4 | 4
PCICA | 4 | 8 cards
Maximum combined FICON Express, OSA-Express, PCICA/PCIXCC
features is 40.
Three I/O cages (2nd and 3rd I/O cages in the Z-Frame):
I/O Feature Type | Features | Maximum
ESCON | 69 cards | 1024 channels
FICON Express | 60 | 120 channels
OSA-Express | 24 | 48 ports
PCIXCC | 4 | 4
PCICA | 6 | 12 cards
Maximum combined FICON Express, OSA-Express, PCICA/PCIXCC
features is 60.
General Information:
• ESCON is configured in 4-port increments: up to 28 channels in 2 cards,
60 channels in 4 cards, 88 channels in 6 cards, 120 in 8 cards, etc., up
to a maximum of 69 cards, 1024 channels.
• OSA-Express can be Gigabit Ethernet, 1000BASE-T
Ethernet or Token-Ring.
• Total number of PCIXCC/PCICA is 8 per system.
• If ICB-2 or ICB-3 are required on the system, these will
use up a single I/O slot for every 2 ICB-2 or ICB-3 to
accommodate the STI-2 and STI-3 cards.
Physical Characteristics
Channels and channel adapters no longer supported on z990
The following channels and/or channel adapters are no
longer supported:
• Parallel channels - an ESCON converter is required;
IBM 9034 or Optica 34600 FXBT
• OSA-2 adapters - use equivalent OSA-Express adapt-
ers; for FDDI use 1000BASE-T or Gigabit Ethernet with
appropriate multi-protocol switch or router
• OSA-Express ATM - use 1000BASE-T or Gigabit Ether-
net with appropriate multi-protocol switch or router
• 4-Port ESCON cards - will be replaced with 16-port
ESCON cards during upgrade
• FICON (pre FICON Express) - will be replaced with
FICON Express during upgrade
• PCICC - replaced with PCIXCC for most functions
The first ICB-2 or ICB-3 requires a slot; the second through
fourth require another slot, and the fifth and sixth another.
(STI-2/STI-3 cards each support two ICBs.)
z990 Power/Heating/Cooling
System Power Consumption (kW)
Model/Config | 1 I/O Cage | 2 I/O Cages | 3 I/O Cages
A08 | 5.3 | 7.8 | 10.3
B16 | 7.3 | 9.8 | 12.3
C24 | 9.1 | 11.6 | 13.9
D32 | 10.8 | 13.3 | 15.8
Note: Assumes maximum configuration of I/O cages, 60 amp cords
System Cooling (Air Flow Rate - CFM)
Model/Config | 1 I/O Cage | 2 I/O Cages | 3 I/O Cages
A08 | 1400 | 1800 | 2200
B16 | 1800 | 2200 | 3000
C24 | 2200 | 2600 | 3250
D32 | 2200 | 3000 | 3250
Note: Assumes chilled underfloor temperature of 24°C and maximum
configuration of I/O cages
Heat Output (kBTU/hr)
Model/Config | 1 I/O Cage | 2 I/O Cages | 3 I/O Cages
A08 | 18.02 | 26.52 | 35.02
B16 | 24.82 | 33.32 | 41.82
C24 | 30.94 | 39.44 | 47.26
D32 | 36.72 | 45.22 | 53.72
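The heat output figures are the power consumption figures converted from kW to kBTU/hr. The published values match a conversion factor of 3.4 kBTU/hr per kW (the exact factor is about 3.412); for example, the 5.3 kW Model A08 single-cage configuration gives 5.3 × 3.4 = 18.02 kBTU/hr:

```python
def kw_to_kbtu_per_hr(kw: float, factor: float = 3.4) -> float:
    """Convert power draw in kW to heat output in kBTU/hr, using
    the 3.4 factor the tables above appear to use."""
    return round(kw * factor, 2)
```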
z990 Dimensions
z990
# of Frames 2 Frames
IBF contained within 2 frames
Height (w/ covers) 194.1 cm / 76.4 in (40 EIA)
Width (w/ covers) 157.7 cm / 62.1 in (each frame 30.2 in)
Depth (w/ covers) 157.7 cm / 62.1 in
Height Reduction 178.5 cm / 70.3 in (38 EIA)
Width Reduction None
Machine Area 2.49 sq. meters / 26.78 sq. feet
Service Clearance 5.45 sq. meters / 58.69 sq. feet
(IBF contained within the frame)
Coupling Facility — CF Level of Support
CF Level | Function
13 | Protocol used with fiber channel expected to be more efficient than
ESCON; helps lower Total Cost of Ownership (TCO); only 2 cross-site FCP
links/ESS required for most workloads; can provide better performance;
able to increase distance between sites while maintaining acceptable
application performance; one protocol exchange vs. 2-3 with ESCON
12 | 64-bit support for Coupling Facility, CF Duplexing; Toleration for >15
LPAR IDs on z990; Enhanced Storage Protect; DB2 Performance; Message
Time Ordering
11 | 9672 G5/G6 CF Duplexing; Toleration for >15 LPAR IDs on z990
1 | Dynamic Alter support; CICS temporary storage queues; System
Logger; shared message queues
Notes:
– G5 base CF level code is CF Level 6 and can be upgraded to CF Level 11
– G6 base CF level code is CF Level 8 and can be upgraded to CF Level 11
– z900 base CF level code is CF Level 9
– z800 and z990 base CF level code is CF Level 12
– Detailed information regarding CF Levels can be found in Coupling Facility
Level (CF LEVEL) Considerations at ibm.com/s390/pso/cftable.html
*G3, G4, G5 and G6 only
**zSeries required
Please note that although a particular back-level machine may be updated
to a more current CFCC level, not all the functions of that CFCC level may
be able to run on that hardware platform; e.g., G3/G4 can be upgraded to
CF Level 8 but cannot use dynamic ICF expansion into the shared ICF pool.
Fiber-Optic Cabling and System Connectivity
In the world of open systems and Storage Area Networks
(SANs), the changing requirements for fiber-optic cabling
are directly related to the system hardware configuration.
As industry-standard protocols and higher data rates con-
tinue to be embraced in these environments, the fiber-
cabling options can become numerous and complex.
Today’s marketplace is evolving towards new Small Form
Factor (SFF) fiber-optic connectors, short wavelength (SX)
and long wavelength (LX) laser transceivers, and increas-
ing link speeds from one Gigabit per second (Gbps) to 10
Gbps. New industry-standard SFF fiber-optic connectors
and transceivers are utilized on the zSeries ESCON and
FICON Express features, on the ISC-3 feature, and on the
zSeries ETR feature. These new features must coexist with
the current infrastructure that utilizes a different “family” of
fiber-optic connectors and transceivers.
As a result of this complex and continually changing land-
scape, IBM is providing you with multiple fiber cabling
services options to provide flexibility in meeting your fiber
cabling needs.
IBM Network Integration and Deployment Services for
zSeries fiber cabling (zSeries fiber cabling services)
enables businesses to choose the zSeries configuration
that best matches their computing environment without
having to worry about planning and implementing the
fiber-optic cabling. By teaming with IBM, businesses can
receive a world-class solution for their zSeries fiber optic
connectivity requirements, including consulting and project
management, as well as the fiber-optic jumper cables and
installation to complete the zSeries integration.
zSeries fiber cabling now offers three options to address
a solution for your fiber cable installation. Enterprise fiber
cabling offers two additional options to help meet your
structured (trunking) environment requirements.
zSeries fiber cabling:
• Fiber-optic jumper cabling package
will analyze your zSeries channel configuration and your
existing fiber-optic cabling to determine the appropriate
fiber-optic jumper cables required, then supply, label
and install the fiber-optic jumper cables and complete
the installation with a detailed connection report.
• Fiber-optic jumper migration and reuse for a zSeries
upgrade
will plan, organize, re-label, re-route and re-plug your
existing fiber-optic jumper cables for reuse with the
upgraded zSeries server.
• Fiber-optic jumper cables and installation
will supply the fiber-optic jumper cables you specify,
then label and install the fiber-optic jumper cables.
Enterprise fiber cabling options:
• zSeries fiber-optic trunk cabling package
will analyze your zSeries channel configuration and your
existing fiber-optic infrastructure to determine the appro-
priate fiber-optic harnesses, fiber-optic trunk cables and
the fiber-optic patch panel boxes required, then supply,
label and install the fiber-optic components to connect
your new zSeries server to your existing structured fiber
cabling infrastructure.
• Enterprise fiber cabling package
will analyze your entire data center configuration and
existing fiber-optic infrastructure to determine the appro-
priate end-to-end enterprise solution for connectivity.
This is a customized offering that includes trunk cables,
zone cabinets, patch panels and direct-attach harnesses
for servers, directors and storage devices.
These tailored zSeries fiber cabling options use the same
planning and implementation methodologies as IBM’s cus-
tomized enterprise fiber cabling services, only focused on
your zSeries fiber cabling needs.
Fiber Quick Connect (FQC): FQC, a zSeries configuration
option, helps reduce the cable bulk associated with the
installation of potentially 240 (z800) to 256 (z900) to 420
(z990) ESCON channels in one I/O cage. Fiber harnesses,
which are factory-installed, enable connection to IBM’s
Fiber Transport System (FTS) direct-attach fiber trunk
cables. Each trunk can have up to 72 fiber pairs. Four
trunks can displace the 240 to 256 fiber-optic cables on
the z800 or z900.
In planning for zSeries systems, refer to Planning for:
S/390 Fiber Optic Links (ESCON, FICON, Coupling Links,
and Open System Adapters), GA23-0367, and the Installa-
tion Manual Physical Planning (IMPP) manual. Refer to the
services section of Resource Link for further details on the
zSeries Fiber Cabling Service options and the Fiber Quick
Connect configuration option.
Access Resource Link at ibm.com/servers/resourcelink.
z/OS
While zSeries servers are supported by a number of dif-
ferent operating systems, their most advanced features
are powered by z/OS. z/OS is the foundation for the future
of zSeries, an integral part of the z/Architecture designed
and developed to quickly respond to the demanding qual-
ity of service requirements for on demand businesses.
z/OS is the flagship mainframe operating system based
on the 64-bit z/Architecture. It is designed to deliver the
highest qualities of service for enterprise transactions
and data, and extends these qualities to new applications
using the latest software technologies. It provides a highly
secure, scalable, high-performance base on which to
deploy Internet and Java-enabled applications, providing
a comprehensive and diverse application
execution environment. z/OS takes advantage of the latest
software technologies: new object-oriented programming
models that permit the rapid design, development and
deployment of applications essential to on demand busi-
nesses. It helps protect your investment in your present
mainframe applications by providing options for modern-
izing existing applications and integrating them with new
on demand applications, all within a single image of z/OS.
It provides a solid base for new applications, supporting
new technologies such as Enterprise JavaBeans™, XML,
HTML, and Unicode, Parallel Sysplex clustering, highly
available TCP/IP networking and dynamic workload and
resource balancing.
Integrated system services
z/OS helps make critical data and processing functions
accessible to end users regardless of their location in the
heterogeneous on demand world. The z/OS base includes
z/OS Communications Server, which enables world class
TCP/IP and SNA networking support, including mainframe
dependability, performance, and scalability; highly secure
connectivity; support for multiple protocols; and efficient
use of networking assets.
This integrated set of system services in z/OS can help a
customer to focus on extracting the maximum business
value from the z/OS installation. The system manages the
workload, program libraries and I/O devices. Complexities
are designed to be minimized and problem determina-
tion is facilitated with the sophisticated recovery, reporting
and debug facilities of z/OS. The z/OS operating system
combines many features that change the playing field of IT
infrastructure design:
• Support for zSeries Application Assist Processors
(zAAP), an attractively priced special processing unit
that provides an economical z/OS Java language-based
execution environment
• Intelligent Resource Director expands the capabilities of
z/OS Workload Manager to react to changing conditions
and prioritize critical business workloads.
• Support for 64-bit real memory and 64-bit virtual storage.
• A new installation and configuration infrastructure that
simplifies the installation and configuration of z/OS and
related products.
56
• Software pricing models designed to support on
demand reality
z/OS 1.6 is the first release of z/OS that requires the
z/Architecture. This release will run only on zSeries servers
(z800, z900, z890, z990) or equivalent servers.
z/OS.e
z/OS.e is unique to the z800 and z890, providing select
function at an exceptional price. z/OS.e is intended to help
customers exploit the fast-growing world of on demand
business by making the deployment of new applications
on the z800 and z890 very attractively priced.
z/OS.e uses the same code base as z/OS with custom
parameters and invokes an operating environment that is
comparable to z/OS in service, management, reporting,
and reliability. In addition, z/OS.e invokes zSeries hard-
ware functionality just as z/OS does. No new z/OS skills
and service procedures are required for z/OS.e.
For more information on z/OS.e, see the IBM ^ zSeries 890 and z/OS Reference Guide.
Intelligent Resource Director
Intelligent Resource Director (IRD) is a key feature of the
z/Architecture which extends the Workload Manager
to work with PR/SM on zSeries servers to dynamically
manage resources across an LPAR cluster. An LPAR
cluster is the subset of the z/OS systems that are running
as LPARs on the same CEC in the same Parallel Sysplex.
Based on business goals, WLM can adjust processor
capacity, channel paths, and I/O requests across LPARs
without human intervention.
IRD assigns resources to the application; the application
is not assigned to the resource. This capability of a
system to dynamically direct resources to respond to the
needs of individual components within the system is an
evolutionary step. It enables the system to continuously
allocate resources for different applications, and this helps
to reduce the total cost of ownership of the system. IRD is
made up of three parts that work together to help optimize
the utilization of zSeries resources:
• LPAR CPU Management
• Dynamic Channel Path Management
• Channel Subsystem Priority Queuing
The z/OS Intelligent Resource Director (IRD) Planning
Wizard helps to plan your IRD implementation by asking
questions about your enterprise setup, and produces a
worksheet that describes the issues on each of your sys-
tems that you must consider before you can implement
IRD. The z/OS IRD Planning Wizard supports z/OS 1.2 and
higher.
zSeries Application Assist Processor
The IBM ^ zSeries Application Assist Processor
(zAAP), available on the z990 and z890 servers, is an
attractively priced specialized processing unit that pro-
vides an economical Java execution environment for z/OS
for customers who desire the traditional qualities of service
and the integration advantages of the zSeries platform.
When configured with general purpose processors within
logical partitions running z/OS, zAAPs may help increase
general purpose processor productivity and may contrib-
ute to lowering the overall cost of computing for z/OS Java
technology-based applications. zAAPs are designed to
operate asynchronously with the general processors to
execute Java programming under control of the IBM Java
Virtual Machine (JVM). This can help reduce the demands
and capacity requirements on general purpose proces-
sors, which may then be available for reallocation to other
zSeries workloads.
The IBM JVM processing cycles can be executed on the
configured zAAPs with no anticipated modifications to
the Java application(s). Execution of the JVM processing
cycles on a zAAP is a function of the Software Developer’s
Kit (SDK) 1.4.1 for zSeries, z/OS 1.6, and the Processor
Resource/Systems Manager (PR/SM).
z/OS Scalability
z/OS is a highly scalable operating system that can sup-
port the integration of new applications with existing
mainframe applications and data. z/OS can scale up in a
single logical partition, or scale out in a Parallel Sysplex for
higher availability. With z/OS 1.6, up to 24 processors are
supported in a single logical partition on the z990 server.
In previous releases, z/OS supported up to 16 processors.
In a Parallel Sysplex, up to 32 z/OS images can be configured
in a single-image cluster, with access to shared data.
64-bit Support
z/OS scale is extended with support for 64-bit real and
virtual storage on zSeries servers, while continuing to sup-
port 24-bit and 31-bit applications.
The 64-bit real support is intended to eliminate expanded
storage, help eliminate paging, and may allow you to
consolidate your current systems into fewer LPARs or to
a single native image. z/OS V1.5 delivers 64-bit shared
memory support to allow middleware to share large
amounts of 64-bit virtual storage among multiple address
spaces. This is expected to provide a significant capacity
enhancement for relieving shared virtual storage constraints.
Applications written to 64-bit virtual storage have
significantly larger addressability to data. With z/OS
1.2, assembler programs can obtain virtual storage above
2 GB for storing and manipulating data. This 64-bit support
is used by DB2 V8 and other middleware. z/OS 1.6
includes C/C++ support for the development of 64-bit
applications, including debug and runtime support. In
addition, the Java SDK 1.4.1 is also available with 64-bit
support.
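The scale of the change can be illustrated with quick arithmetic (an illustration only; the 16 MB "line" and 2 GB "bar" are the traditional z/OS names for the 24-bit and 31-bit limits):

```python
# Addressable virtual storage at each addressing width.
LINE = 2 ** 24       # 16 MB "line": the 24-bit limit
BAR = 2 ** 31        # 2 GB "bar": the 31-bit limit
LIMIT_64 = 2 ** 64   # theoretical 64-bit limit

print(LINE // 2 ** 20, "MB")      # 16 MB
print(BAR // 2 ** 30, "GB")       # 2 GB
print(LIMIT_64 // 2 ** 60, "EB")  # 16 EB
# Storage obtained "above the bar" lives at virtual addresses >= 2 GB.
```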
Automation Support
z/OS Managed System Infrastructure for Operations (msys
for Operations) provides automation for single system and
sysplex operations to help simplify operations and improve
availability. msys for Operations plays an important role in
outage avoidance.
msys for Operations provides functions that control and
manage both hardware and software resources, making
fully automated solutions possible. The focus is on simplifying
operations by recognizing critical situations and reacting
to them quickly and precisely. This is achieved through
panel-driven operator dialogs and automated recovery
routines that run in the background.
Simplified Configuration
z/OS Managed System Infrastructure for Setup (msys
for Setup) is the strategic solution for product installation,
configuration and function enablement. msys for Setup
allows usage of consistent interfaces with wizard-like
configuration dialogs. In z/OS 1.4, the msys for Setup
Framework was enhanced to provide multi-user capability
and improved multisystem support.
The msys for Setup dialogs use defaults and best-practice
values whenever possible and derive low-level values
from answers to high-level questions. After the configuration
parameters have been specified, msys for Setup can
automatically update the system configuration directly. The
user can see in detail what the changes will be before they
are made.
Also, with z/OS 1.5 msys for Setup can use the IBM Directory
Server, OpenLDAP, on any IBM platform, including
OpenLDAP on z/OS UNIX System Services. This can simplify
the initialization of msys for Setup and can make the
Management Directory virtually transparent to the user.
The following functions can be configured using msys for
Setup: Parallel Sysplex clustering, TCP/IP, UNIX System
Services, Language Environment®, LDAP, RMF™, ISPF, FTP,
and DB2 UDB for z/OS V8.
System Services
z/OS Version 1 Release 6 base elements and components
Base Control Program (BCP)
JES2
ESCON Director Support
MICR/OCR Support
Bulk Data Transfer base
DFSMSdfp
EREP/MVS
High Level Assembler
ICKDSF
ISPF
TSO/E
3270 PC File Transfer Program
FFST™/ESA
TIOC
z/OS Version 1 Release 6 optional priced features
DFSMSdss™
DFSMShsm™
DFSMSrmm™
DFSMStvs
Bulk Data Transfer (BDT) File to File
Bulk Data Transfer SNA NJE
The backbone of the z/OS system is the Base Control Pro-
gram (BCP) with JES2 or JES3. These provide the essential
services that make z/OS the system of choice when work-
loads must be processed reliably, securely, with complete
data integrity and without interruption. The BCP includes the
I/O configuration program (IOCP), the Workload Manager
(WLM), systems management facilities (SMF), the z/OS
UNIX Systems Services kernel, and support for Unicode.
Sense and Respond with Workload Manager
Workload Manager (WLM) addresses the need for managing
mixed workload distribution, load balancing and the
distribution of computing resources to competing workloads.
It does this while providing fewer, simpler system externals.
Performance management goals are expressed in Service
Level Agreement terms. All this is done with a single policy
that can be used across the sysplex to provide a single control
point, eliminating the need to manage each individual
image.
Dynamic balancing of JES2 batch initiators across a sysplex
has been enhanced in z/OS 1.4 to provide better utilization
of processor resources. WLM is designed to check every
10 seconds to see if re-balancing is required. WLM is more
aggressive in reducing initiators on constrained systems and
starting new ones on less utilized systems, helping to ensure
that processors are more evenly utilized.
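The rebalancing behavior described above can be sketched as a simple loop body (a hypothetical simplification; real WLM decisions are driven by service-class goals and sampled utilization, and the names and thresholds here are ours):

```python
def rebalance_initiators(systems):
    """Move JES2 batch initiators from constrained systems to
    under-utilized ones. `systems` maps a system name to a dict
    with 'util' (CPU utilization, 0.0-1.0) and 'initiators'
    (active initiator count). Thresholds are hypothetical."""
    constrained = [s for s, v in systems.items() if v["util"] > 0.9]
    underused = [s for s, v in systems.items() if v["util"] < 0.5]
    moves = []
    for src in constrained:
        for dst in underused:
            if systems[src]["initiators"] > 1:   # keep at least one
                systems[src]["initiators"] -= 1
                systems[dst]["initiators"] += 1
                moves.append((src, dst))
    return moves

# Something like this would run on WLM's 10-second check interval.
sysplex = {"SYS1": {"util": 0.95, "initiators": 5},
           "SYS2": {"util": 0.30, "initiators": 2}}
print(rebalance_initiators(sysplex))  # [('SYS1', 'SYS2')]
```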
WLM Improvements for WebSphere
z/OS 1.5 can simplify WLM control for WebSphere.
Customers now have the choice to manually define
WebSphere application environments for WLM or to have
WebSphere define them as and when required.
DFSMS can automate and centralize storage manage-
ment based on the policies that your installation defines for
availability, performance, space, and security. With these
optional features enabled, you can take full advantage of
all the functions that DFSMS offers.
Performance block reporting for enclaves and multi-period
classes is designed to provide improved workload balancing
for middleware applications such as WebSphere.
WLM Enqueue Management establishes a new interface to
allow reporting of resource contention. The priority of the
task holding the enqueue can be increased to allow the
resource to be released more quickly.
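The enqueue promotion just described is essentially priority inheritance; a minimal model (hypothetical structures and names, not the actual WLM interface):

```python
def boost_enqueue_holder(tasks, resource):
    """Raise the priority of whichever task holds `resource` to the
    highest priority among the tasks waiting on it, so the holder
    runs sooner and frees the resource (priority inheritance).
    Each task is a dict with 'name', 'priority', 'holds', 'waits'."""
    holders = [t for t in tasks if resource in t["holds"]]
    waiters = [t for t in tasks if resource in t["waits"]]
    for holder in holders:
        top_waiting = max((w["priority"] for w in waiters),
                          default=holder["priority"])
        if top_waiting > holder["priority"]:
            holder["priority"] = top_waiting  # temporary boost
    return holders

tasks = [
    {"name": "BATCH1", "priority": 3, "holds": {"SYSDSN.PAYROLL"}, "waits": set()},
    {"name": "CICS1", "priority": 9, "holds": set(), "waits": {"SYSDSN.PAYROLL"}},
]
boost_enqueue_holder(tasks, "SYSDSN.PAYROLL")
print(tasks[0]["priority"])  # 9: the low-priority holder was promoted
```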
Data Management with DFSMS
DFSMS comprises a suite of related data and storage
management functions for the z/OS system. DFSMSdfp is
a base element of z/OS which performs the essential data,
storage and device management functions of z/OS. One
function of DFSMSdfp is the Storage Management Subsys-
tem (SMS). SMS helps automate and centralize the man-
agement of storage based on the customer’s policies for
availability, performance, space, and security. Using SMS,
the storage administrator defines policies that can automate
the management of storage and hardware devices.
These policies describe data allocation characteristics,
performance and availability goals, backup and retention
requirements, and storage requirements for the system.
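Conceptually this resembles the Automatic Class Selection (ACS) routines a storage administrator writes; a toy sketch (the class names and rules below are invented for illustration):

```python
def select_storage_class(dataset):
    """Pick a storage class for a new data set from its attributes,
    loosely mimicking an SMS ACS routine. `dataset` is a dict with
    'name' (data set name) and 'size_mb'. All rules are hypothetical."""
    name = dataset["name"]
    if name.startswith("PROD.DB2."):
        return "FASTRW"      # performance-critical production data
    if dataset["size_mb"] > 1000:
        return "LARGE"       # route big allocations to a large pool
    return "STANDARD"        # default service level

print(select_storage_class({"name": "PROD.DB2.TS1", "size_mb": 50}))   # FASTRW
print(select_storage_class({"name": "USER.ARCHIVE", "size_mb": 2048})) # LARGE
```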
DFSMShsm can perform space management functions
along with disaster recovery functions such as Advanced
Copy Services and aggregate backup and recovery sup-
port (ABARS). DFSMSdss can provide backup, restore
and copy services. DFSMSrmm provides tape manage-
ment services. Finally, DFSMStvs can provide coordinated
updates to multiple VSAM data sets at a transaction level,
providing high availability for CICS/VSAM by allowing con-
current access by batch applications.
z/OS 1.5 can help significantly enhance application
backup with enhancements to DFSMShsm to utilize
volume-level fast replication. The fast backup is designed
to exploit the FlashCopy and virtual concurrent copy capabilities
of the IBM TotalStorage Enterprise Storage Server and
the IBM RAMAC Virtual Array (RVA), respectively. DFSMShsm
Fast Replication in z/OS 1.5 is also intended to provide a
fast, easy-to-use point-in-time backup and recovery solution
specifically designed for DB2 Universal Database
(UDB) for z/OS V8. It is designed to allow fast, nondisruptive
backups to be taken at appropriate events, when there
is minimum activity at the application level or when a fast
point-in-time backup is desired.
The other elements of DFSMS – DFSMSdss, DFSMShsm,
DFSMSrmm, and DFSMStvs – complement DFSMSdfp to
provide a comprehensive approach to data and storage
management in a system-managed storage environment.

System-Managed Coupling Facility (CF) Structure Duplexing
is designed to significantly enhance Parallel Sysplex
availability. It can provide a robust failure recovery capability
via CF structure redundancy, and it can enhance
Parallel Sysplex ease of use by helping to reduce the
complexity of CF structure recovery. These benefits can
be achieved by creating a duplicate (or duplexed) copy
of a CF structure and then maintaining the two structure
instances in a synchronized state during normal CF
operation. In the event of a CF-related failure (or even a
planned outage of a CF), failover to the remaining copy of
the duplexed structures can be initiated and quickly completed,
transparent to the CF structure user and without
manual intervention.
• In z/OS 1.2, JES2 and JES3 allow an installation to have
up to 999,999 jobs managed at any single point in time.
In addition, both provide the installation the ability to
obtain (spin off) their JESlog data sets prior to job completion.
• The JES2 Health Monitor, in z/OS 1.4, provides improved
diagnostics. Even when JES2 is not responding to commands,
the JES2 monitor, running in a separate address
space, will be able to provide information about JES2’s
status. JES2 also provides enhanced recovery from corrupted
checkpoint data to prevent multisystem outages.
• In z/OS 1.4, JES3 provides additional tolerance for initialization
errors and the MAINPROC refresh function,
which enables the dynamic addition of systems to the
sysplex.
System Management Services
z/OS Version 1 Release 6 base elements
HCD
SMP/E
Managed System Infrastructure for Setup
Managed System Infrastructure for Operations
z/OS Version 1 Release 6 optional priced features
RMF
SDSF
HCM
z/OS provides systems management functions and fea-
tures to manage not only host resources, but also distrib-
uted systems resources. These capabilities have a long,
successful history of usage by S/390 customers. z/OS has
enhanced many of these systems management functions
and features to provide more robust control and automa-
tion of the basic processes of z/OS.
Console Enhancements
z/OS 1.5 includes console enhancements which are
designed to improve system availability by enhancing
the capacity and reliability of message delivery. Major
changes to the message production and consumption
flow can help reduce the possibility of bottlenecks which
can cause a backlog of undelivered messages. These
enhancements are available with z/OS 1.4 as an optional
no-charge Console Enhancements Feature.

msys for Setup has been enhanced in z/OS 1.4 to allow
multiple users to log on and work concurrently from different
workstations. Furthermore, as part of the user enrollment
process, the msys for Setup user administrator can
control which msys for Setup workplace functions a user
can access. The graphical user interface (the msys for Setup
workplace) has been redesigned and is now easier to
learn and use. These valuable ease-of-use enhancements
make working with msys for Setup more intuitive.
RMF
RMF is IBM’s strategic product for z/OS performance
measurement and management. It is the base product for
collecting performance data for z/OS and sysplex environments
and for monitoring systems’ performance behavior, and it
allows customers to optimally tune and configure their systems
according to business needs. RMF provides its benefits
through the operation of Postprocessing and Online
Monitoring functions. These are based on a set of data
gatherers and data services which enable access to all
performance-relevant data in a z/OS environment. The four
components are RMF Data Gatherer, RMF Sysplex Data
Services, Historical Data Reporting and Online Monitoring
with RMF.
Enhancements
• RMF can show the contention for Cryptographic Coprocessors, including a description of which workloads are
using or are delayed in access to the cryptographic
coprocessors
• Application State Recording, a new feature of z/OS 1.4
provides more granular performance reporting for middleware such as WebSphere
• In z/OS 1.5, RMF Monitor II and Monitor III performance
data is now RACF protected
z/OS msys for Operations is a base element in z/OS 1.2
that incorporates automation technology into z/OS. It pro-
vides self-healing attributes for some critical system and
sysplex resources and can simplify the day-to-day opera-
tion of a single z/OS image or of a Parallel Sysplex cluster.
• msys for Operations enhancements in z/OS 1.3 include
automation to handle enqueue contention and auxiliary
storage shortages. msys for Operations can also interface with the Hardware Management Console (HMC) to
provide hardware functions such as deactivating LPARs.
SMP/E
SMP/E provides the ability to install software products
and service either from DASD or tape, or directly from a
network source, such as the Internet. By installing directly
from a network source, SMP/E is enabling a more seamless
integration of electronic software delivery and installation.
Advanced System Automation
The unique and rich functions of IBM Tivoli System Auto-
mation for OS/390 (SA OS/390) Version 2.2 (separately
orderable) can ease z/OS management, reduce costs, and
increase application availability. SA OS/390 automates I/O,
processor, and system operations, and includes “canned”
automation for IMS, CICS, Tivoli OPC, and DB2. Its focus
is on Parallel Sysplex automation, including multi- and
single-system configurations, and on integration with end-
to-end Tivoli enterprise solutions. With the new patented
manager/agent design, it is now possible to automate
applications distributed over a sysplex by virtually remov-
ing system boundaries for automation.
System Services benefits can include:
• Increased system availability
• Improved productivity of system programmers
• A more consistent approach for configuring z/OS components
or products
• System setup and automation using best practices,
which can greatly improve availability

Security Services
z/OS Version 1 Release 6 base elements and components
include:
Integrated Security Services
- Public Key Infrastructure Services
- DCE Security Server
- Open Cryptographic Enhanced Plug-ins
- Firewall Technologies
- LDAP Services
- Network Authentication Service
- Enterprise Identity Mapping
Cryptographic Services
- Integrated Cryptographic Service Facility (ICSF)
- System SSL
- Open Cryptographic Service Facility
z/OS Version 1 Release 6 optional priced features
Security Server:
- RACF
z/OS Version 1 Release 6 optional no-charge features
z/OS Security Level 3, which includes:
- LDAP Security Level 3
- Network Authentication Service Level 3
- System SSL Security Level 3
- Open Cryptographic Services Facility Security Level 3

z/OS extends its robust mainframe security features to
address the demands of on demand enterprises. Technologies
such as LDAP, Secure Sockets Layer (SSL),
Kerberos V5, Public Key Infrastructure, and exploitation of
zSeries cryptographic features are available in z/OS.
RACF
Resource Access Control Facility (RACF) provides the
functions of authentication and access control for z/OS
resources and data, including the ability to control access
to DB2 objects using RACF profiles. Using an entity known
as the RACF user ID, RACF can identify users requesting
access to the system. The RACF user password (or a valid
substitute, such as a RACF PassTicket or a digital certificate)
authenticates the RACF user ID.

Once a user is authenticated, RACF and the resource
managers control the interaction between that user
and the objects it tries to gain access to. These objects
include terminals and other objects that you define. RACF
supports flexibility in auditing access attempts and changes
to security controls. To audit security-relevant events, you
can use the RACF system management unload utility and
a variety of reporting tools.
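The authenticate-then-authorize flow can be modeled with a small profile lookup (a hypothetical sketch; real RACF uses generic profiles, group authorities and the SAF interface, none of which appear here):

```python
LEVELS = ["NONE", "READ", "UPDATE", "ALTER"]  # ordered access authorities

def check_access(profiles, userid, resource, requested):
    """Return True if `userid` holds at least `requested` authority
    to `resource`. `profiles` maps resource -> {userid: authority},
    with '*' as the universal-access entry (hypothetical model)."""
    acl = profiles.get(resource, {})
    granted = acl.get(userid, acl.get("*", "NONE"))
    return LEVELS.index(granted) >= LEVELS.index(requested)

profiles = {"PROD.PAYROLL": {"ALICE": "UPDATE", "*": "NONE"}}
print(check_access(profiles, "ALICE", "PROD.PAYROLL", "READ"))  # True
print(check_access(profiles, "BOB", "PROD.PAYROLL", "READ"))    # False
```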
With one command, a security administrator can update
remote RACF databases without logging on to remote sys-
tems. Throughout the enterprise, RACF commands can be
sent automatically to synchronize multiple databases. In
addition, RACF can automatically propagate RACF data-
base updates made by applications. With RACF, users can
keep passwords synchronized for specific user IDs. When
you change one password, RACF can change passwords
for your user ID on different systems and for several user
IDs on the same system. Also, passwords can be changed
automatically for the same user ID on different systems.
This way, several RACF databases can be kept synchro-
nized with the same password information.
RACF enhancements:
• Digital certificates can be automatically authenticated
without administrator action.
• Administrative enhancements enable definition of profiles
granting partial authority. Handling of new passwords and
removal of class authority are simplified.
• On demand applications require a way to associate
more users under a RACF Group definition, so RACF
allows the creation of a new kind of Group that can contain
an unlimited number of users.
• RACF now allows you to perform RACF installation class
updates without an IPL, which can help improve availability.
• RACF facilitates enterprise password synchronization
through RACF password enveloping and notification of
password changes using z/OS LDAP.
• Improved user accountability through RACF’s enforcement
of unique z/OS UNIX UIDs and GIDs.
• Improved access control flexibility and granularity for
z/OS UNIX files with access control lists.
• Multilevel security support.
Multilevel Security
z/OS 1.5 is the first and only IBM operating system to provide
multilevel security. This technology can help improve
the way government agencies and other organizations
share critical classified information. Combined with IBM’s
DB2 UDB for z/OS Version 8, z/OS provides multilevel
security on the zSeries mainframe to help meet the stringent
security requirements of government agencies and
financial institutions, and can help open up new hosting
opportunities. Multilevel security technology allows IT
administrators to give users access to information based
on their need to know, or clearance level. It is designed to
prevent individuals from accessing unauthorized information
and to prevent individuals from declassifying information.
With multilevel security support in IBM’s z/OS 1.5 and DB2
V8, customers can enable a single repository of data to
be managed at the row level and accessed by individuals
based on their need to know.
SSL
Secure Sockets Layer (SSL) is a public key cryptography-based
extension to TCP/IP networking which helps to
ensure private communications between parties on the
Internet. z/OS provides fast and highly secure SSL sup-
port, with increased performance when coupled with
zSeries server cryptographic capabilities.
z/OS SSL support includes the ability for applications to
create multiple SSL environments within a single process.
An application can now modify environment attributes
without terminating any SSL sessions already underway.
• IPv6 Support: This support allows System SSL to be
used in an IPv6 network configuration. It also enables
System SSL to support both IPv4 and IPv6 Internet protocol
addresses.
• Performance is improved with CRL Caching: Today,
SSL supports certificate revocation lists (CRLs) stored
in an LDAP server. Each time a certificate needs to be
validated, a request is made to the LDAP server to get
the list of CRLs. CRL Caching enables applications to
request that the retrieved list of CRLs be cached for a
defined length of time.
• Support for the AES Symmetric Cipher for SSL V3 and
TLS Connections: System SSL supports the Advanced
Encryption Standard (AES), which provides data encryption using 128-bit or 256-bit keys for SSL V3.0 and TLS
V1.0 connections.
• Support for DSS (Digital Signature Standard) Certificates:
System SSL has been enhanced to support Digital
Signature Standard certificates defined by the FIPS
(Federal Information Processing Standard) 186-1 Standard.
• System SSL support for RSA Private Keys Stored in ICSF:
With z/OS 1.4, support is introduced that is designed to allow
a certificate’s private key to reside in ICSF, thus lifting
a restriction where the private key had to reside in the
RACF database.
• Failover LDAP provides greater availability: You can
now specify a list of Security Server LDAP servers to be
used for storing certificate revocation lists (CRLs). When
certificate validation is being performed, this list will be
used to determine which LDAP server to connect to for
the CRL information.
• Simplified administration with the ability to export
and import certificate chains using PKCS#7 format files.
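The CRL caching idea above is a plain time-to-live cache; a sketch (hypothetical interface — a real implementation would retrieve the CRL from the LDAP server on a miss):

```python
import time

class CRLCache:
    """Cache retrieved certificate revocation lists for a defined
    length of time. `fetch` is a callable mapping an issuer name to
    its CRL; `clock` is injectable for testing (hypothetical design)."""
    def __init__(self, ttl_seconds, fetch, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.fetch = fetch
        self.clock = clock
        self._entries = {}   # issuer -> (expires_at, crl)

    def get(self, issuer):
        now = self.clock()
        entry = self._entries.get(issuer)
        if entry and entry[0] > now:     # cached and still fresh
            return entry[1]
        crl = self.fetch(issuer)         # miss or expired: re-retrieve
        self._entries[issuer] = (now + self.ttl, crl)
        return crl
```

Repeated validations within the TTL then reuse the cached list instead of issuing a new LDAP request.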
LDAP
z/OS provides Lightweight Directory Access Protocol
(LDAP) services. Client access to information in multiple
directories is supported with the LDAP protocol. The LDAP
server supports thousands of concurrent clients, increasing the
maximum number of concurrently connected clients by an
order of magnitude.
Enhancements
• Mandatory Authentication Methods (required by IETF
RFC 2829) are supported in z/OS 1.4: The CRAM-MD5
and DIGEST-MD5 authentication methods have been
added. These methods avoid flowing the user’s password
over the connection to the server. The LDAP Server, the
C/C++ APIs, and the utilities are updated with this support.
Interoperability is improved for any applications
that make use of these methods.
• TLS: z/OS LDAP now provides support for TLS (Transport
Layer Security) as defined in IETF RFC 2830 as an
alternative to SSL support. It also provides support, via
an LDAP extended operation, that allows applications to
selectively activate TLS for certain LDAP operations at
the application’s discretion.
• Support for IPv6 and 64-bit addressing
• Peer-to-peer replication provides failover support for
server availability. If a primary master server fails, there
is now a backup master to which LDAP operations can
be directed.
• Large group support helps improve LDAP server performance when maintaining large access groups containing many members.
ICSF
Integrated Cryptographic Service Facility (ICSF) is a part
of z/OS which provides cryptographic functions for data
security, data integrity, personal identification, digital
signatures, and the management of cryptographic keys.
These functions are provided via APIs intended to deliver
the highly scalable and available security features of z/OS
and the zSeries servers. Together with cryptography fea-
tures of zSeries servers, z/OS is designed to provide high
performance SSL, which can benefi t applications that use
System SSL, such as the z/OS HTTP Server and Web-
Sphere, TN3270, and CICS Transaction Gateway server.
ICSF provides support for the z990 and z890 PCIX Cryp-
tographic Coprocessor (PCIXCC), a replacement for the
PCICC and the CMOS Cryptographic Coprocessor Facility
that were found on the z900 and z800. All of the equivalent
PCICC functions offered on the PCIXCC are expected to
be implemented with higher performance. In addition,
PCIXCC implements the functions on the CMOS Crypto-
graphic Coprocessor Facility used by known applications.
PCIXCC supports secure cryptographic functions, use of
secure encrypted key values and user-defined extensions.
PKI Services
PKI Services is a z/OS component that provides a complete
Certificate Authority (CA) package for full certificate
life cycle management. Customers can be their own Certificate
Authority, with the scale and availability provided by
z/OS. This can result in significant savings over third-party
options.
• User request driven via customizable Web pages for
browser or server certificates
• Automatic or administrator approval process administered
via the same Web interface
• End user/administrator revocation process
• Certificate validation service for z/OS applications
Firewall
• Firewall Technologies provide sysplex-wide Security
Association Support: This function is designed to enable
VPN (virtual private network) security associations to
be dynamically reestablished on a backup processor in
a sysplex when a Dynamic Virtual IP Address (DVIPA)
takeover occurs. When the Dynamic Virtual IP Address
give-back occurs, the security association is designed
to be reestablished on the original processor in the
sysplex. When used in conjunction with z/OS Communications Server’s TCP/IP DVIPA takeover/give-back capability, this function provides customers with improved
availability of IPSec security associations.
Network Authentication Service
• Network Authentication Service provides authentication,
delegation and data confidentiality services which
are interoperable with other industry implementations
based on the MIT Kerberos V5 reference implementation. Network Authentication Service, administered with
RACF commands, supports both the native Kerberos
API functions as well as the GSS-API Kerberos security
mechanism and does not require DCE.
• IPv6 supported by Kerberos with z/OS 1.4 for improved
network security scalability.
• Kerberos in z/OS 1.4 provides an alternative database to
RACF by offering support for its own registry database
using the UNIX System Services NDBM (New Database
Manager) support. NDBM provides full Kerberos administration support.
C/C++
z/OS provides a solid infrastructure on which you can build
new applications, extend existing applications, and run
existing transactional and batch processes.
• Extra Performance Linkage (XPLINK) is provided in z/OS
1.2. A C or C++ application has overhead associated
with each function call. The more highly functionalized a
program, the more overhead. XPLINK helps cut down on
the overhead associated with these function calls and
can improve the performance of these applications. In
order to exploit this high-performance linkage,
customers must recompile their C and C++ programs
under the new XPLINK environment. The new IBM SDK
for z/OS Java 2 Technology Edition V1.4 has been
rewritten to take advantage of this unique z/OS function,
which can result in performance improvements.
• Enhanced ASCII support provides the ability to produce
code that contains ASCII string literals and character
constants. This allows ASCII-dependent logic to continue
working on ASCII platforms, thus eliminating the
need to find all such places in the code and convert
them to EBCDIC when porting UNIX applications to z/OS.
• Performance enhancements: A new higher optimization level, OPTIMIZE(3), provides the compiler’s highest
and most aggressive level of optimization. OPTIMIZE(3)
is suggested when the desire for run-time improvement outweighs the concern for minimizing compilation
resources.
• DB2 preprocessor integration: The C/C++ compiler has
been enhanced to integrate the functionality of the DB2
precompiler. A new SQL compiler option enables the
compiler to process embedded SQL statements.
Language Environment
Language Environment is a base element of z/OS and
provides the run-time environment for programs generated
with C, C++, COBOL, FORTRAN, and PL/I.
C/C++ IBM Open Class® Library
As previously announced in Software Announcement
203-131, dated May 13, 2003, the application development
support (that is, the headers, source, sidedecks, objects,
and samples from the Application Support Class and
Collection Class libraries) is withdrawn from the C/C++
IBM Open Class Library (IOC) in z/OS 1.5. Applications
that use these IOC libraries cannot be compiled or linked
using z/OS 1.5. Run-time support for the execution of
existing applications that use IOC libraries is provided with
z/OS 1.5, but is planned to be removed in a future release.
• Continue to take advantage of:
– Common cross platform programming Security APIs
within Java framework
– Java Record Input/Output (JRIO) APIs to provide
record-oriented access to VSAM datasets, System
catalogs, and PDS directory
– Persistent reusable JVM technology for CICS, IMS,
and DB2
• Leverage traditional zSeries software and server benefits:
scalability, reliability, availability, performance and
serviceability
z/OS 64-bit C/C++ environment: z/OS 1.6 delivers the
capability to exploit 64-bit virtual storage in developing and
deploying new applications that require significantly
larger addressability of data. This capability is provided
with enhanced UNIX System Services, the 64-bit Language
Environment (LE) run-time developed with the C/C++ compiler
64-bit support, and the Program Management Binder
64-bit support. The availability of this support completes
the major steps of the z/OS 64-bit virtual roadmap.
Java
SDK for z/OS, Java 2 Technology Edition, 1.4 provides a full-function Software Development Kit (SDK) at the Java 2 technology level, compliant with the Sun SDK 1.4 APIs. With SDK for z/OS, Java 2 Technology Edition, V1.4, customers can:
• Test and deploy Java applications at the Java 2 SDK 1.4 API level
• Continue the "write once, run anywhere" Java paradigm at the Java 2 API level
• Take advantage of the new Java 2 function, including XML and Web services
IBM 64-bit SDK for z/OS, Java 2 Technology Edition, 1.4 (5655-I56) provides a full-function Software Development Kit (SDK) at the Java 2 technology level, compliant with the Sun SDK 1.4 APIs. With 64-bit SDK for z/OS, Java 2 Technology Edition, 1.4, you can run Java applications that were previously storage constrained.
The Java SDK for z/OS is available via download from the IBM eServer zSeries Java Web site and by tape from IBM Software Delivery and Fulfillment (SDF) in SMP/E format. For additional information about zSeries and Java products, go to: ibm.com/servers/eserver/zseries/software/java/.
Unicode
z/OS provides Unicode Callable System Services – code page and case conversions from EBCDIC to Unicode:
• DB2 V7 is the first exploiter
• A new hardware instruction on zSeries servers has been implemented to provide superior performance
• Unicode Normalization Services allow programmers to compose or decompose characters from another code page and to apply normalization forms so that equivalent character sequences have the same meaning.
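A sketch of both services, assuming Python's standard codecs and unicodedata module as stand-ins for the z/OS conversion tables and Normalization Services:

```python
import unicodedata

# Code page conversion: EBCDIC bytes decoded to Unicode (cp500 is an
# EBCDIC codec in Python's standard library, standing in for the z/OS
# conversion tables).
ebcdic_bytes = b'\xc8\x85\x93\x93\x96'   # "Hello" encoded in EBCDIC
assert ebcdic_bytes.decode('cp500') == 'Hello'

# Normalization: composed and decomposed forms of "e with acute" differ
# as code point sequences but compare equal after a normalization form.
composed = '\u00e9'      # one code point
decomposed = 'e\u0301'   # e followed by a combining acute accent
assert composed != decomposed
assert unicodedata.normalize('NFC', decomposed) == composed
```

The same two ideas – table-driven code page conversion and normalization to a canonical form – are what the callable services expose to z/OS programs.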
REXX Functions
z/OS 1.4 extends the REXX language on z/OS when used in a UNIX System Services zSeries REXX environment. It includes functions for standard REXX I/O and for easy access to some common file services and environment variables.
In case of a failure of the primary IP stack, VIPA Takeover, introduced in OS/390 2.8, can support movement to a backup IP stack on a different server in a Parallel Sysplex cluster. Dynamic VIPA Takeover enhances the initial 2.8 functions by providing VIPA takeback support, which can allow the movement of workload back from the alternate to the primary IP stack.
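The takeover and takeback behavior can be sketched as a toy model (an illustration only, not z/OS code):

```python
# Toy model of Dynamic VIPA takeover and takeback: the virtual IP is
# advertised by the primary stack while it is up, moves to the backup
# on failure, and moves back on recovery.
def vipa_owner(stacks):
    """Return the name of the stack that should advertise the VIPA."""
    for name in ("primary", "backup"):   # the primary is always preferred
        if stacks.get(name):
            return name
    return None                          # no stack available to serve the VIPA

stacks = {"primary": True, "backup": True}
assert vipa_owner(stacks) == "primary"   # normal operation
stacks["primary"] = False
assert vipa_owner(stacks) == "backup"    # takeover after a failure
stacks["primary"] = True
assert vipa_owner(stacks) == "primary"   # takeback once the primary returns
```

The key property is that clients keep one IP address throughout; only the stack advertising it changes.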
Communication Services
z/OS Version 1 Release 6 base elements
z/OS Communications Server (Multiprotocol/HPR Services, TCP/IP
Services, SNA/APPN Services)
OSA Support Facility
z/OS Version 1 Release 6 optional no charge features
z/OS Communications Server Security Level 3
The z/OS base includes z/OS Communications Server, which enables world-class TCP/IP and SNA networking support, including enterprise-class dependability, performance and scalability, highly secure connectivity, support for multiple protocols, and efficient use of networking assets.
z/OS can provide near continuous availability for TCP/IP
applications and their users with two key features in z/OS:
Sysplex Distributor and Dynamic VIPA.
Dynamic Virtual IP Address Takeover
VIPA represents an IP address that is not tied to a specific hardware adapter address. The benefit can be that if an adapter fails, the IP protocol can find an alternate path to the same software, be it the TCP/IP services on a zSeries server or an application.
With Sysplex-Wide Security Associations (SWSA) in z/OS 1.4, IPSec-protected workloads are now expected to realize all the benefits derived from workload balancing, such as optimal routing of new work to the target system and server application based on QoS and WLM advice, increased availability by routing around failed components, and increased flexibility in adding additional workload in a nondisruptive manner.
Sysplex Distributor
Introduced in OS/390 2.10, Sysplex Distributor is a software-only means of distributing IP workload across a Parallel Sysplex cluster. Client connections appear to be connected to a single IP address, yet the connections are routed to z/OS images on different zSeries 800/900 or S/390 servers. In addition to load balancing, Sysplex Distributor simplifies the task of moving applications within a Parallel Sysplex environment.
In z/OS we have taken the functions provided by the Cisco MNLB Workload Agent and Systems Manager, and integrated them into Enhanced Sysplex Distributor. This can eliminate the need for separate Cisco LocalDirector machines in the network and the need for MNLB workload agents to be run on the zSeries servers. It can also improve performance, while allowing the Sysplex Distributor to decide where to route new work, based on priority supplied by WLM, the Service Policy Agent, and the TCP/IP stack status.
Enterprise Identity Mapping (EIM) defines a user's security context that is consistent throughout an enterprise, regardless of the User ID used and regardless of which platform the user is accessing. RACF commands are enhanced to allow a security administrator to define EIM information for EIM applications to use. The EIM information consists of the LDAP host name where the EIM domain resides, the EIM domain name, and the bind distinguished name and password an application may use to establish a connection with the domain.
Intrusion Detection Services (IDS)
Introduced in z/OS 1.2 and enhanced in 1.5, IDS enables the detection of attacks on the TCP/IP stack and the application of defensive mechanisms on the z/OS server. The focus of IDS is self-protection. IDS can be used alone or in combination with an external network-based Intrusion Detection System. IDS is integrated into the z/OS Communications Server stack.
IPv6
• IPv6 (Internet Protocol version 6) is supported in z/OS and can dramatically increase network addressability in support of larger internal and multi-enterprise networks. z/OS provides compatibility with existing network addressing and mixed-mode addressing with IPv4.
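One way to picture a single stack serving both protocols is a dual-stack socket. This sketch (standard Berkeley sockets in Python, nothing z/OS-specific) disables IPV6_V6ONLY so one IPv6 listener can also accept IPv4 clients:

```python
import socket

# Sketch: one socket handling both IPv4 and IPv6 clients. With
# IPV6_V6ONLY disabled, an IPv6 listener also accepts IPv4 connections
# (presented as IPv4-mapped IPv6 addresses): one stack, two protocols.
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
assert s.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY) == 0
s.close()
```

Applications written this way do not need separate listeners per protocol, which is the practical benefit of a combined IPv4/IPv6 stack.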
HiperSockets
• HiperSockets, introduced in z/OS 1.2, provides very high-speed, low-latency TCP/IP data communications across LPARs within the same zSeries server. HiperSockets acts like a TCP/IP network within the server.
• HiperSockets Accelerator provides an "accelerated routing path" which concentrates traffic between OSA-Express external network connections and HiperSockets-connected LPARs. This function can improve performance, simplify configuration, and increase scalability while lowering cost by reducing the number of networking adapters and associated I/O cage slots required for large numbers of virtual servers.
Communications Services highlights:
• A single high-performance TCP/IP stack providing support for both IPv4 and IPv6 applications
• High Performance Native Sockets (HPNS) for TCP/IP applications
• Support for the latest security protocols – SSL and TLS
• Multinode Persistent Sessions for SNA applications running in a Parallel Sysplex environment
• Simple Network Time Protocol (SNTP) support for client/server time synchronization
• New configuration support for Enterprise Extender (EE) XCA major nodes allows activation and inactivation at the GROUP level. In addition, the EE XCA major node now supports configuration updates while the major node is active. This provides flexibility and can help improve availability by allowing updates to occur without necessarily affecting existing sessions.
• Alternate route selection for SNA and Enterprise Extender (EE): VTAM® allows alternate route selection for sessions using Enterprise Extender (EE) connection networks when connectivity fails due to temporary conditions in the underlying IP network. This can help improve availability for sessions using EE connection networks.
• Separate address space for TN3270 servers
• TCP/IP Sysplex health monitoring
Network Services benefits can include:
• Function for on demand Internet and intranet applications
• Multivendor, multiplatform connectivity
• Mainframe class of service over IP networks
• Dramatic improvements in TCP/IP performance, including optimization of the TCP/IP stack and inclusion of a number of performance-related capabilities
z/OS UNIX
z/OS Version 1 Release 6 base elements
z/OS UNIX
z/OS UNIX is an integral element of z/OS and is a key element of the zSeries open and distributed computing strategy. Many middleware and application products that run on z/OS use z/OS UNIX.
z/OS contains the UNIX application services (shell, utilities and debugger) and the UNIX System Services (kernel and runtime environment). The shell and utilities provide the standard command interface familiar to interactive UNIX users. z/OS includes all of the commands and utilities specified in the X/Open Company's Single UNIX Specification, also known as UNIX 95 or XPG4.2. The z/OS UNIX Services Debugger provides a set of commands that allow a C language program to be debugged interactively. The command set is familiar to many UNIX users. With Language Environment, z/OS supports industry standards for C programming, shell and utilities, client/server applications, and the majority of the standards for thread management and the X/Open Single UNIX Specification. The combination of open computing and z/OS allows the transparent exchange of data, easy portability of applications, cross-network management of data and applications, and the exploitation of traditional zSeries system strengths.
• Web application and UNIX C program performance improvements
• Improved z/OS UNIX setup
• Multiprocess/Multiuser Kernel Support
• Performance enhancements include recompiled and optimized functions within the kernel, shell and utilities; addition of Socket Functions; use of Communication Storage Management buffer transfer instead of data movement; and an optimized NFS Logical File System.
• Multiprocess/Multiuser support can allow faster process creation for customers and reduced storage usage for servers.
• Semaphores without contention, using the hardware Perform Locked Operation (PLO) instruction.
• Shared memory (captured storage) can reduce real storage when sharing large amounts of virtual storage.
• UNIX System Services and the UNIX debugger add support for IEEE floating point.
• UNIX System Services provides greater security granularity for HFS and zFS file systems with support for Access Control Lists (ACLs).
• More file descriptors per UNIX process are provided in z/OS 1.6, which supports up to 64K per process.
• Additional support for 64-bit programming, conditional variables in shared memory, the Euro symbol, and superkill support, along with enhancements to the automount daemon and Unicode.
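The shared-memory idea in the list above can be sketched with Python's multiprocessing.shared_memory (an analogy for captured storage, not the z/OS mechanism): two handles attach to one region, so no second real copy of the data is needed.

```python
from multiprocessing import shared_memory

# Sketch: two handles attach to the same shared-memory region, so data
# written through one is visible through the other without duplicating
# the underlying storage.
region = shared_memory.SharedMemory(create=True, size=16)
try:
    view = shared_memory.SharedMemory(name=region.name)  # second attachment
    region.buf[0] = 42
    assert view.buf[0] == 42   # both handles see the same storage
    view.close()
finally:
    region.close()
    region.unlink()
```

In both cases the saving comes from many users mapping one region of real storage rather than each holding a private copy.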
UNIX System Services benefits can include:
• Development and execution of UNIX applications – z/OS is a UNIX platform
• Increased application portfolio on z/OS, as Independent Software Vendors can use USS to port their applications to z/OS
• Portability of applications to and from other platforms
• Use of UNIX development skills in a z/OS environment
• Consolidation of multiple UNIX systems
• Scalability for high-growth UNIX applications
• Parallel Sysplex support to share UNIX file systems, which benefits Web server applications and others that access the hierarchical file system. This support can make data and information residing in the HFS available to your customers at any time, no matter where the applications are running in the Parallel Sysplex environment.
z/OS UNIX supports hierarchical file systems that use UNIX APIs. Applications can work with data in both UNIX hierarchical file systems and traditional zSeries data sets.
zSeries File System (zFS)
The zSeries File System (zFS) is the strategic UNIX file system for z/OS and complements the z/OS Hierarchical File System (HFS). zFS uses the same APIs as HFS.
zFS can provide the following benefits over HFS:
• Improved performance
• Additional function
– Disk space can be shared between file systems in the same data set
– File system quota (maximum file system size). With zFS the file system quota can be increased with a simple administrative command
– Improved failure recovery. zFS performs asynchronous writes to disk and does not wait for a sync interval to begin writes.
zFS and zFS-related administration, system management, performance, configuration support, and scalability have been further enhanced in z/OS 1.4:
Distributed Computing Services
z/OS Version 1 Release 6 base elements and components
Network File System (NFS)
DCE Base Services
Distributed File Service (including zFS and SMB)
• Dynamic reconfiguration for file system configuration options.
• Dynamic use of secondary allocation for a zFS aggregate (data set) or file system.
• Improvements in the UNIX System Services automount support for zFS.
• Addition of ISHELL support for zFS.
• Ability to perform I/O operations in parallel for a zFS
aggregate that spans multiple DASD volumes. This is
designed to provide improved performance when using
multi-DASD volume aggregates.
• Support for 64-bit user virtual buffer addresses.
The Hierarchical File System (HFS) functionality has been stabilized. HFS is expected to continue shipping as part of the operating system and will be supported in accordance with the terms of a customer's applicable support agreement. IBM intends to continue enhancing zFS functionality, including RAS and performance capabilities, in future z/OS releases. All requirements for UNIX file services are expected to be addressed in the context of zFS only.
SMB support has been further enhanced in z/OS 1.4 by:
• Simplified user administration with Windows Domain ID mapping
• Performance improvements with RFS and large PDS or PDS/E file systems
• Exploitation of zFS performance
Network File System (NFS) acts as a file server to workstations, personal computers, or other authorized systems in a TCP/IP network. It also provides a z/OS client. The remote data sets or files are mounted from the mainframe (z/OS) to appear as local directories and files on the client system. NFS also provides access to the Hierarchical File System (HFS) and the zSeries File System (zFS).
Distributed File Service (DFS) Server Message Block (SMB)
Microsoft® Windows® networking-compatible file and print serving is available in z/OS with Native SMB File and Print Serving for Windows Clients. SMB file serving enables z/OS to share HFS, zFS, sequential files and Record File Systems (RFS) such as PDS, PDS/E or VSAM data sets with Windows workstations. SMB can automatically handle the conversion between ASCII and EBCDIC, making full use of USS file tagging and Access Control Lists (ACLs) support. This enhances the ability to develop applications on Windows and deploy them on z/OS. z/OS also supports printing of SMB files without requiring that code be installed on the clients and without requiring unique printer setup on the workstations.
Internet Services
z/OS Version 1 Release 6 base elements
IBM HTTP Server
z/OS Version 1 Release 6 optional no charge features
IBM HTTP Server North America Secure
The IBM HTTP Server offers HTTP 1.1 compliance, support for Java technology, and the ability to manage Internet processing through the Workload Manager (WLM). Benefits can include:
• Utilization of large storage capacity
• Single point of entry and control
• Consolidation of multiple Web sites
• Exploitation of z/OS WLM
Print Services
z/OS Version 1 Release 6 optional priced features
Infoprint® Server
– IP PrintWay™
– NetSpool™
– z/OS Print Interface
Infoprint Server provides a reliable, highly available, secure and scalable foundation for a customer's enterprise printing infrastructure. Infoprint Server and its companion product, Infoprint Server Transforms, include a print interface, printer inventory, application output capture program, and print drivers and management tools that let you manage any print job to any printer defined to Infoprint Server, including electronic distribution for presentation over the Web.
Infoprint Central
Infoprint Central is a Web-based GUI for managing print jobs and printers throughout the enterprise from anywhere in the enterprise using a Web browser. Intended primarily for help desk operators, it lets users query the status of jobs and printers, see job and printer messages, stop and start printers, move jobs from one printer to another, cancel or hold jobs, and perform many other functions. Infoprint Central can use integrated z/OS security services so that users can be authorized to perform only certain tasks, or to perform tasks only on designated devices.
IP PrintWay extended mode: Infoprint Central is backed by a new architecture in the component that delivers print or e-mail output to printers, servers or users over TCP/IP or Internet Printing Protocol (IPP). IP PrintWay extended mode uses the SYSOUT Application Programming Interface (SAPI) to access print jobs and job information from the JES spool. The advantages of this change can be higher availability and throughput, more flexibility for handling print-related tasks, and scalability of Infoprint Server for very large distributed print environments.
Common message log: A new common message log helps
to improve productivity of help desk operators for print
problem diagnosis and resolution, thus helping to increase
system availability and user satisfaction. Messages can
easily be accessed from Infoprint Central for a particular
job or printer.
These capabilities give you the flexibility to deliver output on demand, anywhere you need it:
• Legacy CICS and IMS applications that generate SNA Character String (SCS) or 3270 output formats can print to LAN-attached PCL printers, without changes to the application program.
• Output can be sent as e-mail instead of, or in addition to, print.
• A consolidated printer inventory lets you define all printers used with Infoprint Server, and printers driven by Print Services Facility (PSF), in one place. Printers can be defined and modified from a single easy-to-use interface.
• IP PrintWay provides support for printers attached to the network using TCP/IP, for VTAM-controlled coax printers, and for printers and servers over the Internet using the industry-standard Internet Printing Protocol (IPP). Easy-to-use ISPF menus also enable management of distributed printers.
• The Print Interface supports print submission from applications running in UNIX System Services (USS), from Windows users via native Windows SMB, from applications on other servers, and over the Internet using IPP.
• Data stream transforms let you print AFP™ applications on printers using PCL, PostScript or PDF. You can also print PCL, PostScript and PDF output on AFP printers.
• A transform from SAP to AFP and a certified SAP Output Management System let you print SAP application output on your fast, reliable AFP printers, and receive print completion notification back at the SAP Application Server.
Benefits of consolidating your enterprise printing onto z/OS using Infoprint Server can include:
• Reduced total cost of ownership for distributed print operations
• Improved productivity with simplified print operations and management
• Investment protection and leverage for your AFP applications and printers
• Faster deployment of on demand initiatives with flexible output delivery options
Softcopy Publications Support
Library Server converts BookManager documents to HTML
for display through a Web browser.
Library Center
IBM is providing an alternative way to navigate our z/OS library on the Internet. Beginning with z/OS 1.5, the Library Center for z/OS provides a Microsoft Windows Explorer-like view of the contents of the entire z/OS and Software Products DVD Collection. The Library Center uses the new IBM Library Server with new advanced search functions to help users find information "on demand."
The Library Center offers easier navigation and new
advanced search features:
• An IBM Redbooks™ bookshelf lets the user perform a BookManager search and locate a corresponding Redbook in PDF format. The search scope pull-down lets the user launch searches in other repositories such as the WebSphere Application Server for z/OS or Google.
• The Library Center also provides a handheld mode to support both connected and disconnected handhelds.
z/OS Version 1 Release 6 base elements
BookManager® READ V3
Library Server
GDDM
Library Center
z/OS Version 1 Release 6 optional priced features
BookManager Build
BookManager READ is used to display, search, and manage online documents and bookshelves. BookManager Build is an optional feature that allows the creation of softcopy documents that can be used by any of the BookManager products.
Integrated Testing
z/OS is system-integration tested using a production-like environment. The z/OS environment includes subsystems such as CICS, IMS, DB2 and WebSphere. This additional testing supplements existing functional tests, with a focus on tasks performed by customers in the production environment, thus helping establishments move more quickly to new functions.
Publications
For a list of the publications available for z/OS, visit the z/OS library Web site at: ibm.com/servers/eserver/zseries/zos/bkserv.
Installation Considerations
CustomPac is a suite of services designed to help you efficiently install, migrate and maintain a z/OS system. It can also help with migrating and maintaining z/OS system-related products and/or third-party software vendor products. Options include:
• RefreshPac®, which includes preventative software services
• ProductPac® for custom-built products
• SystemPac® for installation or system replacement
Highlights
z/OS 1.4 and 1.5 are supported on the following IBM servers:
z/VM V4.4 extends its virtualization technology in support of
Linux and other guests while providing some enhancements
that enable z/VM to be self-optimized and self-managed:
• Reducing contention for the z/VM Control Program (CP)
scheduler lock may help increase the number of Linux
and other guest virtual machines that can be managed
concurrently.
• Enhancing the Virtual Machine Resource Manager (VMRM) to provide the infrastructure necessary to support more extensive workload and systems resource management features by providing:
– monitor data showing actual workload achievement
– an interface to dynamically change users in workloads, workload characteristics, and goals
– more flexibility using the VMRM configuration file when managing multiple users
– improvements in the reliability and performance of the VMRM service virtual machine's monitor data handling
– serviceability enhancements including improved messages, logfile entries, and new server options
• Simulating virtual FICON CTCA devices for guest operating systems enhances previous virtual-CTCA support by adding the FICON protocol as an option for guest operating systems. Guests use virtual CTCAs to communicate among themselves within a single z/VM system image, without the need for real FICON CTCAs.
• Supporting real and virtual integrated 3270 console devices. Real support enables this device, provided by the Hardware Management Console (HMC), to be used as the system operator console. Virtual support enables testing of guest operating systems and utilities, such as the Stand-Alone Program Loader (SAPL) and standalone DASD Dump-Restore (DDR), that support the integrated 3270 console device.
• Delivering the Performance Toolkit for VM™ to process Linux performance data obtained from the Resource Management Facility (RMF) Performance Monitoring (PM) client application, rmfpms. Linux performance data obtained from RMF is presented on display screens and in printed reports similar to the way VM data is viewed and presented.
With corresponding function available in Linux on zSeries and S/390, z/VM 4.4 provides:
• The attachment of Small Computer System Interface (SCSI) devices to guest Linux images using Fibre Channel Protocol (FCP) channels on zSeries processors
• IPL from FCP-attached disks for Linux and other guest operating systems with necessary SCSI support, when z/VM is running on a z990, z890, z900, or z800 server equipped with the SCSI IPL Feature Enabler
• Enhanced page-fault handling
• Clear-key RSA functions of the IBM PCI Cryptographic Coprocessor (PCICC) or the IBM PCI Cryptographic Accelerator (PCICA)
z/OS.e, OS/390, TPF, VSE/ESA, z/VM 3.1, and VM/ESA are not supported, nor can they operate, on IFL processor features. Only Linux workloads in an LPAR or Linux guests of z/VM V4 can operate on the IFL processor feature.
Exploiting New Technology
z/VM provides a highly flexible test and production environment for enterprises deploying the latest e-business solutions. Enterprises that require multi-system server solutions will find that z/VM helps them meet the demands of their businesses and IT infrastructures with a broad range of support for such operating system environments as z/OS, z/OS.e, OS/390, TPF, VSE/ESA, CMS, and Linux on zSeries and S/390. The ability to support multiple machine images and architectures enables z/VM to run multiple production and test versions of zSeries and S/390 operating systems, all on the same system. z/VM can help simplify migration from one release to another, facilitate the transition to newer applications, provide a test system whenever one is needed, and consolidate several systems. z/VM can also provide virtual access to the latest DASD and processor architecture for systems that lack such support. New technological enhancements in z/VM 4.4 provide:
• Exploitation of the zSeries 890 and 990 servers
– Extended dynamic I/O configuration support allows channel paths, control units, and devices to be dynamically added, changed, and deleted in a Logical Channel SubSystem (LCSS) environment
– Support for extended I/O measurement facilities provides improved capacity planning and I/O performance measurement
– Handling of I/O configuration definition and dynamic I/O configuration in an environment of up to 30 LPARs, an increase from the previous limit of 15
• Support for the zSeries capability to cascade two FICON directors within a Fibre Channel fabric. z/VM and its guests can take advantage of this enhanced and simplified connectivity, which is particularly useful in disaster-recovery and business-continuity situations.
• Support for the IBM TotalStorage Enterprise Storage Server (ESS) FlashCopy V2, providing increased flexibility for improved capacity management and utilization
• Support for the IBM ESS Peer-to-Peer Remote Copy Extended Distance (PPRC-XD) function, extending the distance well beyond the 103 km supported with PPRC synchronous mode. PPRC-XD is suitable for data migration, backup, and disaster recovery procedures. PPRC Version 2 (V2) is also supported for guest operating systems, offering an asynchronous cascading solution providing a complete, consistent, and coherent copy of data at a remote site.
• Support for the IBM TotalStorage Enterprise Tape Controller 3592 Model J70 and Tape Drive 3592 Model J1A
Systems Management
Improvements in systems management, some of which help to provide self-configuring, self-managing, and self-optimizing facilities in z/VM V4.4, include:
• Functions that may be called by client applications to allocate and manage resources for guests running in z/VM virtual machines (virtual images). Use of the application programming interfaces (APIs) through an application provided by a customer or solution provider is designed so that such applications can allow administrators who lack in-depth VM knowledge to manage a large number of virtual images running in a single z/VM system.
• Hardware Configuration Manager (HCM) and Hardware Configuration Definition (HCD) components to create and manage your I/O configuration. This new support provides a comprehensive, easy-to-use I/O configuration management environment similar to that available with the z/OS operating system.
• Performance Toolkit for VM, which provides enhanced capabilities for a z/VM systems programmer, operator, or performance analyst to monitor and report performance data. The toolkit is an optional, per-engine-priced feature derived from the FCON/ESA program (5788-LGA), providing:
– full-screen mode system console operation and management of multiple z/VM systems
– post-processing of Performance Toolkit for VM history files and of VM monitor data captured by the MONWRITE utility
– viewing of performance monitor data using either Web browsers or PC-based 3270 emulator graphics
The toolkit also provides the capability to monitor TCP/IP
for z/VM, as well as to process Linux performance data.
Application Enablement
CMS will host the new C/C++ for z/VM compiler (5654-A22). This environment allows C/C++ programs to be compiled and executed on CMS and creates portability between z/VM and z/OS C/C++ programs. C/C++ source files can be read from a CMS minidisk, the SFS, or the Byte File System (BFS), and output can be written to any of these file systems. C/C++ will execute only on z/VM V4.4 and can be licensed to operate only on standard processor engines. To support the C/C++ for z/VM compiler, the Language Environment has been updated to the level shipped with z/OS V1.4 and is integrated into the base of z/VM V4.4.
Networking with z/VM
TCP/IP for z/VM delivers expanded Internet/intranet access, improved e-business performance and extended function. Performance of the TCP/IP stack was enhanced by redesigning algorithms to reduce path lengths, recoding procedures to optimize high-use paths, identifying and implementing performance improvement items, and adding virtual multiprocessing capabilities.
TCP/IP is designed to support the z/Architecture HiperSockets function for high-speed communication among virtual machines and logical partitions within the same zSeries server. The HiperSockets function allows virtual machines and logical partitions to communicate internally over the memory bus using the internal-queued-direct (IQD) channel type in the z990, z890, z900, and z800. TCP/IP broadcast support is now available for the HiperSockets environment when utilizing Internet Protocol version 4 (IPv4) with z/VM V4.4. Applications that use the broadcast function can now propagate frames to all TCP/IP applications.
The z890 and z990 servers include an important performance enhancement that virtualizes adapter interruptions and can be used with V=V guests (pageable guests) on z/VM V4.4. With the enhancement of the TCP/IP stack in z/VM V4.4 to use adapter interruptions for OSA-Express, TCP/IP for VM can benefit from this performance assist for both HiperSockets and OSA-Express adapters.
z/VM V4.4 exploits Virtual Local Area Network (VLAN) technology. VLANs ease the administration of logical groups of users so that they can communicate as if they were on the same physical LAN. VLANs help increase traffic flow and may help reduce overhead by allowing the organization of networks by traffic patterns rather than by physical location. To support VLANs, z/VM V4.4 provides:
• Enhancements to TCP/IP for z/VM to enable membership in a VLAN for QDIO and HiperSockets adapters
• Enhancements to z/VM guest-LAN simulation to allow virtual QDIO and HiperSockets adapters to participate in a VLAN
• Management and control of VLAN topology by the z/VM virtual switch
The guest LAN support provided in z/VM V4.2 simulates the HiperSockets function for communication among virtual machines without the need for real IQD channels, much as VM simulates channel-to-channel adapters for communication among virtual machines without the need for ESCON, FICON, or other real channel-to-channel connections. With the guest LAN capability, customers with S/390 servers can gain the benefits of HiperSockets communication among the virtual machines within a VM image, since no real IQD channels are required.
z/VM V4.4 further enhances its virtualization technology by providing the capability to deploy virtual IP switches in the guest LAN environment. The z/VM virtual switch replaces the need for virtual machines acting as routers to provide IPv4 connectivity to a physical LAN through an OSA-Express adapter. Such router virtual machines consume processor cycles and require additional copying of data being transported. The virtual-switch function alleviates this problem and also provides centralized network configuration and control. These controls allow the LAN administrator to more easily grant and revoke access to the network and to manage the configuration of VLAN segments.
TCP/IP for z/VM provides numerous self-protection func-
tions. A Secure Sockets Layer (SSL) server is available to
facilitate secure and private conversations between z/VM
servers and external clients. The upgraded SSL server in
z/VM V4.4 provides appropriate RPM format packages
for the SUSE LINUX Enterprise Server 7 (SLES 7) at the
2.4.7 kernel level, SUSE LINUX Enterprise Server 8 (SLES
8) powered by UnitedLinux at the 2.4.19 kernel level,
and Turbolinux Enterprise Server 8 (TLES 8) powered by
UnitedLinux at the 2.4.19 kernel level. Security of the
TCP/IP stack has been improved to help prevent additional
types of Denial of Service (DoS) attacks including: Smurf,
Fraggle, Ping-o-Death, Kiss of Death (KOD), KOX, Blat,
SynFlood, Stream, and R4P3D. The overall security and
auditability of the TCP/IP for z/VM stack and the integrity of
the z/VM system have been improved by providing better
controls, monitoring, and defaults. An IMAP user authenti-
cation exit has been added that removes prior user ID and
password length restrictions and eliminates the need for
every IMAP client to have a VM user ID and password.
TCP/IP for z/VM, formerly a priced, optional feature of
VM/ESA and z/VM V3, is packaged at no additional charge
and shipped enabled for use with z/VM V4 and V5. The
former priced, optional features of TCP/IP — the Network
File System (NFS) server and TCP/IP source — are also
packaged with TCP/IP for z/VM at no additional charge.
In addition to the new function provided by the Performance
Toolkit for VM, RealTime Monitor (RTM) and Performance
Reporting Facility (PRF) are still available in z/VM V4.4 to
support new and changed monitor records in z/VM. RTM
simplifies performance analysis and the installation
management of VM environments. PRF uses system monitor
data to analyze system performance and to detect and
diagnose performance problems. RACF for z/VM is available
as a priced, optional feature of z/VM V4 and provides
improved data security for an installation. RTM, PRF, and
the Performance Toolkit are also priced, optional features of
z/VM V4, as is the Directory Maintenance Facility (DirMaint™).
z/VM Version 5 (V5)
z/VM Version 5 Release 1 (V5.1) continues the evolution of
IBM's premier zSeries virtualization technology, offering
traditional capabilities to manage operating systems,
including Linux, on a single mainframe as guests of z/VM.
z/VM V5.1 is designed to operate only on zSeries servers
that support the z/Architecture (64-bit), including the z990,
z890, z900, and z800 or equivalent.
zSeries Engine-based Value Unit Pricing
z/VM V5 introduces engine-based Value Unit pricing, which
replaces the per-engine pricing model available with z/VM V4
and provides a lower entry price.
Engine-based Value Unit pricing is designed to provide a
decreasing price curve which may help provide improved
price/performance as hardware capacities and workload
grow. Value Unit pricing for z/VM V5 can provide for a
lower price per processor engine as more processor
engines are licensed with z/VM V5.1 across the enterprise.
Value Unit pricing helps you to:
• Add capacity and workload with an incremental and
improved price
• Manage software costs better
• Aggregate licenses acquired across machines that are
part of your enterprise.
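A decreasing per-engine price curve of the kind described above can be illustrated with a small worked example. The tier boundaries and value-unit rates below are invented for the illustration; they are not IBM's actual price bands.

```python
# Illustrative sketch only: a tiered, decreasing per-engine price curve.
# Tier widths and rates are hypothetical, not IBM's actual Value Unit tables.

TIERS = [(3, 10.0), (7, 9.0), (15, 7.5), (float("inf"), 6.0)]  # (engines in tier, units per engine)

def value_units(engines):
    """Total value units for a number of licensed engines, filled tier by tier."""
    total, remaining = 0.0, engines
    for width, rate in TIERS:
        take = min(remaining, width)
        total += take * rate
        remaining -= take
        if remaining == 0:
            break
    return total

for n in (2, 10, 25):
    # The average cost per engine falls as more engines are licensed.
    print(n, value_units(n), value_units(n) / n)
```

With these made-up tiers, 2 engines average 10.0 units each while 25 engines average about 8.2, which is the improved price/performance effect the pricing model is designed to provide as capacity grows.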
Engine-based Value Unit pricing of z/VM V5 should not be
tied to, or associated with, MSU-based Value Unit pricing.
Enhancements in z/VM V5.1 include:
Virtualization Technology and Linux Enablement
• Support for SCSI FCP disks enables the deployment of a
Linux server farm on z/VM using only SCSI disks. SCSI
disks can be used as such by guests through dedicated
FCP subchannels, and are also supported as emulated
9336 Fixed-Block Architecture (FBA) devices for use by
guests, CMS, and CP. With this support, you can install,
IPL, and operate z/VM from SCSI disks.
• z/VM V5.1 includes the capability to install z/VM from
a DVD both to an ESS SCSI disk emulated as an FBA
device and to a 3390 DASD. Installing from a DVD can
significantly reduce the required installation media and
allows you to install to a zSeries server using only SCSI
disks. This is expected to be most beneficial in a z/VM
environment with Linux guests and without traditional
installation devices such as IBM TotalStorage tape
drives attached to the IBM zSeries server.
• Coordinated near-continuous availability and disaster
recovery for Linux guests is enabled by a new HyperSwap
function, which allows the virtual devices associated with
one real disk to be swapped transparently to another.
HyperSwap can be used to switch to secondary disk
storage subsystems mirrored by Peer-to-Peer Remote
Copy (PPRC). HyperSwap is planned to be exploited
by Geographically Dispersed Parallel Sysplex (GDPS)
3.1 to provide a coordinated near-continuous availability
and disaster recovery solution for distributed applications,
such as WebSphere, that span z/OS images running
natively and Linux guests running under z/VM.
• PCIX Cryptographic Coprocessor (PCIXCC) support
provides z/OS and Linux guest support for the PCIXCC
feature available with the z990 and z890 servers. Delivery
of the z/VM PCIXCC support satisfies the Statement
of Direction made on May 13, 2003.
• The Systems Management APIs, introduced in z/VM
V4.4, provided a basic set of functions that may be
called by applications to allocate and manage resources
for guests running in z/VM virtual machines (virtual
images). Although these APIs are primarily intended
for managing Linux virtual images, they can be used
to manage many types of z/VM virtual machines. All
enhancements to the APIs in z/VM V5.1 have been
implemented using Version 2 (V2) of the RPC server. In
addition to usability enhancements, new functions include:
– DASD volume management for virtual images
– VMRM configuration file management
– Query status of active images
– Query VMRM measurement data
– Removal of user ID entries in an authorization file with
a single request
– Query all shared storage segments instead of one at a
time
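The request/dispatch pattern behind such a management API can be sketched as follows. This is a hypothetical illustration in Python: the function names and image names are invented, and it models only the request/response shape of a management server, not the actual z/VM RPC interface or its wire protocol.

```python
# Hypothetical illustration (all names invented): the dispatch pattern of a
# management server that queries and manages virtual images on behalf of
# client applications. Not the real z/VM Systems Management API.

def handle_request(state, function, **args):
    """Dispatch one management request against an in-memory model of images."""
    if function == "query_active_images":
        return sorted(name for name, info in state.items() if info["active"])
    if function == "activate_image":
        state[args["name"]]["active"] = True
        return "ok"
    if function == "deactivate_image":
        state[args["name"]]["active"] = False
        return "ok"
    raise ValueError("unknown function: " + function)

images = {"LINUX01": {"active": False}, "LINUX02": {"active": True}}
handle_request(images, "activate_image", name="LINUX01")
print(handle_request(images, "query_active_images"))  # ['LINUX01', 'LINUX02']
```

In the real API the caller is a remote application and the "state" is the z/VM system itself; the sketch only shows why a fixed, callable function set is convenient for provisioning tools.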
• A new programming service is provided by an emulated
DIAGNOSE instruction that helps enable a guest virtual
machine to specify an action to be taken by CP when
the guest becomes unresponsive. A time interval and
action are specified by the guest. If the guest fails to
reissue the DIAGNOSE instruction within the specified
time interval, CP performs the action.
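This is a classic watchdog protocol, and its logic can be sketched in a few lines. The Python below is a conceptual model, not z/VM code: time is simulated with plain numbers rather than a real clock, and the action string is an invented example.

```python
# Conceptual sketch of the watchdog protocol described above: the guest arms
# a timer with an interval and an action; unless it re-arms before the
# deadline, the hypervisor performs the action. Timestamps are simulated.

class Watchdog:
    def __init__(self):
        self.deadline = None
        self.action = None

    def arm(self, now, interval, action):
        """Guest (re)issues the request: set a new deadline and action."""
        self.deadline = now + interval
        self.action = action

    def check(self, now):
        """Hypervisor-side check: return the action if the guest missed its deadline."""
        if self.deadline is not None and now > self.deadline:
            return self.action  # e.g. re-IPL the unresponsive guest
        return None

wd = Watchdog()
wd.arm(now=0, interval=30, action="restart")
assert wd.check(now=20) is None                 # guest still within its interval
wd.arm(now=20, interval=30, action="restart")   # guest re-arms in time
assert wd.check(now=45) is None                 # new deadline is 50
print(wd.check(now=60))                         # deadline missed -> "restart"
```

The value of the design is that a hung guest need do nothing for recovery to begin; only a healthy guest can keep postponing the action.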
• A new publication, Getting Started with Linux on zSeries,
describes z/VM basics and how to configure and use
z/VM functions and facilities to create and manage Linux
servers running on zSeries processors. The publication
is designed to help systems personnel (system programmers, administrators, and operators) with limited
knowledge of z/VM deploy Linux servers on z/VM more
quickly and more easily.
Network Virtualization and Security
• The virtual IP switch, introduced in z/VM V4.4, was
designed to improve connectivity to a physical LAN for
hosts coupled to a guest LAN. The virtual switch has
been enhanced with failover support for less disruptive
recovery from some common network failures, helping to
provide business continuity as well as infrastructure
reliability and availability.
• Authorization capabilities have been enhanced for z/VM
guest LANs and virtual switches by using Resource
Access Control Facility (RACF) or any equivalent External Security Manager (ESM) that supports this function.
This support is designed to provide centralized ESM
control of authorizations and Virtual LAN (VLAN) assignment.
Technology Exploitation
• z/VM V5.1 supports the new z890 as well as the new
enhancements to the z990 including:
– Four Logical Channel SubSystems (LCSSs) on the
z990 and two on the z890
– Transparent sharing of internal and external channel
types across LCSSs such as ICB-3, ICB-4, ISC-3,
FICON Express, and OSA-Express
– Open Systems Adapter-Express Integrated Console
Controller (OSA-ICC) function
• Support for up to 24 real processor engines in a single
z/VM image on a z990 satisfies the Statement of Direction
made on May 13, 2003.
• IPv6 support for guest LANs has been enhanced to
allow the z/VM TCP/IP stack to be configured for IPv6
networks connected through OSA-Express operating
in QDIO mode. The stack can be configured to provide
static routing of IPv6 packets and to send IPv6 Router
Advertisements. In addition, support is provided to help
application developers develop socket applications for
IPv6 communications.
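A minimal IPv6 socket application of the kind such stack support enables looks like the following. This uses ordinary, standard Python sockets rather than anything z/VM-specific; it simply binds a UDP socket on the IPv6 loopback address and sends itself a datagram.

```python
# Minimal standard IPv6 socket example (plain Python, not z/VM-specific):
# bind a UDP socket on the IPv6 loopback address and send a datagram to it.
import socket

receiver = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
receiver.bind(("::1", 0))                # IPv6 loopback, ephemeral port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sender.sendto(b"hello over IPv6", ("::1", port))

data, addr = receiver.recvfrom(1024)
print(data)
sender.close()
receiver.close()
```

The only IPv6-specific details are the AF_INET6 address family and the colon-hex address form; the rest of the socket API is unchanged, which is what makes porting IPv4 socket applications straightforward.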
Systems Management Improvements
The Performance Toolkit for VM has been enhanced in
z/VM V5.1 to provide functional equivalence to the
Performance Reporting Facility (PRF), thereby virtually
eliminating the need for separate products
(PRF and RealTime Monitor (RTM)) to help manage your
performance more efficiently. Other new function includes:
• New high-level Linux reports based on Application
Monitor records from Linux
• A new report for SCSI disks
Delivery of equivalent function to PRF in the Performance
Toolkit for VM satisfies the Statement of Direction made on
May 13, 2003 to remove the RTM and PRF features in a future
release of z/VM. The RTM and PRF features have been
withdrawn from z/VM V5.1. These features are still available
with z/VM V4.4 but cannot be licensed with z/VM V5.1.
For further information, see the z/VM Reference Guide,
GM13-0137.
To learn more
Visit the zSeries World Wide Web site at ibm.com/eserver/
zseries or call IBM DIRECT at 1 800 IBM-CALL in the U.S.
and Canada.
Australia 132 426
Austria 0660.5109
Belgium 02-225.33.33
Brazil 0800-111426
China (20) 8755 3828
France 0800-03-03-03
Germany 01803-313233
Hong Kong (20) 2825 6222
Hungary 165-4422
India (80) 526 9050
Indonesia (21) 252 1222
Ireland 1-850-205-205
Israel 03-6978111
Italy 167-017001
Japan 0120 300 426
Korea (02) 781 7800
Malaysia (03) 717 7890
Mexico 91-800-00316
Netherlands 020-513.5151
New Zealand 0800-801-800
Philippines (02) 819 2426
Poland (022) 878-6777
Singapore 1800 320 1975
South Africa 0800-130130
Spain 900-100400
Sweden 020-220222
Switzerland 0800 55 12 25
Taiwan 0800 016 888
Thailand (02) 273 4444
Vietnam Hanoi (04) 843 6675
Vietnam HCM (08) 829 8342
United Kingdom 0990-390390
Copyright IBM Corporation 2004
Integrated Marketing Communications, Server Group
Route 100
Somers, NY 10589
U.S.A.
Produced in the United States of America
08/04
All Rights Reserved
References in this publication to IBM products or services do not imply that
IBM intends to make them available in every country in which IBM operates.
Consult your local IBM business contact for information on the products,
features, and services available in your area.
IBM, IBM eServer, IBM ^, the IBM logo, the e-business logo, AFP,
APPN, BookManager, CICS, DB2, DB2 Connect, DB2 Universal Database,
DFSMSdfp, DFSMSdss, DFSMShsm, DFSMSrmm, DFSMS/MVS,
DFSORT, DirMaint, e-business on demand, ECKD, Enterprise Storage
Server, ESCON, FICON, FICON Express, FlashCopy, FFST, GDDM, GDPS,
Geographically Dispersed Parallel Sysplex, HiperSockets, Hiperspace,
HyperSwap, IMS, InfoPrint, Intelligent Miner, IP PrintWay, Language
Environment, MQSeries, Multiprise, MVS, Net.Data, Netfinity,
NetSpool, NetView, Open Class, OS/390, Parallel Sysplex,
Performance Toolkit for VM, PR/SM, Processor
Resource/Systems Manager, ProductPac, pSeries, RACF,
RAMAC, RefreshPac, Resource Link, RMF, RS/6000, S/390,
S/390 Parallel Enterprise Server, SecureWay, Sysplex Timer,
SystemPac, Tivoli, Tivoli Storage Manager, TotalStorage,
VM/ESA, VSE/ESA, VTAM, WebSphere, xSeries,
z/Architecture, z/OS, z/VM, and zSeries are trademarks or
registered trademarks of the International Business Machines
Corporation in the United
States and other countries.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States or other
countries.
UNIX is a registered trademark of The Open Group in the United States and
other countries.
Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation in the United States, other countries, or both.
Intel is a trademark of the Intel Corporation in the United States and other
countries.
Other trademarks and registered trademarks are the properties of their
respective companies.
IBM hardware products are manufactured from new parts, or new and used
parts. Regardless, our warranty terms apply.
Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled
environment. The actual throughput that any user will experience will vary
depending upon considerations such as the amount of multiprogramming
in the user’s job stream, the I/O configuration, the storage configuration,
and the workload processed. Therefore, no assurance can be given that
an individual user will achieve throughput improvements equivalent to the
performance ratios stated here.
Photographs shown are engineering prototypes. Changes may be incorporated in production models.
This equipment is subject to all applicable FCC rules and will comply with
them upon delivery.
Information concerning non-IBM products was obtained from the suppliers of those products. Questions concerning those products should be
directed to those suppliers.
All customer examples described are presented as illustrations of how
these customers have used IBM products and the results they may have
achieved. Actual environmental costs and performance characteristics may
vary by customer.
All statements regarding IBM’s future direction and intent are subject to
change or withdrawal without notice, and represent goals and objectives only.
Prices subject to change without notice. Contact your IBM representative
or Business Partner for the most current pricing in your geography.
GM13-0229-03