z/VM V5.4 is designed to support:
• Increased flexibility with support for new z/VM-mode logical partitions
• Dynamic addition of memory to an active z/VM LPAR by exploiting System z dynamic storage-reconfiguration capabilities
• Enhanced physical connectivity by exploiting all OSA-Express3 ports
• Capability to install Linux on System z from the HMC without requiring an external network connection
• Enhancements for scalability and constraint relief
• Operation of the SSL server in a CMS environment
• Systems management enhancements for Linux and other virtual images
For the most current information on z/VM, refer to the z/VM Web site at http://www.vm.ibm.com.
z/VSE
z/VSE 4.1, the latest advance in the ongoing evolution of VSE, is designed to help address the needs of VSE clients with growing core VSE workloads and/or those who wish to exploit Linux on System z for new, Web-based business solutions and infrastructure simplification.
IBM has previewed z/VSE 4.2. When available, z/VSE 4.2 is designed to help address the needs of VSE clients with growing core VSE workloads. z/VSE V4.2 is designed to support:
• More than 255 VSE tasks to help clients grow their CICS workloads and to ease migration from CICS/VSE to CICS Transaction Server for VSE/ESA™
• Up to 32 GB of processor storage
• Sub-Capacity Reporting Tool running "natively"
• Encryption Facility for z/VSE as an optional priced feature
• IBM System Storage TS3400 Tape Library (via the TS1120 Controller)
• IBM System Storage TS7740 Virtualization Engine Release 1.3
z/VSE V4.2 plans to continue the focus on hybrid solutions
exploiting z/VSE and Linux on System z, service-oriented
architecture (SOA), and security. It is the preferred replace-
ment for z/VSE V4.1, z/VSE V3, or VSE/ESA. It is designed
to protect and leverage existing VSE information assets.
z/TPF
z/TPF is a 64-bit operating system that allows you to move legacy applications into an open development environment, leveraging large scale memory spaces for increased speed, diagnostics, and functionality. The open development environment allows access to commodity skills and enhanced access to open code libraries, both of which can be used to lower development costs. Large memory spaces can be used to increase both system and application efficiency as I/Os or memory management can be eliminated.
z/TPF is designed to support:
• 64-bit mode
• Linux development environment (GCC and HLASM for
Linux)
• 32 processors/cluster
• Up to 84* engines/processor
• 40,000 modules
• Workload License Charge
Linux on System z
The System z10 BC supports the following Linux on
System z distributions (most recent service levels):
• Novell SUSE SLES 9
• Novell SUSE SLES 10
• Red Hat RHEL 4
• Red Hat RHEL 5
Operating System                                      ESA/390 (31-bit)   z/Architecture (64-bit)
z/OS V1R8, 9 and 10                                   No                 Yes
z/OS V1R7 (1)(2) with IBM Lifecycle
  Extension for z/OS V1.7                             No                 Yes
Linux on System z (2): Red Hat RHEL 4
  and Novell SUSE SLES 9                              Yes                Yes
Linux on System z (2): Red Hat RHEL 5
  and Novell SUSE SLES 10                             No                 Yes
z/VM V5R2 (3), 3 and 4                                No*                Yes
z/VSE V3R1 (2)(4)                                     Yes                No
z/VSE V4R1 (2)(5) and 2 (5)                           No                 Yes
z/TPF V1R1                                            No                 Yes
TPF V4R1 (ESA mode only)                              Yes                No
1. z/OS V1.7 support on the z10 BC requires the Lifecycle Extension for z/OS V1.7, 5637-A01. The Lifecycle Extension for z/OS R1.7 + zIIP Web Deliverable is required for z10 to enable HiperDispatch on z10 (does not require a zIIP). z/OS V1.7 support was withdrawn September 30, 2008. The Lifecycle Extension for z/OS V1.7 (5637-A01) makes fee-based corrective service for z/OS V1.7 available through September 2009. With this Lifecycle Extension, z/OS V1.7 supports the z10 BC server. Certain functions and features of the z10 BC server require later releases of z/OS. For a complete list of software support, see the PSP buckets and the Software Requirements section of the z10 BC announcement letter, dated October 21, 2008.
2. Compatibility Support for listed releases. Compatibility support allows the OS to IPL and operate on the z10 BC.
3. Requires Compatibility Support, which allows z/VM to IPL and operate on the System z10, providing IBM System z9 functionality for the base OS and guests. *z/VM supports 31-bit and 64-bit guests.
4. z/VSE V3 is 31-bit mode only. It does not implement z/Architecture, and specifically does not implement 64-bit mode capabilities. z/VSE is designed to exploit select features of System z10, System z9, and IBM eServer™ zSeries® hardware.
5. z/VSE V4 is designed to exploit 64-bit real memory addressing, but will not support 64-bit virtual memory addressing.
Note: Refer to the z/OS, z/VM, z/VSE subsets of the 2098DEVICE Preventive Service Planning (PSP) bucket prior to installing a z10 BC.
z10 BC
The IBM System z10 Business Class (z10 BC) delivers innovative technologies for small and medium enterprises that give you a whole new world of capabilities to run modern applications. Ideally suited to a Dynamic Infrastructure, this competitively priced server delivers unparalleled qualities of service to help manage growth and reduce cost and risk in your business.
The z10 BC further extends the leadership of System z by delivering expanded granularity and optimized scalability for growth, enriched virtualization technology for consolidation of distributed workloads, improved availability and security to help increase business resiliency, and just-in-time management of resources. The z10 BC is at the core of the enhanced System z platform and is the new face of System z.
The z10 BC has the machine type of 2098, with one model (E10) offering from one to ten configurable Processor Units (PUs). This model design offers increased flexibility over the two-model IBM System z9 Business Class (z9® BC) by delivering seamless growth within a single model, both temporary and permanent.
The z10 BC delivers improvements in both the granular increments and total scalability compared to previous System z midrange servers, achieved by both increasing the performance of the individual PU and increasing the number of PUs per server. The z10 BC Model E10 is designed to provide up to 1.5 times the total system capacity for general purpose processing, and over 40% more configurable processors than the z9 BC Model S07.
The z10 BC advances the innovation of the System z10 platform and brings value to a wider audience. It is built using a redesigned air-cooled drawer package which replaces the prior "book" concept in order to reduce cost and increase flexibility. A redesigned I/O drawer offers higher availability and can be concurrently added or replaced when at least two drawers are installed. Reduced capacity and priced I/O features will continue to be offered on the z10 BC to help lower your total cost of acquisition.
The quad core design z10 processor chip delivers higher frequency and will be introduced at 3.5 GHz, which can help improve the execution of CPU intensive workloads on the z10 BC. These design approaches facilitate the high availability, dynamic capabilities and lower cost that differentiate the z10 BC from other servers.
The z10 BC supports from 4 GB up to 248 GB of real customer memory. This is almost four times the maximum memory available on the z9 BC. The increased available memory on the server can help to benefit workloads that perform better with larger memory configurations, such as DB2, WebSphere and Linux. In addition to the customer purchased memory, an additional 8 GB of memory is included for the Hardware System Area (HSA). The HSA holds the I/O configuration data for the server and is entirely fenced from customer memory.
High speed connectivity and high bandwidth out to the data and the network are critical in achieving high levels of transaction throughput and enabling resources inside and outside the server to maximize application requirements. The z10 BC has a host bus interface with a link data rate of 6 GBps using the industry standard InfiniBand protocol to help satisfy requirements for coupling (ICF and server-to-server connectivity), cryptography (Crypto Express2 with secure coprocessors and SSL transactions), I/O (ESCON®, FICON®, or FCP), and LAN (OSA-Express3 Gigabit, 10 Gigabit and 1000BASE-T Ethernet features). High Performance FICON for System z (zHPF) also brings new levels of performance when accessing data on enabled storage devices such as the IBM System Storage DS8000™.
PUs defined as Internal Coupling Facilities (ICFs), Integrated Facility for Linux (IFLs), System z10 Application Assist Processors (zAAPs) and System z10 Integrated Information Processors (zIIPs) are no longer grouped together in one pool as on the IBM eServer™ zSeries® 890 (z890), but are grouped together in their own pool, where they can be managed separately. The separation significantly simplifies capacity planning and management for LPAR and can have an effect on weight management since CP weights and zAAP and zIIP weights can now be managed separately. Capacity BackUp (CBU) features are available for IFLs, ICFs, zAAPs and zIIPs.
LAN connectivity has been enhanced with the introduction of the third generation of Open Systems Adapter-Express (OSA-Express3). This new family of LAN adapters has been introduced to reduce latency and overhead, deliver double the port density of OSA-Express2, and provide increased throughput. The z10 BC continues to support the OSA-Express2 1000BASE-T and GbE Ethernet features, and supports IP version 6 (IPv6) on HiperSockets. While OSA-Express2 OSN (OSA for NCP) is still available on System z10 BC to support the Channel Data Link Control (CDLC) protocol, the OSA-Express3 features will also provide this function.
Additional channel and networking improvements include support for Layer 2 and Layer 3 traffic, FCP management facility for z/VM and Linux for System z, FCP security improvements, and Linux support for HiperSockets IPv6. STP enhancements include the additional support for NTP clients and STP over InfiniBand links.
Like the System z9 BC, the z10 BC offers a configurable Crypto Express2 feature, with PCI-X adapters that can be individually configured as a secure coprocessor or an accelerator for SSL, the TKE workstation with optional Smart Card Reader, and provides the following CP Assist for Cryptographic Function (CPACF):
• DES, TDES, AES-128, AES-192, AES-256
• SHA-1, SHA-224, SHA-256, SHA-384, SHA-512
• Pseudo Random Number Generation (PRNG)
z10 BC is designed to deliver the industry leading Reliability, Availability and Serviceability (RAS) customers expect from System z servers. RAS is designed to reduce all sources of outages by reducing unscheduled, scheduled and planned outages. Planned outages are further designed to be reduced by reducing preplanning requirements. z10 BC preplanning improvements are designed to avoid …
… (multimode fiber), and 1000BASE-T (copper) are designed for use in high-speed enterprise backbones, for local area network connectivity between campuses, to connect server farms to System z10, and to consolidate file servers onto System z10. With reduced latency, improved throughput, and up to 96 ports of LAN connectivity (when all are 4-port features, 24 features per server), you can "do more with less."
The key benefits of OSA-Express3 compared to OSA-Express2 are:
• Reduced latency (up to 45% reduction) and increased throughput (up to 4x) for applications
• More physical connectivity to service the network and fewer required resources:
– Fewer CHPIDs to define and manage
– Reduction in the number of required I/O slots
– Possible reduction in the number of I/O drawers
– Double the port density of OSA-Express2
– A solution to the requirement for more than 48 LAN ports (now up to 96 ports)
The OSA-Express3 features are exclusive to System z10.
OSA-Express2 availability
OSA-Express2 Gigabit Ethernet and 1000BASE-T Ethernet continue to be available for ordering, for a limited time, if you are not yet in a position to migrate to the latest release of the operating system for exploitation of two ports per PCI-E adapter and if you are not resource-constrained.
Historical summary: Functions that continue to be supported by OSA-Express3 and OSA-Express2:
• Queued Direct Input/Output (QDIO) – uses memory queues and a signaling protocol to directly exchange data between the OSA microprocessor and the network software for high-speed communication.
– QDIO Layer 2 (Link layer) – for IP (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA) workloads. Using this mode the Open Systems Adapter (OSA) is protocol-independent and Layer-3 independent. Packet forwarding decisions are based upon the Medium Access Control (MAC) address (see the sketch after this list).
– QDIO Layer 3 (Network or IP layer) – for IP workloads. Packet forwarding decisions are based upon the IP address. All guests share the OSA's MAC address.
• Jumbo frames in QDIO mode (8992 byte frame size) when operating at 1 Gbps (fiber or copper) and 10 Gbps (fiber)
• 640 TCP/IP stacks per CHPID – for hosting more images
• Large send for IPv4 packets – for TCP/IP traffic and CPU efficiency, offloading the TCP segmentation processing from the host TCP/IP stack to the OSA-Express feature
• Concurrent LIC update – to help minimize the disruption of network traffic during an update; when properly configured, designed to avoid a configuration off or on (applies to CHPID types OSD and OSN)
• Multiple Image Facility (MIF) and spanned channels – for sharing OSA among logical channel subsystems
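To make the Layer 2/Layer 3 distinction concrete, here is a minimal Python sketch of the two forwarding decisions. It is illustrative only – the frame fields, stack names, and addresses are invented, not an actual OSA or operating system interface:

```python
# Sketch: a QDIO Layer 2 interface forwards on the destination MAC
# (protocol-independent), while Layer 3 forwards on the destination IP.

def forward_layer2(frame, stacks_by_mac):
    # Layer 2: each guest has its own MAC, so the decision uses only the
    # Ethernet destination address; payload may be IPX, SNA, IPv4, IPv6...
    return stacks_by_mac.get(frame["dest_mac"])

def forward_layer3(packet, stacks_by_ip):
    # Layer 3: all guests share the OSA's MAC, so the decision must
    # inspect the destination IP address (IP workloads only).
    return stacks_by_ip.get(packet["dest_ip"])

# Two guest images sharing one OSA port (invented values):
stacks_by_mac = {"02:00:00:00:00:01": "LINUX1", "02:00:00:00:00:02": "ZVM1"}
stacks_by_ip = {"10.0.0.1": "LINUX1", "10.0.0.2": "ZVM1"}

print(forward_layer2({"dest_mac": "02:00:00:00:00:01"}, stacks_by_mac))  # LINUX1
print(forward_layer3({"dest_ip": "10.0.0.2"}, stacks_by_ip))             # ZVM1
```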
The OSA-Express3 and OSA-Express2 Ethernet features support the following CHPID types:

CHPID  OSA-Express3,   Purpose/Traffic
Type   OSA-Express2
       Features
OSC    1000BASE-T      OSA-Integrated Console Controller (OSA-ICC)
                       TN3270E, non-SNA DFT, IPL to CPC and LPARs
                       Operating system console operations
OSD    GbE             Queued Direct Input/Output (QDIO)
       10 GbE          TCP/IP traffic when Layer 3
       1000BASE-T      Protocol-independent when Layer 2
OSE    1000BASE-T      Non-QDIO, SNA/APPN®/HPR and/or TCP/IP
                       passthru (LCS)
OSN    1000BASE-T      OSA for NCP
       GbE             Supports channel data link control (CDLC)
OSA-Express3 10 GbE
OSA-Express3 10 Gigabit Ethernet LR
The OSA-Express3 10 Gigabit Ethernet (GbE) long reach (LR) feature has two ports. Each port resides on a PCIe adapter and has its own channel path identifier (CHPID). There are two PCIe adapters per feature. OSA-Express3 10 GbE LR is designed to support attachment to a 10 Gigabits per second (Gbps) Ethernet Local Area Network (LAN) or Ethernet switch capable of 10 Gbps. OSA-Express3 10 GbE LR supports CHPID type OSD exclusively. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.
OSA-Express3 10 Gigabit Ethernet SR
The OSA-Express3 10 Gigabit Ethernet (GbE) short reach (SR) feature has two ports. Each port resides on a PCIe adapter and has its own channel path identifier (CHPID). There are two PCIe adapters per feature. OSA-Express3 10 GbE SR is designed to support attachment to a 10 Gigabits per second (Gbps) Ethernet Local Area Network (LAN) or Ethernet switch capable of 10 Gbps. OSA-Express3 10 GbE SR supports CHPID type OSD exclusively. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.
OSA-Express3 Gigabit Ethernet LX
The OSA-Express3 Gigabit Ethernet (GbE) long wavelength (LX) feature has four ports. Two ports reside on a PCIe adapter and share a channel path identifier (CHPID). There are two PCIe adapters per feature. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3 GbE LX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.
OSA-Express3 Gigabit Ethernet SX
The OSA-Express3 Gigabit Ethernet (GbE) short wavelength (SX) feature has four ports. Two ports reside on a PCIe adapter and share a channel path identifier (CHPID). There are two PCIe adapters per feature. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3 GbE SX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.
OSA-Express3-2P Gigabit Ethernet SX
The OSA-Express3-2P Gigabit Ethernet (GbE) short wavelength (SX) feature has two ports which reside on a single PCIe adapter and share one channel path identifier (CHPID). Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3-2P GbE SX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.
Four-port exploitation on OSA-Express3 GbE SX and LX
For the operating system to recognize all four ports on an OSA-Express3 Gigabit Ethernet feature, a new release and/or PTF is required. If software updates are not applied, only two of the four ports will be "visible" to the operating system.
Activating all four ports on an OSA-Express3 feature provides you with more physical connectivity to service the network and reduces the number of required resources (I/O slots, I/O cages, fewer CHPIDs to define and manage).
Four-port exploitation is supported by z/OS, z/VM, z/VSE, z/TPF, and Linux on System z.
OSA-Express3 1000BASE-T Ethernet
The OSA-Express3 1000BASE-T Ethernet feature has four ports. Two ports reside on a PCIe adapter and share a channel path identifier (CHPID). There are two PCIe adapters per feature. Each port supports attachment to either a 10BASE-T (10 Mbps), 100BASE-TX (100 Mbps), or 1000BASE-T (1000 Mbps or 1 Gbps) Ethernet Local Area Network (LAN). The feature supports auto-negotiation and automatically adjusts to 10, 100, or 1000 Mbps, depending upon the LAN. When the feature is set to autonegotiate, the target device must also be set to autonegotiate. The feature supports the following settings: 10 Mbps half or full duplex, 100 Mbps half or full duplex, 1000 Mbps (1 Gbps) full duplex. OSA-Express3 1000BASE-T Ethernet supports CHPID types OSC, OSD, OSE, and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.
When configured at 1 Gbps, the 1000BASE-T Ethernet feature operates in full duplex mode only and supports jumbo frames when in QDIO mode (CHPID type OSD).
OSA-Express3-2P 1000BASE-T Ethernet
The OSA-Express3-2P 1000BASE-T Ethernet feature has two ports which reside on a single PCIe adapter and share one channel path identifier (CHPID). Each port supports attachment to either a 10BASE-T (10 Mbps), 100BASE-TX (100 Mbps), or 1000BASE-T (1000 Mbps or 1 Gbps) Ethernet Local Area Network (LAN). The feature supports auto-negotiation and automatically adjusts to 10, 100, or 1000 Mbps, depending upon the LAN. When the feature is set to autonegotiate, the target device must also be set to autonegotiate. The feature supports the following settings: 10 Mbps half or full duplex, 100 Mbps half or full duplex, 1000 Mbps (1 Gbps) full duplex. OSA-Express3 1000BASE-T Ethernet supports CHPID types OSC, OSD, OSE, and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs. Software updates are required to exploit both ports.
When configured at 1 Gbps, the 1000BASE-T Ethernet feature operates in full duplex mode only and supports jumbo frames when in QDIO mode (CHPID type OSD).
OSA-Express QDIO data connection isolation for the z/VM environment
Multi-tier security zones are fast becoming the network configuration standard for new workloads. Therefore, it is essential for workloads (servers and clients) hosted in a virtualized environment (shared resources) to be protected from intrusion or exposure of data and processes from other workloads.
Internal "routing" can be disabled on a per QDIO connection basis. This support does not affect the ability to share an OSA-Express port. Sharing occurs as it does today, but the ability to communicate between sharing QDIO data connections may be restricted through the use of this support. You decide whether an operating system's or z/VM's Virtual Switch OSA-Express QDIO connection is to be non-isolated (default) or isolated.
QDIO data connection isolation applies to the device statement defined at the operating system level. While an OSA-Express CHPID may be shared by an operating system, the data device is not shared.
With Queued Direct Input/Output (QDIO) data connection isolation you:
• Have the ability to adhere to security and HIPAA-security guidelines and regulations for network isolation between the operating system instances sharing physical network connectivity
• Can establish security zone boundaries that have been defined by your network administrators
• Have a mechanism to isolate a QDIO data connection (on an OSA port), ensuring all internal OSA routing between the isolated QDIO data connections and all other sharing QDIO data connections is disabled. In this state, only external communications to and from the isolated QDIO data connection are allowed. If you choose to deploy an external firewall to control the access between hosts on an isolated virtual switch and sharing LPARs, then an external firewall needs to be configured and each individual host and/or LPAR must have a route added to its TCP/IP stack to forward local traffic to the firewall.
QDIO data connection isolation applies to the z/VM 5.3 and 5.4 with PTFs environments and to all of the OSA-Express3 and OSA-Express2 features (CHPID type OSD) on System z10, and to the OSA-Express2 features on System z9.
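As an illustration of the isolation rule described above, the following Python sketch (hypothetical names; not an actual z/VM or OSA interface) models which flows an isolated QDIO data connection permits:

```python
# Illustrative model of QDIO data connection isolation on a shared OSA port.
# An isolated connection may talk to the external LAN, but all internal OSA
# routing between it and other sharing QDIO connections is disabled.

def flow_allowed(src, dst):
    # src/dst are dicts like {"name": "LINUX1", "isolated": True};
    # "EXTERNAL" represents hosts reached through the physical LAN.
    if src == "EXTERNAL" or dst == "EXTERNAL":
        return True                                 # external traffic is allowed
    # internal OSA routing is blocked if either sharing connection is isolated
    return not (src["isolated"] or dst["isolated"])

guest_a = {"name": "LINUX1", "isolated": True}      # isolated (non-default)
guest_b = {"name": "ZOS1", "isolated": False}       # non-isolated (default)

print(flow_allowed(guest_a, guest_b))     # False: internal routing disabled
print(flow_allowed(guest_a, "EXTERNAL"))  # True: external path still works
```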
Network Traffic Analyzer
With the large volume and complexity of today's network traffic, the z10 BC offers systems programmers and network administrators the ability to more easily solve network problems. With the introduction of the OSA-Express Network Traffic Analyzer and QDIO Diagnostic Synchronization on the System z and available on the z10 BC, customers will have the ability to capture trace/trap data and forward it to z/OS 1.8 tools for easier problem determination and resolution.
This function is designed to allow the operating system to control the sniffer trace for the LAN and capture the records into host memory and storage (file systems), using existing host operating system tools to format, edit, and process the sniffer records.
OSA-Express Network Traffic Analyzer is exclusive to the z10 BC, z9 BC, z10 EC, and z9 EC, is applicable to the OSA-Express3 and OSA-Express2 features when configured as CHPID type OSD (QDIO), and is supported by z/OS.
Dynamic LAN idle for z/OS
Dynamic LAN idle is designed to reduce latency and improve network performance by dynamically adjusting the inbound blocking algorithm. When enabled, the z/OS TCP/IP stack is designed to adjust the inbound blocking algorithm to best match the application requirements. For latency sensitive applications, the blocking algorithm is modified to be "latency sensitive." For streaming (throughput sensitive) applications, the blocking algorithm is adjusted to maximize throughput. The z/OS TCP/IP stack can dynamically detect the application requirements, making the necessary adjustments to the blocking algorithm. The monitoring of the application and the blocking algorithm adjustments are made in real-time, dynamically adjusting the application's LAN performance.
System administrators can authorize the z/OS TCP/IP stack to enable a dynamic setting, which was previously a static setting. The z/OS TCP/IP stack is able to help determine the best setting for the current running application, based on system configuration, inbound workload volume, CPU utilization, and traffic patterns.
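The following Python sketch illustrates the general idea of such an adaptive inbound blocking algorithm. It is a simplification under invented thresholds, not the actual z/OS TCP/IP heuristics:

```python
# Simplified illustration of dynamic inbound blocking: hold (block) the
# inbound queue longer for streaming workloads to batch packets per
# interrupt, and shorten the hold time for latency-sensitive workloads.
# Thresholds and values here are invented for the example.

def pick_blocking_time_us(pkts_per_sec, avg_pkt_bytes):
    streaming = pkts_per_sec > 10_000 and avg_pkt_bytes > 1_000
    if streaming:
        return 500   # longer hold: fewer interrupts, higher throughput
    return 20        # shorter hold: deliver packets quickly, lower latency

# The stack re-evaluates as traffic patterns change:
print(pick_blocking_time_us(50_000, 1_400))  # 500 -> throughput-oriented
print(pick_blocking_time_us(800, 200))       # 20  -> latency-oriented
```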
Link aggregation for z/VM in Layer 2 mode
z/VM Virtual Switch-controlled (VSWITCH-controlled) link aggregation (IEEE 802.3ad) allows you to dedicate an OSA-Express2 (or OSA-Express3) port to the z/VM operating system when the port is participating in an aggregated group when configured in Layer 2 mode. Link aggregation (trunking) is designed to allow you to combine multiple physical OSA-Express3 and OSA-Express2 ports (of the same type, for example 1 GbE or 10 GbE) into a single logical link for increased throughput and for nondisruptive failover in the event that a port becomes unavailable.
• Aggregated link viewed as one logical trunk and containing all of the Virtual LANs (VLANs) required by the LAN segment
• Load balance communications across several links in a trunk to prevent a single link from being overrun
• Link aggregation between a VSWITCH and the physical network switch
• Point-to-point connections
• Up to eight OSA-Express3 or OSA-Express2 ports in one aggregated link
• Ability to dynamically add/remove OSA ports for "on demand" bandwidth
• Full-duplex mode (send and receive)
• Target links for aggregation must be of the same type (for example, Gigabit Ethernet to Gigabit Ethernet)
The Open Systems Adapter/Support Facility (OSA/SF) will provide status information on an OSA port – its "shared" or "exclusive use" state. OSA/SF is an integrated component of z/VM.
Link aggregation is exclusive to System z10 and System z9, is applicable to the OSA-Express3 and OSA-Express2 features in Layer 2 mode when configured as CHPID type OSD (QDIO), and is supported by z/VM 5.3 and later.
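To illustrate the load-balancing idea behind an aggregated link, here is a hedged Python sketch. The hashing scheme is hypothetical, not the z/VM VSWITCH algorithm; it simply spreads flows across up to eight ports and fails over when a port becomes unavailable:

```python
# Illustrative flow-based load balancing across an aggregated group of
# OSA ports (up to eight per group). Hashing per flow keeps a given
# conversation on one port; removing a port redistributes its flows.
import zlib

class AggregatedLink:
    def __init__(self, ports):
        assert 1 <= len(ports) <= 8, "up to eight ports per aggregated link"
        self.ports = list(ports)

    def port_for(self, src_mac, dst_mac):
        # hash the flow identifiers so traffic spreads across members
        key = f"{src_mac}->{dst_mac}".encode()
        return self.ports[zlib.crc32(key) % len(self.ports)]

    def remove_port(self, port):
        # nondisruptive failover: surviving ports absorb the flows
        self.ports.remove(port)

trunk = AggregatedLink(["OSA1", "OSA2", "OSA3"])
print(trunk.port_for("02:00:00:00:00:01", "02:00:00:00:00:0a"))
trunk.remove_port("OSA2")   # port lost; affected flows re-hash to survivors
print(trunk.port_for("02:00:00:00:00:01", "02:00:00:00:00:0a"))
```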
Layer 2 transport mode: When would it be used?
If you have an environment with an abundance of Linux images in a guest LAN environment, or you need to define router guests to provide the connection between these guest LANs and the OSA-Express3 features, then using the Layer 2 transport mode may be the solution. If you have Internetwork Packet Exchange (IPX), NetBIOS, and SNA protocols, in addition to Internet Protocol Version 4 (IPv4) and IPv6, use of Layer 2 could provide "protocol independence."
The OSA-Express3 features have the capability to perform like Layer 2 type devices, providing the capability of being protocol- or Layer-3-independent (that is, not IP-only). With the Layer 2 interface, packet forwarding decisions are based upon Link Layer (Layer 2) information, instead of Network Layer (Layer 3) information. Each operating system attached to the Layer 2 interface uses its own MAC address. This means the traffic can be IPX, NetBIOS, SNA, IPv4, or IPv6. Layer 2 transport mode is supported by z/VM and Linux on System z.
An OSA-Express3 feature can filter inbound datagrams by Virtual Local Area Network identification (VLAN ID, IEEE 802.1q), and/or the Ethernet destination MAC address. Filtering can reduce the amount of inbound traffic being processed by the operating system, reducing CPU utilization.
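The following Python sketch illustrates that inbound filtering rule. The frame fields and registered values are invented for the example; the real filtering is performed on the OSA feature itself:

```python
# Illustrative inbound filter: accept a frame only if its VLAN ID and
# destination MAC match what this operating system image registered.

REGISTERED_MACS = {"02:00:00:00:00:01"}
REGISTERED_VLANS = {100, 200}

def accept_inbound(frame):
    if frame.get("vlan_id") not in REGISTERED_VLANS:
        return False                     # drop: VLAN not registered (802.1q)
    return frame["dest_mac"] in REGISTERED_MACS

print(accept_inbound({"vlan_id": 100, "dest_mac": "02:00:00:00:00:01"}))  # True
print(accept_inbound({"vlan_id": 300, "dest_mac": "02:00:00:00:00:01"}))  # False
```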
OSA Layer 3 Virtual MAC for z/OS
To simplify the infrastructure and to facilitate load balancing when an LPAR is sharing the same OSA Media Access Control (MAC) address with another LPAR, each operating system instance can now have its own unique "logical" or "virtual" MAC (VMAC) address. All IP addresses associated with a TCP/IP stack are accessible using their own VMAC address, instead of sharing the MAC address of an OSA port. This applies to Layer 3 mode and to an OSA port shared among Logical Channel Subsystems.
This support is designed to:
• Improve IP workload balancing
• Dedicate a Layer 3 VMAC to a single TCP/IP stack
• Remove the dependency on Generic Routing Encapsulation (GRE) tunnels
• Improve outbound routing
• Simplify configuration setup
• Allow WebSphere Application Server content-based routing to work with z/OS in an IPv6 network
• Allow z/OS to use a "standard" interface ID for IPv6 addresses
• Remove the need for PRIROUTER/SECROUTER function in z/OS
OSA Layer 3 VMAC for z/OS is exclusive to System z, and is applicable to OSA-Express3 and OSA-Express2 features when configured as CHPID type OSD (QDIO).
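To illustrate the idea, here is a hedged Python sketch contrasting a shared OSA MAC with per-stack virtual MACs. The addresses and stack names are invented; this is not z/OS Communications Server code:

```python
# Illustrative contrast: with a shared Layer 3 MAC, every stack sits behind
# the single OSA MAC and demultiplexing needs the IP address; with VMACs,
# each TCP/IP stack owns a unique MAC, so routers and load balancers can
# address a specific stack directly.

OSA_MAC = "02:00:00:aa:aa:aa"

# Two stacks behind one OSA port (invented addresses):
stack_for_ip = {"10.1.1.1": "ZOS1", "10.1.1.2": "ZOS2"}

# With Layer 3 VMAC: one unique virtual MAC per stack.
vmacs = {"ZOS1": "02:00:00:00:01:01", "ZOS2": "02:00:00:00:01:02"}

def arp_reply(ip):
    # the OSA answers ARP for each stack's IPs with that stack's own VMAC,
    # instead of the single shared OSA MAC
    return vmacs.get(stack_for_ip[ip], OSA_MAC)

print(arp_reply("10.1.1.1"))   # 02:00:00:00:01:01 -> frames go straight to ZOS1
print(arp_reply("10.1.1.2"))   # 02:00:00:00:01:02 -> no shared-MAC ambiguity
```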
Direct Memory Access (DMA)
OSA-Express3 and the operating systems share a common storage area for memory-to-memory communication, reducing system overhead and improving performance. There are no read or write channel programs for data exchange. For write processing, no I/O interrupts have to be handled. For read processing, the number of I/O interrupts is minimized.
Hardware data router
With OSA-Express3, much of what was previously done in firmware (packet construction, inspection, and routing) is now performed in hardware. This allows packets to flow directly from host memory to the LAN without firmware intervention.
With the hardware data router, the "store and forward" technique is no longer used, which enables true direct memory access, a direct host memory-to-LAN flow, returning CPU cycles for application use. This avoids a "hop" and is designed to reduce latency and to increase throughput for standard frames (1492 byte) and jumbo frames (8992 byte).
OSA-Express3 and OSA-Express2 OSN (OSA for NCP)
OSA-Express for Network Control Program (NCP), Channel path identifier (CHPID) type OSN, is now available for use with the OSA-Express3 GbE features as well as the OSA-Express3 1000BASE-T Ethernet features.
OSA-Express for NCP, supporting the channel data link control (CDLC) protocol, provides connectivity between System z operating systems and IBM Communication Controller for Linux (CCL). CCL allows you to keep your business data and applications on the mainframe operating systems while moving NCP functions to Linux on System z.
IBM Communication Controller for Linux (CCL)
CCL is designed to help eliminate hardware dependencies, such as 3745/3746 Communication Controllers, ESCON channels, and Token Ring LANs, by providing a software solution that allows the Network Control Program (NCP) to be run in Linux on System z, freeing up valuable data center floor space.
CCL provides a foundation to help enterprises simplify their network infrastructure while supporting traditional Systems Network Architecture (SNA) functions such as SNA Network Interconnect (SNI).
Communication Controller for Linux on System z (Program Number 5724-J38) is the solution for companies that want to help improve network availability by replacing Token-Ring networks and ESCON channels with an Ethernet network and integrated LAN adapters on System z10, OSA-Express3 or OSA-Express2 GbE or 1000BASE-T.
CCL helps preserve mission critical SNA functions, such as SNI, and the z/OS application workloads which depend upon these functions, allowing you to collapse SNA inside a z10 BC while exploiting and leveraging IP.
The OSA-Express3 and OSA-Express2 GbE and 1000BASE-T Ethernet features provide support for CCL. This support is designed to require no changes to operating systems (it does require a PTF to support CHPID type OSN) and also allows TPF to exploit CCL. It is supported by z/VM for Linux and z/TPF guest environments.
OSA-Express for NCP is supported in the z/OS, z/VM, z/VSE, TPF, z/TPF, and Linux on System z environments.
OSA Integrated Console Controller
The OSA-Express Integrated Console Controller (OSA-ICC) support is a no-charge function included in …

… communication over unrepeated distances of up to 10 km (6.2 miles) using 9 micron single mode fiber optic cables, and even greater distances with System z qualified optical networking solutions. ISC-3s are supported exclusively in peer mode (CHPID type CFP).
System z now supports 12x InfiniBand single data rate (12x IB-SDR) coupling link attachment between System z10 and System z9 general purpose servers (no longer limited to standalone coupling facility).
4) 12x InfiniBand coupling links (12x IB-SDR or 12x IB-DDR) offer an alternative to ISC-3 in the data center and facilitate coupling link consolidation; physical links can be shared by multiple systems or CF images on a single system. The 12x IB links support distances up to 150 meters (492 feet) using industry-standard OM3 50 micron fiber optic cables.
5) Long Reach 1x InfiniBand coupling links (1x IB-SDR or 1x IB-DDR) are an alternative to ISC-3 and offer greater distances with support for point-to-point unrepeated connections of up to 10 km (6.2 miles) using 9 micron single mode fiber optic cables. Greater distances can be supported with System z qualified optical networking solutions. Long reach 1x InfiniBand coupling links support the same sharing capability as the 12x InfiniBand version, allowing one physical link to be shared across multiple CF images on a system.
Note: The InfiniBand link data rates do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. Specifically, with 12x InfiniBand coupling links, while the link data rate can be higher than that of ICB, the service times of coupling operations are greater, and the actual throughput is less.
Refer to the Coupling Facility Configuration Options whitepaper for a more specific explanation of when to continue using the current ICB or ISC-3 technology versus migrating to InfiniBand coupling links. The whitepaper is available at: http://www.ibm.com/systems/z/advantages/pso/whitepaper.html.
z10 Coupling Link Options

Type   Description               Use         Link       Distance            z10 BC/
                                             data rate                      z10 EC Max
PSIFB  1x IB-DDR LR              z10 to z10  5 Gbps     10 km unrepeated    12*/32*
                                                        (6.2 miles)
                                                        100 km repeated
PSIFB  12x IB-DDR                z10 to z10  6 GBps     150 meters          12*/32*
                                 z10 to z9   3 GBps**   (492 ft)***
IC     Internal Coupling         Internal    Internal   N/A                 32/32
       Channel                   Communi-    Speeds
                                 cation
ICB-4  Copper connection         z10, z9     2 GBps     10 meters***        12/16
       between OS and CF         z990, z890             (33 ft)
ISC-3  Fiber connection          z10, z9     2 Gbps     10 km unrepeated    48/48
       between OS and CF         z990, z890             (6.2 miles)
                                                        100 km repeated

• The maximum number of Coupling Links combined cannot exceed 64 per server (PSIFB, ICB-4, ISC-3). There is a maximum of 64 Coupling CHPIDs (CIB, ICP, CBP, CFP) per server.
• For each MBA fanout installed for ICB-4s, the number of possible customer HCA fanouts is reduced by one.
* Each link supports definition of multiple CIB CHPIDs, up to 16 per fanout
** z10 negotiates to 3 GBps (12x IB-SDR) when connected to a System z9
*** 3 meters (10 feet) reserved for internal routing and strain relief
Note: The InfiniBand link data rates of 6 GBps, 3 GBps, 2.5 Gbps, or 5 Gbps do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. With InfiniBand coupling links, while the link data rate may be higher than that of ICB (12x IB-SDR or 12x IB-DDR) or ISC-3 (1x IB-SDR or 1x IB-DDR), the service times of coupling operations are greater, and the actual throughput may be less than with ICB links or ISC-3 links.
Time synchronization and time accuracy on z10 BC
If you require time synchronization across multiple servers (for example, you have a Parallel Sysplex environment), or you require time accuracy either for one or more System z servers or the same time across heterogeneous platforms (System z, UNIX, AIX®, etc.), you can meet these requirements by either installing a Sysplex Timer Model 2 (9037-002) or by implementing Server Time Protocol (STP).
The Sysplex Timer Model 2 is the centralized time source that sets the Time-Of-Day (TOD) clocks in all attached servers to maintain synchronization. The Sysplex Timer Model 2 provides the stepping signal that helps ensure that all TOD clocks in a multi-server environment increment in unison to permit full read or write data sharing with integrity. The Sysplex Timer Model 2 is a key component of an IBM Parallel Sysplex environment and a Geographically Dispersed Parallel Sysplex™ (GDPS®) availability solution for On Demand Business.
The z10 BC server requires the External Time Reference (ETR) feature to attach to a Sysplex Timer. The ETR feature is standard on the z10 BC and supports attachment at an unrepeated distance of up to three kilometers (1.86 miles) and a link data rate of 8 Megabits per second. The distance from the Sysplex Timer to the server can be extended to 100 km using qualified Dense Wavelength Division Multiplexers (DWDMs). However, the maximum repeated distance between Sysplex Timers is limited to 40 km.
Server Time Protocol (STP)
STP messages: STP is a message-based protocol in which timekeeping information is transmitted between servers over externally defined coupling links. ICB-4, ISC-3, and InfiniBand coupling links can be used to transport STP messages.
Server Time Protocol enhancements
STP configuration and time information restoration after Power on Resets (POR) or power outage: This enhancement delivers system management improvements by restoring the STP configuration and time information after Power on Resets (PORs) or a power failure that affects both servers of a two-server STP-only Coordinated Timing Network (CTN). To enable this function the customer has to select an option that will assure that no other servers can join the two-server CTN. Previously, if both the Preferred Time Server (PTS) and the Backup Time Server (BTS) experienced a simultaneous power outage (site failure), or both experienced a POR, reinitialization of time and special roles (PTS, BTS, and CTS) was required. With this enhancement, you will no longer need to reinitialize the time or reassign the roles for these events.
Preview – Improved STP System Management with new z/OS Messaging: This is a new function planned to generate z/OS messages when various hardware events occur that affect the External Time Sources (ETS) configured for an STP-only CTN. This may improve problem determination and correction times. Previously, the messages were generated only on the Hardware Management Console (HMC).
The ability to generate z/OS messages will be supported on IBM System z10 and System z9 servers with z/OS 1.11 (with enabling support rolled back to z/OS 1.9) in the second half of 2009.
The following Server Time Protocol (STP) enhancements are available on the z10 EC, z10 BC, z9 EC, and z9 BC. The prerequisites are that you install the STP feature and that the latest MCLs are installed for the applicable driver.
NTP client support: This enhancement addresses the requirements of customers who need to provide the same accurate time across heterogeneous platforms in an enterprise.
The STP design has been enhanced to include support for a Simple Network Time Protocol (SNTP) client on the Support Element. By configuring an NTP server as the STP External Time Source (ETS), the time of an STP-only Coordinated Timing Network (CTN) can track to the time provided by the NTP server, and maintain a time accuracy of 100 milliseconds.
Note: NTP client support has been available since October
2007.
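As a rough illustration of what "tracking to the time provided by the NTP server" means, this Python sketch periodically measures the offset to the ETS and steers the local time toward it. The values and the bounded-step scheme are invented for the example; they are not the Support Element implementation:

```python
# Toy illustration of time steering toward an external time source (ETS).
# Real STP steers the TOD clock gradually; numbers here are invented.

def steer(ctn_time, ets_time, max_step=0.050):
    offset = ets_time - ctn_time
    # apply a bounded correction each interval to avoid abrupt time jumps
    step = max(-max_step, min(max_step, offset))
    return ctn_time + step

t = 1000.000
for ets in (1000.080, 1000.160, 1000.240):   # ETS readings over 3 intervals
    t = steer(t, ets)
    print(round(t, 3))   # local time converges toward the NTP-provided time
```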
Enhanced accuracy to an External Time Source: The time accuracy of an STP-only CTN has been improved by adding the capability to configure an NTP server that has a pulse per second (PPS) output signal as the ETS device. This type of ETS device is available worldwide from several vendors that provide network timing solutions.
STP has been designed to track to the highly stable, accurate PPS signal from the NTP server, and maintain an accuracy of 10 microseconds as measured at the PPS input of the System z server. A number of variables, such as the accuracy of the NTP server to its time source (GPS or radio signals, for example) and the cable used to connect the PPS signal, will determine the ultimate accuracy of STP to Coordinated Universal Time (UTC).
In comparison, the IBM Sysplex Timer is designed to maintain an accuracy of 100 microseconds when attached to an ETS with a PPS output. If STP is configured to use a dial-out time service or an NTP server without PPS, it is designed to provide a time accuracy of 100 milliseconds to the ETS device.
For this enhancement, the NTP output of the NTP server has to be connected to the Support Element (SE) LAN, and the PPS output of the same NTP server has to be connected to the PPS input provided on the External Time Reference (ETR) card of the System z10 or System z9 server.
Continuous Availability of NTP servers used as External Time Source: Improved External Time Source (ETS) availability can now be provided if you configure different NTP servers for the Preferred Time Server (PTS) and the Backup Time Server (BTS). Only the PTS or the BTS can be the Current Time Server (CTS) in an STP-only CTN. Prior to this enhancement, only the CTS calculated the time adjustments necessary to maintain time accuracy. With this enhancement, if the PTS/CTS cannot access the NTP server or the pulse per second (PPS) signal from the NTP server, the BTS, if configured to a different NTP server, may be able to calculate the adjustment required and propagate it to the PTS/CTS. The PTS/CTS in turn will perform the necessary time adjustment steering.
This avoids a manual reconfiguration of the BTS to be the CTS if the PTS/CTS is not able to access its ETS. In an ETR network, when the primary Sysplex Timer is not able to access the ETS device, the secondary Sysplex Timer takes over the role of the primary – a recovery action not always accepted by some customers. The STP design provides continuous availability of ETS while maintaining the special roles of PTS and BTS as assigned by the customer.
The availability improvement is available when the ETS is configured as an NTP server or an NTP server using PPS.
NTP Server on Hardware Management Console: Improved security can be obtained by providing NTP server support on the HMC. If an NTP server (with or without PPS) is configured as the ETS device for STP, it needs to be attached directly to the Support Element (SE) LAN. The SE LAN is considered by many users to be a private dedicated LAN to be kept as isolated as possible from the intranet or Internet.
Since the HMC is normally attached to the SE LAN, providing an NTP server capability on the HMC addresses the potential security concerns most users may have for attaching NTP servers to the SE LAN. The HMC, via a separate LAN connection, can access an NTP server available either on the intranet or Internet for its time source. Note that when using the HMC as the NTP server, there is no pulse per second capability available. Therefore, you should not configure the ETS to be an NTP server using PPS.
Enhanced STP recovery when Internal Battery Feature is in use: Improved availability can be obtained when power has failed for a single server (PTS/CTS), or when there is a site power outage in a multi-site configuration where the PTS/CTS is installed (the site with the BTS is a different site not affected by the power outage).
If an Internal Battery Feature (IBF) is installed on your System z server, STP now has the capability of receiving notification that customer power has failed and that the IBF is engaged. When STP receives this notification from a server that has the role of the PTS/CTS, STP can automatically reassign the role of the CTS to the BTS, thus automating the recovery action and improving availability.
STP configuration and time information saved across Power on Resets (POR) or power outages: This enhancement delivers system management improvements by saving the STP configuration across PORs and power failures for a single server STP-only CTN. Previously, if the server was PORed or experienced a power outage, the time and the assignment of the PTS and CTS roles would have to be reinitialized. You will no longer need to reinitialize the time or reassign the role of PTS/CTS across POR or power outage events.
Note that this enhancement is also available on the z990 and z890 servers.
Application Programming Interface (API) to automate STP CTN reconfiguration: The concept of "a pair and a spare" has been around since the original Sysplex Couple Data Sets (CDSs). If the primary CDS becomes unavailable, the backup CDS would take over. Many sites have had automation routines bring a new backup CDS online to avoid a single point of failure. This idea is being extended to STP. With this enhancement, if the PTS fails and the BTS takes over as CTS, an API is now available on the HMC so you can automate the reassignment of the PTS, BTS, and Arbiter roles. This can improve availability by avoiding a single point of failure after the BTS has taken over as the CTS.
Prior to this enhancement, the PTS, BTS, and Arbiter roles had to be reassigned manually using the System (Sysplex) Time task on the HMC.
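The following Python sketch illustrates the kind of automation routine this enables. The HMC object and its methods are stand-ins invented for the example; the actual interfaces are documented in System z Application Programming Interfaces, SB10-7030:

```python
# Hypothetical sketch of automating STP role reassignment after the BTS
# has taken over as CTS following a PTS failure.

class FakeHMC:
    def __init__(self, servers, roles):
        self.servers = servers          # e.g. ["CEC1", "CEC2", "CEC3"]
        self.roles = roles              # {"pts": ..., "bts": ..., "arbiter": ...}

    def reassign_after_failover(self, failed):
        r = dict(self.roles)
        if failed == r["pts"]:                      # BTS has taken over as CTS
            r["pts"] = r["bts"]                     # make the takeover official
            spare = next(s for s in self.servers
                         if s != r["pts"] and s != failed)
            r["bts"] = spare                        # restore a backup role
            if r["arbiter"] == spare:
                r["arbiter"] = None                 # arbiter must be reassigned
        self.roles = r
        return r

hmc = FakeHMC(["CEC1", "CEC2", "CEC3"],
              {"pts": "CEC1", "bts": "CEC2", "arbiter": "CEC3"})
print(hmc.reassign_after_failover(failed="CEC1"))
```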
For additional details on the API, please refer to System z
Application Programming Interfaces, SB10-7030-11.
Additional information is available on the STP Web page:
http://www.ibm.com/systems/z/pso/stp.html.
The following Redbooks are available on the Redbooks
Web site: http://www.redbooks.ibm.com/.
• Server Time Protocol Planning Guide, SG24-7280
• Server Time Protocol Implementation Guide, SG24-7281
Internal Battery Feature Recommendation
Single data center
• CTN with 2 servers: install IBF on at least the PTS/CTS
– Also recommend IBF on the BTS to provide recovery protection when the BTS is the CTS
• CTN with 3 or more servers: IBF not required for STP recovery, if an Arbiter is configured
Two data centers
• CTN with 2 servers (one in each data center): install IBF on at least the PTS/CTS
– Also recommend IBF on the BTS to provide recovery protection when the BTS is the CTS
• CTN with 3 or more servers: install IBF on at least the PTS/CTS
– Also recommend IBF on the BTS to provide recovery protection when the BTS is the CTS
Message Time Ordering (Sysplex Timer Connectivity to Coupling Facilities)
As processor and Coupling Facility link technologies have improved, the requirement for time synchronization tolerance between systems in a Parallel Sysplex environment has become ever more rigorous. In order to enable any exchange of timestamped information between systems in a sysplex involving the Coupling Facility to observe the correct time ordering, time stamps are now included in the message-transfer protocol between the systems and the Coupling Facility. Therefore, when a Coupling Facility is configured on any System z10 or System z9, the Coupling Facility will require connectivity to the same 9037 Sysplex Timer or Server Time Protocol (STP) configured Coordinated Timing Network (CTN) that the systems in its Parallel Sysplex cluster are using for time synchronization. If the ICF is on the same server as a member of its Parallel Sysplex environment, no additional connectivity is required, since the server already has connectivity to the Sysplex Timer.
However, when an ICF is configured on any z10 which does not host any systems in the same Parallel Sysplex cluster, it is necessary to attach the server to the 9037 Sysplex Timer or implement STP.
HMC System Support
The new functions available on the Hardware Management
Console (HMC) version 2.10.1 as described apply exclu-
sively to System z10. However, the HMC version 2.10.1 will
continue to support the systems as shown.
The 2.10.1 HMC will continue to support up to two 10/100
Mbps Ethernet LANs. Token Ring LANs are not supported.
The 2.10.1 HMC applications have been updated to support HMC hardware without a diskette drive. DVD-RAM, CD-ROM, and/or USB flash memory drive media will be used.
Family Machine Type Firmware Driver SE Version
z10 BC 2098 76 2.10.1
z10 EC 2097 73 2.10.0
z9 BC 2096 67 2.9.2
z9 EC 2094 67 2.9.2
z890 2086 55 1.8.2
z990 2084 55 1.8.2
z800 2066 3G 1.7.3
z900 2064 3G 1.7.3
9672 G6 9672/9674 26 1.6.2
9672 G5 9672/9674 26 1.6.2
Internet Protocol, Version 6 (IPv6)
HMC version 2.10.1 and Support Element (SE) version
2.10.1 can now communicate using IP Version 4 (IPv4),
IP Version 6 (IPv6), or both. It is no longer necessary to
assign a static IP address to an SE if it only needs to com-
municate with HMCs on the same subnet. An HMC and
SE can use IPv6 link-local addresses to communicate with
each other.
HMC/SE support is addressing the following requirements:
• The availability of addresses in the IPv4 address space is becoming increasingly scarce.
• The demand for IPv6 support is high in Asia/Pacific countries since many companies are deploying IPv6.
• The U.S. Department of Defense and other U.S. government agencies are requiring IPv6 support for any products purchased after June 2008.
More information on the U.S. government requirements …
1) Minimum of one I/O feature (ESCON, FICON) or Coupling Link (PSIFB, ICB-4, ISC-3) required.
2) The maximum number of external Coupling Links combined cannot exceed 56 per server. There is a maximum of 64 coupling link CHPIDs per server (ICs, ICB-4s, active ISC-3 links, and IFBs).
3) ICB-4 and 12x IB-DDR are not included in the maximum feature count for I/O slots but are included in the CHPID count.
4) Initial order of Crypto Express2 is 2/4 PCI-X adapters (two features). Each PCI-X adapter can be configured as a coprocessor or an accelerator.
* FICON Express4-2C 4KM LX has two channels per feature, OSA-Express3 GbE and 1000BASE-T have 2 and 4 port options, and Crypto Express2-1P has 1 coprocessor
** Available only when carried forward on an upgrade from z890 or z9 BC. Limited availability for OSA-Express2 GbE features.
z10 BC Concurrent PU Conversions
• Must order (characterize one PU as) a CP, an ICF or an IFL
• Concurrent model upgrade is supported
• Concurrent processor upgrade is supported if PUs are available
– Add CP, IFL, unassigned IFL, ICF, zAAP, zIIP or optional SAP
• PU Conversions
– Standard SAP cannot be converted to other PU types

From/To         CP   IFL  Unassigned  ICF  zAAP  zIIP  Optional
                          IFL                          SAP
CP              X    Yes  Yes         Yes  Yes   Yes   Yes
IFL             Yes  X    Yes         Yes  Yes   Yes   Yes
Unassigned IFL  Yes  Yes  X           Yes  Yes   Yes   Yes
ICF             Yes  Yes  Yes         X    Yes   Yes   Yes
zAAP            Yes  Yes  Yes         Yes  X     Yes   Yes
zIIP            Yes  Yes  Yes         Yes  Yes   X     Yes
Optional SAP    Yes  Yes  Yes         Yes  Yes   Yes   X

Exceptions: Disruptive if ALL current PUs are converted to different types; may require individual LPAR disruption if dedicated PUs are converted.
z10 BC Model Structure
The z10 BC has a single model, the E10, with 10 PUs available for characterization: CP/IFL/ICF/zAAP/zIIP** maximums of 5/10/10/5/5, up to 248 GB of memory, and up to 480 channels.*
* Max is for ESCON channels.
** For each zAAP and/or zIIP installed there must be a corresponding CP. The CP may satisfy the requirement for both the zAAP and/or zIIP. The combined number of zAAPs and/or zIIPs cannot be more than 2x the number of general purpose processors (CPs).
z10 BC memory (Model E10): minimum 4 GB, maximum 248 GB. Memory DIMM sizes: 2 GB and 4 GB. (Fixed HSA not included; up to 248 GB for customer use June 30, 2009.)
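The zAAP/zIIP rule above lends itself to a simple validation check. The following Python sketch is a hypothetical helper illustrating the stated constraints, not an IBM configuration tool:

```python
# Sketch: validate a z10 BC Model E10 characterization against the rules
# stated above: each zAAP and each zIIP needs a corresponding CP (one CP
# may back both), so the combined zAAP + zIIP count cannot exceed 2x CPs,
# and the E10 offers 10 configurable PUs in total.

def valid_e10_config(cps, zaaps, ziips, others=0):
    return (zaaps <= cps                             # one CP per zAAP
            and ziips <= cps                         # one CP per zIIP
            and zaaps + ziips <= 2 * cps             # combined 2x-CP ceiling
            and cps + zaaps + ziips + others <= 10)  # 10 configurable PUs

print(valid_e10_config(cps=2, zaaps=2, ziips=2))   # True
print(valid_e10_config(cps=1, zaaps=2, ziips=0))   # False: zAAPs > CPs
```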
• 1x PSIFBs support single data rate (SDR) at 2.5 Gbps when connected to a DWDM capable of SDR speed, and double data rate (DDR) at 5 Gbps when connected to a DWDM capable of DDR speed
• System z9 does NOT support 1x IB-DDR or SDR InfiniBand Coupling Links
*Note: The InfiniBand link data rate of 6 GBps, 3 GBps or 5 Gbps does not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. With InfiniBand coupling links, while the link data rate may be higher than that of ICB, the service times of coupling operations are greater, and the actual throughput may be less than with ICB links.
Coupling Facility – CF Level of Support

CF Level  Function                                               z10 EC   z9 EC   z990
                                                                 z10 BC   z9 BC   z890
16        CF Duplexing Enhancements                              X
          List Notification Improvements
          Structure Size increment increase from 512 MB to 1 MB
15        Increasing the allowable tasks in the CF               X        X
          from 48 to 112
14        CFCC Dispatcher Enhancements                           X        X
13        DB2 Castout Performance                                X        X
12        z990 Compatibility, 64-bit CFCC Addressability,        X        X
          Message Time Ordering, DB2 Performance,
          SM Duplexing Support for zSeries
11        z990 Compatibility, SM Duplexing Support                        X       X
          for 9672 G5/G6/R06
10        z900 GA2 Level                                                  X       X
9         Intelligent Resource Director, IC3/ICB3/ISC3                    X       X
          Peer Mode, MQSeries® Shared Queues,
          WLM Multi-System Enclaves

Note: zSeries 900/800 and prior generation servers are not supported with System z10 for Coupling Facility or Parallel Sysplex levels.
Statement of Direction
IBM intends to support optional water cooling on future high end System z servers. This cooling technology will tap into building chilled water that already exists within the datacenter for computer room air conditioning systems. External chillers or special water conditioning will not be required. Water cooling technology for high end System z servers will be designed to deliver improved energy efficiencies.
IBM intends to support the ability to operate from High Voltage DC power on future System z servers. This will be in addition to the wide range of AC power already supported. A direct HV DC datacenter power design can improve data center energy efficiency by removing the need for an additional DC to AC inversion step.
The System z10 will be the last server to support Dynamic
ICF expansion. This is consistent with the System z9 hard-
ware announcement 107-190 dated April 18, 2007, IBM
System z9 Enterprise Class (z9 EC) and System z9 Busi-
ness Class (z9 BC) – Delivering greater value for every-
one, in which the following Statement of Direction was
made: IBM intends to remove the Dynamic ICF expansion
function from future System z servers.
The System z10 will be the last server to support connec-
tions to the Sysplex Timer (9037). Servers that require time
synchronization, such as to support a base or Parallel Sys-
plex, will require Server Time Protocol (STP). STP has been
available since January 2007 and is offered on the System
z10, System z9, and zSeries 990 and 890 servers.
ESCON channels to be phased out: It is IBM’s intent for
ESCON channels to be phased out. System z10 EC and
System z10 BC will be the last servers to support greater
than 240 ESCON channels.
ICB-4 links to be phased out: (Restatement of SOD from RFA46507) IBM intends to not offer Integrated Cluster Bus-4 (ICB-4) links on future servers. IBM intends for System z10 to be the last server to support ICB-4 links.
Publications
The following Redbook publications are available now:
z10 BC Technical Overview SG24-7632
z10 BC Technical Guide SG24-7516
System z Connectivity Handbook SG24-5444
Server Time Protocol Planning Guide SG24-7280
Server Time Protocol Implementation Guide SG24-7281
The following publications are shipped with the product and available in the Library section of Resource Link:
z10 BC Installation Manual GC28-6874
z10 BC Service Guide GC28-6878
z10 BC Safety Inspection Guide GC28-6877
System Safety Notices G229-9054
The following publications are available in the Library section of Resource Link:
Agreement for Licensed Machine Code SC28-6872
Application Programming Interfaces for Java API-JAVA
Application Programming Interfaces SB10-7030
Capacity on Demand User's Guide SC28-6871
CHPID Mapping Tool User's Guide GC28-6825
Common Information Model (CIM) Management Interface SB10-7154
IBM Systems and Technology Group
Route 100
Somers, NY 10589
U.S.A.
Produced in the United States of America,
04-09
All Rights Reserved
References in this publication to IBM products or services do not imply
that IBM intends to make them available in every country in which IBM
operates. Consult your local IBM business contact for information on the
products, features, and services available in your area.
IBM, IBM eServer, the IBM logo, the e-business logo, AIX, APPN, CICS, Cognos, Cool Blue, DB2, DRDA, DS8000, Dynamic Infrastructure, ECKD, ESCON, FICON, Geographically Dispersed Parallel Sysplex, GDPS, HiperSockets, HyperSwap, IMS, Lotus, MQSeries, MVS, OS/390, Parallel Sysplex, PR/SM, Processor Resource/Systems Manager, RACF, Rational, Redbooks, Resource Link, RETAIN, REXX, RMF, Scalable Architecture for Financial Reporting, Sysplex Timer, Systems Director Active Energy Manager, System Storage, System z, System z9, System z10, Tivoli, TotalStorage, VSE/ESA, VTAM, WebSphere, z9, z10, z10 BC, z10 EC, z/Architecture, z/OS, z/VM, z/VSE, and zSeries are trademarks or registered trademarks of the International Business Machines Corporation in the United States and other countries.
InfiniBand is a trademark and service mark of the InfiniBand Trade Association.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States or other
countries.
Linux is a registered trademark of Linus Torvalds in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation in the United States, other countries, or both.
Intel is a trademark of the Intel Corporation in the United States and other
countries.
Other trademarks and registered trademarks are the properties of their
respective companies.
IBM hardware products are manufactured from new parts, or new and
used parts. Regardless, our warranty terms apply.
Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.
All performance information was determined in a controlled environment.
Actual results may vary. Performance information is provided “AS IS” and
no warranties or guarantees are expressed or implied by IBM.
Photographs shown are of engineering prototypes. Changes may be
incorporated in production models.
This equipment is subject to all applicable FCC rules and will comply with
them upon delivery.
Information concerning non-IBM products was obtained from the suppliers of those products. Questions concerning those products should be
directed to those suppliers.
All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.
ZSO03021-USEN-02