
IBM System z10 Enterprise Class (z10 EC)

Reference Guide

April 2009

Table of Contents

IBM System z10 Enterprise Class (z10 EC) Overview
z/Architecture
z10 EC
z10 EC Design and Technology
z10 EC Model
z10 EC Performance
z10 EC I/O Subsystem
z10 EC Channels and I/O Connectivity
HiperSockets
Security
Cryptography
On Demand Capabilities
Reliability, Availability, and Serviceability (RAS)
Availability Functions
Environmental Enhancements
Parallel Sysplex Cluster Technology
HMC System Support
Implementation Services for Parallel Sysplex
Fiber Quick Connect for FICON LX Environments
z10 EC Physical Characteristics
z10 EC Configuration Detail
Coupling Facility – CF Level of Support
Statement of Direction
Publications


IBM System z10 Enterprise Class (z10 EC) Overview

The IBM System z10 Enterprise Class (z10 EC) server is designed to meet the challenges of today’s business world and to be the cornerstone of an evolutionary new model for efficient IT delivery called the Dynamic Infrastructure®. This model helps reset the economics of IT and can dramatically improve operational efficiency, security, and responsiveness – to help keep a business competitive.

The z10 EC, with its advanced combination of reliability, availability, serviceability, security, scalability, and virtualization, delivers the technology that can help define this framework for the future. The z10 EC delivers improvements to performance, capacity, and memory which can help enterprises grow their existing business while providing a cost-effective infrastructure for large-scale consolidation.

The October 2008 announcements extend the z10 EC leadership with improved access to data and the network; tighter security with longer Personal Account Numbers for stronger protection of data; enhancements for improved performance when connecting to the network; increased flexibility in defining your options to handle backup requirements; and enhanced time accuracy to an external time source.

Any successful business needs to be able to deliver timely, integrated information to business leaders, support personnel, and customers on a 24x7 basis. This means that access to data needs to be fast, secure, and dependable. Enhancements made to z/Architecture® and the FICON® interface architecture with the High Performance FICON for System z (zHPF) are optimized for online transaction processing (OLTP) workloads. The FICON Express4 and FICON Express2 features support the native FICON protocol and the zHPF protocol.

The System z10 was introduced with a new connectivity option for LANs – Open Systems Adapter-Express3 (OSA-Express3). The OSA-Express3 features provide improved performance by reducing latency at the TCP/IP application. Direct access to the memory allows packets to flow directly from the memory to the LAN without firmware intervention in the adapter.

An IT system needs to be available and protected every day. The z10 EC offers availability enhancements which include faster service time for CF Duplexing, updates to Server Time Protocol (STP) for enhanced time accuracy to an External Time Source, and support for heterogeneous platforms in an enterprise to track to the same time source. Security enhancements to the Crypto Express2 feature deliver support for 13-, 14-, 15-, 16-, 17-, 18-, and 19-digit Personal Account Numbers for stronger protection of data.

The z10 EC has a new architectural approach for temporary offerings that have the potential to change the thinking about on demand capacity. The z10 EC can have one or more flexible configuration definitions that can be available to solve multiple temporary situations and multiple capacity configurations that can be active at once. This means that On/Off Capacity on Demand (CoD) can be active and up to seven other offerings can be active simultaneously. Tokens are available that can be purchased for On/Off CoD either before or after execution.

Updates to the z10 EC are designed to help improve IT today, outline a compelling case for the future running on System z, and lock in the z10 EC as the cornerstone in your Dynamic Infrastructure by delivering superior business and IT services with agility and speed.


Just-in-time deployment of IT resources

Infrastructures must be more flexible in responding to changing capacity requirements and provide users with just-in-time deployment of resources. Having the 16 GB dedicated HSA on the z10 EC means that some preplanning configuration changes and associated outages may be avoided. IBM Capacity Upgrade on Demand (CUoD) provides a permanent increase in processing capacity that can be initiated by the customer.

IBM On/Off Capacity on Demand (On/Off CoD) provides temporary capacity needed for short-term spikes in capacity or for testing new applications. Capacity Backup Upgrade (CBU) can help provide reserved emergency backup capacity for all processor configurations.

An additional temporary capacity offering on the z10 EC is Capacity for Planned Events (CPE), a variation on CBU. If unallocated capacity is available in a server, it will allow the maximum capacity available to be used for planned events such as planned maintenance in a data center.

By having flexible and dynamic configuration definitions, when capacity is needed, activation of any portion of an offering can be done (for example activation of just two CBUs out of a definition that has four CBUs is acceptable). And if the definition doesn’t have enough resources defined, an order can easily be processed to increase the capacity (so if four CBUs aren’t enough it can be redefined to be six CBUs) as long as enough server infrastructure is available to meet maximum needs.

All activations can be done without having to interact with IBM—when it is determined that capacity is required, no passwords or phone connections are necessary. As long as the total z10 EC can support the maximums that are defined, then they can be made available.

With the z10 EC, it is now possible to add permanent capacity while a temporary capacity is currently activated, without having to return first to the original configuration.

The activation of On/Off CoD on z10 EC can be simplified or automated by using z/OS Capacity Provisioning (available with z/OS® 1.9 and above). This capability enables the monitoring of multiple systems based on Capacity Provisioning and Workload Manager (WLM) definitions. When the defined conditions are met, z/OS can suggest capacity changes for manual activation from a z/OS console, or the system can add or remove temporary capacity automatically and without operator intervention.
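As a rough illustration of the monitoring-and-threshold idea described above, the following Java sketch shows one way such a policy check could be expressed. It is a conceptual sketch only, not the z/OS Capacity Provisioning Manager or its API; the class name, threshold, and sustained-interval rule are assumptions made for the example.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Conceptual sketch of a provisioning rule: suggest temporary capacity only
    // after utilization has stayed above a threshold for a sustained window.
    public class CapacityPolicySketch {
        private final double threshold;          // e.g. 0.90 = 90% CPU busy
        private final int sustainedIntervals;    // monitoring intervals required
        private final Deque<Double> window = new ArrayDeque<>();

        public CapacityPolicySketch(double threshold, int sustainedIntervals) {
            this.threshold = threshold;
            this.sustainedIntervals = sustainedIntervals;
        }

        // Feed one interval's observed CPU utilization (0.0 to 1.0).
        public boolean shouldAddTemporaryCapacity(double utilization) {
            window.addLast(utilization);
            if (window.size() > sustainedIntervals) {
                window.removeFirst();
            }
            return window.size() == sustainedIntervals
                    && window.stream().allMatch(u -> u >= threshold);
        }
    }

In manual mode the real facility would only surface a suggestion at the z/OS console; in automatic mode the capacity change would be initiated without operator intervention, mirroring the two behaviors described above.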

Specialty engines offer an attractive alternative

The z10 EC continues the long history of providing integrated technologies to optimize a variety of workloads. The use of specialty engines can help users expand the use of the mainframe for new workloads, while helping to lower the cost of ownership. The IBM System z® specialty engines can run independently or complement each other. For example, the zAAP and zIIP processors enable you to purchase additional processing capacity exclusively for specific workloads, without affecting the MSU rating of the IBM System z model designation. This means that adding a specialty engine will not cause increased charges for IBM System z software running on general purpose processors in the server.


In order of introduction:

The Internal Coupling Facility (ICF) processor was introduced to help cut the cost of Coupling Facility functions by reducing the need for an external Coupling Facility.

IBM System z Parallel Sysplex® technology allows for greater scalability and availability by coupling mainframes together. Using Parallel Sysplex clustering, System z servers are designed for up to 99.999% availability.

The Integrated Facility for Linux (IFL) processor offers support for Linux® and brings a wealth of available applications that can be run in a real or virtual environment on the z10 EC. An example is the z/VSE strategy, which supports integration between the IFL, z/VSE and Linux on System z to help customers integrate timely production of z/VSE data into new Linux applications, such as data warehouse environments built upon a DB2® data server. To consolidate distributed servers onto System z, the IFL with Linux and the System z virtualization technologies fulfill the qualifications for business-critical workloads as well as for infrastructure workloads. For customers interested in using a z10 EC only for Linux workloads, the z10 EC can be configured as a server with IFLs only.

Available on System z since 2004, the System z10 Application Assist Processor (zAAP) is designed to help enable strategic integration of new application technologies such as Java technology-based Web applications and XML-based data interchange services with core business database environments. This helps provide a more cost-effective, specialized z/OS application Java execution environment. Workloads eligible for the zAAP (with z/OS V1.8) include all Java processed via the IBM Solution Developers Kit (SDK) and XML processed locally via z/OS XML System Services.

The System z10 Integrated Information Processor (zIIP) is designed to support select data and transaction processing and network workloads and thereby make the consolidation of these workloads onto the System z platform more cost effective. Workloads eligible for the zIIP (with z/OS V1.7 or later) include remote connectivity to DB2 to help support these workloads: Business Intelligence (BI), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM) and Extensible Markup Language (XML) applications. In addition to supporting remote connectivity to DB2 (via DRDA® over TCP/IP) the zIIP also supports DB2 long running parallel queries—a workload integral to Business Intelligence and Data Warehousing solutions. The zIIP (with z/OS V1.8) also supports IPSec processing, making the zIIP an IPSec encryption engine helpful in creating highly secure connections in an enterprise. In addition, the zIIP (with z/OS V1.10) supports select z/OS Global Mirror (formerly called Extended Remote Copy, XRC) disk copy service functions. z/OS V1.10 also introduces zIIP-Assisted HiperSockets for large messages (available on System z10 servers only).

The new capability provided with z/VM®-mode partitions increases flexibility and simplifies systems management by allowing z/VM 5.4 to manage guests to operate Linux on System z on IFLs, to operate z/VSE and z/OS on CPs, to offload z/OS system software overhead, such as DB2 workloads on zIIPs, and to offer an economical Java execution environment under z/OS on zAAPs, all in the same z/VM LPAR.

Numerical computing on the chip

Integrated on the z10 EC processor unit is a Hardware Decimal Floating Point unit to accelerate decimal floating point transactions. This function is designed to markedly improve performance for decimal floating point operations, which offer increased precision compared to binary floating point operations. This is expected to be particularly useful for the calculations involved in many financial transactions.

Decimal calculations are often used in financial applications and those done using other floating point facilities have typically been performed by software through the use of libraries. With a hardware decimal floating point unit, some of these calculations may be done directly and accelerated.
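As a simple illustration of the rounding issue the hardware unit addresses, the Java sketch below contrasts binary floating point with decimal arithmetic for a repeated monetary increment. BigDecimal performs decimal arithmetic in software; whether a particular compiler or runtime exploits the z10 EC hardware decimal floating point unit depends on the language environment, so this example only demonstrates the numerical behavior, not the hardware path.

    import java.math.BigDecimal;

    public class DecimalVsBinary {
        public static void main(String[] args) {
            // Binary floating point: 0.10 has no exact binary representation,
            // so ten additions drift away from the expected 1.00.
            double binarySum = 0.0;
            for (int i = 0; i < 10; i++) {
                binarySum += 0.10;
            }
            System.out.println(binarySum);      // 0.9999999999999999

            // Decimal arithmetic: the same calculation is exact.
            BigDecimal decimalSum = BigDecimal.ZERO;
            BigDecimal increment = new BigDecimal("0.10");
            for (int i = 0; i < 10; i++) {
                decimalSum = decimalSum.add(increment);
            }
            System.out.println(decimalSum);     // 1.00
        }
    }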

Liberating your assets with System z

Enterprises have millions of dollars worth of mainframe assets and core business applications that support the heart of the business. The convergence of service oriented architecture (SOA) and mainframe technologies can help liberate these core business assets by making it easier to enrich, modernize, extend and reuse them well beyond their original scope of design. The z10 EC, along with the inherent strengths and capabilities of a z/OS environment, provides an excellent platform for being an enterprise hub. Innovative System z software solutions from WebSphere®, CICS®, Rational® and Lotus® strengthen the flexibility of doing SOA.

Evolving for your business

The z10 EC is the next step in the evolution of the System z mainframe, fulfilling our promise to deliver technology improvements in areas where the mainframe excels: energy efficiency, scalability, virtualization, security and availability. The redesigned processor chip helps the z10 EC make high performance compute-intensive processing a reality. Flexibility and control over capacity gives IT an edge over planned or unforeseen demands. And new technologies can benefit from the inherent strengths of the mainframe. This evolving technology delivers a compelling case for the future to run on System z.

z/Architecture

The z10 EC continues the line of upward compatible mainframe processors and retains application compatibility since 1964. The z10 EC supports all z/Architecture-compliant operating systems. The heart of the processor unit is the Enterprise Quad Core z10 Processor Unit chip, which is specifically designed and optimized for mainframe systems. New features enhance enterprise data serving performance as well as CPU-intensive workloads.

The z10 EC, like its predecessors, supports 24-, 31-, and 64-bit addressing, as well as multiple arithmetic formats. High-performance logical partitioning via Processor Resource/Systems Manager (PR/SM) is achieved by industry-leading virtualization support provided by z/VM.

z10 EC Architecture

Rich CISC Instruction Set Architecture (ISA)

894 instructions (668 implemented entirely in hardware)

Multiple address spaces for robust inter-process security

Multiple arithmetic formats

Architectural extensions for z10 EC

50+ instructions added to z10 EC to improve compiled code efficiency

Enablement for software/hardware cache optimization

Support for 1 MB page frames

Full hardware support for Hardware Decimal Floating-point Unit (HDFU)

z/Architecture operating system support

Delivering the technologies required to address today’s IT challenges also takes much more than just a server; it requires all of the system elements to be working together.

IBM System z10 operating systems and servers are designed with a collaborative approach to exploit each other’s strengths.


The z10 EC is also able to exploit numerous operating systems concurrently on a single server; these include z/OS, z/VM, z/VSE, z/TPF, TPF and Linux on System z. These operating systems are designed to support existing application investments without anticipated change and help you realize the benefits of the z10 EC. System z10 – the new business equation.

z/OS

On August 5, 2008, IBM announced z/OS V1.10. This release of the z/OS operating system builds on leadership capabilities, enhances time-tested technologies, and leverages deep synergies with the IBM System z10 and IBM System Storage family of products. z/OS V1.10 supports new capabilities designed to provide:

Storage scalability. Extended Address Volumes (EAVs) enable you to define volumes as large as 223 GB to relieve storage constraints and help you simplify storage management by providing the ability to manage fewer, large volumes as opposed to many small volumes.

Application and data serving scalability. Up to 64 engines, up to 1.5 TB per server with up to 1.0 TB of real memory per LPAR, and support for large (1 MB) pages on the System z10 can help provide scale and performance for your critical workloads.

Intelligent and optimized dispatching of workloads. HiperDispatch can help provide increased scalability and performance of higher n-way z10 EC systems by improving the way workload is dispatched within the server.

Low-cost, high-availability disk solution. The Basic HyperSwap capability (enabled by TotalStorage® Productivity Center for Replication Basic Edition for System z) provides a low-cost, single-site, high-availability disk solution which allows the configuration of disk replication services using an intuitive browser-based graphical user interface (GUI) served from z/OS.

Improved total cost of ownership. zIIP-Assisted HiperSockets for Large Messages, IBM Scalable Architecture for Financial Reporting enabled for zIIP (a service offering of IBM Global Business Services), zIIP-Assisted z/OS Global Mirror (XRC), and additional z/OS XML System Services exploitation of zIIP and zAAP help make these workloads more attractive on System z.

Improved management of temporary processor capacity. A Capacity Provisioning Manager, which is available on z/OS V1.10, and available on z/OS V1.9 with PTFs, can monitor z/OS systems on z10 EC servers. Activation and deactivation of temporary capacity can be suggested or performed automatically based on user-defined schedules and workload criteria. RMF or equivalent function is required to use the Capacity Provisioning Manager.

Improved network security. z/OS Communications Server introduces new defensive filtering capability. Defensive filters are evaluated ahead of configured IP filters, and can be created dynamically, which can provide added protection and minimal disruption of services in the event of an attack.

z/OS V1.10 also supports RSA key, ISO Format-3 PIN block, 13-Digit through 19-Digit PAN data, secure key AES, and SHA algorithms.

Improved productivity. z/OS V1.10 provides improvements in or new capabilities for: simplifying diagnosis and problem determination; expanded Health Check Services; network and security management; automatic dump and re-IPL capability; as well as overall z/OS, I/O configuration, sysplex, and storage operations.

With z/OS 1.9, IBM delivers functionality that continues to solidify System z leadership as the premier data server. z/OS 1.9 offers enhancements in the areas of security, networking, scalability, availability, application development, integration, and improved economics with more exploitation for specialty engines. A foundational element of the platform is the tight interaction of z/OS with the System z hardware and its high level of system integrity.


With z/OS 1.9, IBM introduces:

A revised and expanded Statement of z/OS System Integrity

Large Page Support (1 MB)

Capacity Provisioning

Support for up to 64 engines in a single image (on z10 EC model only)

Simplified and centralized policy-based networking

Expanded IBM Health Checker

Simplified RACF® Administration

Hardware Decimal Floating Point

Parallel Sysplex support for InfiniBand® Coupling Links

NTP Support for STP

HiperSockets Multiple Write Facility

OSA-Express3 support

Advancements in ease of use for both new and existing IT professionals coming to z/OS

Support for zIIP-Assisted IPSec, System Data Mover (SDM) offload to zIIP, and support for eligible portions of DB2 9 XML parsing workloads to be offloaded to zAAP processors

Expanded options for AT-TLS and System SSL network security

Improved creation and management of digital certificates with RACF, SAF, and z/OS PKI Services

Additional centralized ICSF encryption key management functions for applications

Improved availability with Parallel Sysplex and Coupling Facility improvements

Enhanced application development and integration with new System REXX facility, Metal C facility, and z/OS UNIX® System Services commands

Enhanced Workload Manager in managing discretionary work and zIIP and zAAP workloads

Commitment to system integrity

First issued in 1973, IBM’s MVS System Integrity Statement and subsequent statements for OS/390® and z/OS stand as a symbol of IBM’s confidence and commitment to the z/OS operating system. Today, IBM reaffirms its commitment to z/OS system integrity.

IBM’s commitment includes designs and development practices intended to prevent unauthorized application programs, subsystems, and users from bypassing z/OS security—that is, to prevent them from gaining access, circumventing, disabling, altering, or obtaining control of key z/OS system processes and resources unless allowed by the installation. Specifically, z/OS “System Integrity” is defined as the inability of any program not authorized by a mechanism under the installation’s control to circumvent or disable store or fetch protection, access a resource protected by the z/OS Security Server (RACF), or obtain control in an authorized state; that is, in supervisor state, with a protection key less than eight (8), or Authorized Program Facility (APF) authorized. In the event that an IBM System Integrity problem is reported, IBM will always take action to resolve it.

IBM’s long-term commitment to System Integrity is unique in the industry, and forms the basis of the z/OS industry leadership in system security. z/OS is designed to help you protect your system, data, transactions, and applications from accidental or malicious modification. This is one of the many reasons System z remains the industry’s premier data server for mission-critical workloads.


z/VM

z/VM V5.4 is designed to extend its System z virtualization technology leadership by exploiting more capabilities of System z servers including:

Greater flexibility, with support for the new z/VM-mode logical partitions, allowing all System z processor-types (CPs, IFLs, zIIPs, zAAPs, and ICFs) to be defined in the same z/VM LPAR for use by various guest operating systems

Capability to install Linux on System z as well as z/VM from the HMC on a System z10 that eliminates the need for any external network setup or a physical connection between an LPAR and the HMC

Enhanced physical connectivity by exploiting all OSA-Express3 ports, helping service the network and reducing the number of required resources.

Dynamic memory upgrade support that allows real memory to be added to a running z/VM system. With z/VM V5.4, memory can be added non-disruptively to individual guests that support the dynamic memory reconfiguration architecture. Systems can now be configured to reduce the need to re-IPL z/VM. Processors, channels, OSA adapters, and now memory can be dynamically added to both the z/VM system itself and to individual guests.

The operation and management of virtual machines has been enhanced with new systems management APIs, improvements to the algorithm for distributing a guest’s CPU share among virtual processors, and usability enhancements for managing a virtual network.

Security capabilities of z/VM V5.4 provide an upgraded LDAP server at the functional level of the z/OS V1.10 IBM Tivoli® Directory Server for z/OS and enhancements to the RACF Security Server to create LDAP change log entries in response to updates to RACF group and user profiles, including user passwords and password phrases. The z/VM SSL server now operates in a CMS environment, instead of requiring a Linux distribution, thus allowing encryption services to be deployed more quickly and helping to simplify installation, service, and release-to-release migration.

The z/VM hypervisor is designed to help clients extend the business value of mainframe technology across the enterprise by integrating applications and data while providing exceptional levels of availability, security, and operational ease. z/VM virtualization technology is designed to provide the capability for clients to run hundreds to thousands of Linux servers in a single mainframe, together with other System z operating systems such as z/OS, or as a large-scale Linux-only enterprise-server solution. z/VM V5.4 can also help to improve productivity by hosting non-Linux workloads such as z/OS, z/VSE, and z/TPF.

On August 5, 2008, IBM announced z/VM 5.4. Enhancements in z/VM 5.4 include:

Increased flexibility with support for new z/VM-mode logical partitions

Dynamic addition of memory to an active z/VM LPAR by exploiting System z dynamic storage-reconfiguration capabilities

Enhanced physical connectivity by exploiting all OSA-Express3 ports

Capability to install Linux on System z from the HMC without requiring an external network connection

Enhancements for scalability and constraint relief

Operation of the SSL server in a CMS environment

Systems management enhancements for Linux and other virtual images

For the most current information on z/VM, refer to the z/VM Web site at http://www.vm.ibm.com.


z/VSE

z/VSE 4.1, the latest advance in the ongoing evolution of VSE, is designed to help address needs of VSE clients with growing core VSE workloads and/or those who wish to exploit Linux on System z for new, Web-based business solutions and infrastructure simplification.

z/VSE 4.1 is designed to support:

z/Architecture mode only

64-bit real addressing and up to 8 GB of processor storage

System z encryption technology including CPACF, configurable Crypto Express2, and TS1120 encrypting tape

Midrange Workload License Charge (MWLC) pricing, including full-capacity and sub-capacity options.

IBM has previewed z/VSE 4.2. When available, z/VSE 4.2 is designed to help address the needs of VSE clients with growing core VSE workloads. z/VSE V4.2 is designed to support:

More than 255 VSE tasks to help clients grow their CICS workloads and to ease migration from CICS/VSE to CICS Transaction Server for VSE/ESA

Up to 32 GB of processor storage

Sub-Capacity Reporting Tool running “natively”

Encryption Facility for z/VSE as an optional priced feature

IBM System Storage TS3400 Tape Library (via the TS1120 Controller)

IBM System Storage TS7740 Virtualization Engine Release 1.3

z/VSE V4.2 plans to continue the focus on hybrid solutions exploiting z/VSE and Linux on System z, service-oriented architecture (SOA), and security. It is the preferred replacement for z/VSE V4.1, z/VSE V3, or VSE/ESA. It is designed to protect and leverage existing VSE information assets.

z/TPF

z/TPF is a 64-bit operating system that allows you to move legacy applications into an open development environment, leveraging large scale memory spaces for increased speed, diagnostics and functionality. The open development environment allows access to commodity skills and enhanced access to open code libraries, both of which can be used to lower development costs. Large memory spaces can be used to increase both system and application efficiency as I/Os or memory management can be eliminated.

z/TPF is designed to support:

64-bit mode

Linux development environment (GCC and HLASM for Linux)

32 processors/cluster

Up to 84* engines/processor

40,000 modules

Workload License Charge

Linux on System z

The System z10 EC supports the following Linux on System z distributions (most recent service levels):

Novell SUSE SLES 9

Novell SUSE SLES 10

Red Hat RHEL 4

Red Hat RHEL 5


z10 EC

Operating System                                              ESA/390 (31-bit)   z/Architecture (64-bit)
z/OS V1R8, 9 and 10                                           No                 Yes
z/OS V1R7(1)(2) with IBM Lifecycle Extension for z/OS V1.7    No                 Yes
Linux on System z(2), Red Hat RHEL 4, & Novell SUSE SLES 9    Yes                Yes
Linux on System z(2), Red Hat RHEL 5, & Novell SUSE SLES 10   No                 Yes
z/VM V5R2(3), 3(3) and 4                                      No*                Yes
z/VSE V3R1(2)(4)                                              Yes                No
z/VSE V4R1(2)(5) and 2(5)                                     No                 Yes
z/TPF V1R1                                                    No                 Yes
TPF V4R1 (ESA mode only)                                      Yes                No

1. z/OS V1.7 support on the z10 BC requires the Lifecycle Extension for z/OS V1.7, 5637-A01. The Lifecycle Extension for z/OS V1.7 plus the zIIP Web Deliverable is required to enable HiperDispatch on z10 (a zIIP is not required). z/OS V1.7 support was withdrawn September 30, 2008. The Lifecycle Extension for z/OS V1.7 (5637-A01) makes fee-based corrective service for z/OS V1.7 available through September 2009. With this Lifecycle Extension, z/OS V1.7 supports the z10 BC server. Certain functions and features of the z10 BC server require later releases of z/OS. For a complete list of software support, see the PSP buckets and the Software Requirements section of the System z10 BC announcement letter, dated October 21, 2008.

2. Compatibility support for listed releases. Compatibility support allows the OS to IPL and operate on the z10 BC.

3. Requires compatibility support, which allows z/VM to IPL and operate on the z10, providing IBM System z9® functionality for the base OS and guests. *z/VM supports 31-bit and 64-bit guests.

4. z/VSE V3 operates in 31-bit mode only. It does not implement z/Architecture, and specifically does not implement 64-bit mode capabilities. z/VSE is designed to exploit select features of IBM System z10, System z9, and IBM eServer zSeries® hardware.

5. z/VSE V4 is designed to exploit 64-bit real memory addressing, but will not support 64-bit virtual memory addressing.

Note: Refer to the z/OS, z/VM, and z/VSE subsets of the 2098DEVICE Preventive Service Planning (PSP) bucket prior to installing a z10 BC.

Every day the IT system needs to be available to users – customers that need access to the company Web site, line of business personnel that need access to the system, application development that is constantly keeping the environment current, and the IT staff that is operating and maintaining the environment. If applications are not consistently available, the business can suffer.

The z10 EC continues our commitment to deliver improvements in hardware Reliability, Availability and Serviceability (RAS) with every new System z server. They include microcode driver enhancements, dynamic segment sparing for memory as well as the fixed HSA. The z10 EC is a server that can help keep applications up and running in the event of planned or unplanned disruptions to the system.

IBM System z servers stand alone against competition and have stood the test of time with our business resiliency solutions. Our coupling solutions with Parallel Sysplex technology allow for greater scalability and availability. The InfiniBand Coupling Links on the z10 EC provide a high speed alternative to the 10 meter limitation of ICB-4, since they will be available in lengths up to 150 meters.

Over its predecessors, the z10 EC provides improvements in processor granularity offerings, more options for specialty engines, security enhancements, additional high availability characteristics, Concurrent Driver Upgrade (CDU) improvements, enhanced networking, and on demand offerings. The z10 EC provides our IBM customers an option for continued growth, continuity, and upgradeability.

The IBM System z10 EC builds upon the structure introduced on the IBM System z9 EC – scalability and z/Architecture. The System z10 EC expands upon a key attribute of the platform – availability – to help ensure a resilient infrastructure designed to satisfy the demands of your business. With the potential for increased performance and capacity, you have an opportunity to continue to consolidate diverse applications on a single platform. The z10 EC is designed to provide up to 1.7 times the total system capacity of the z9 EC, and has up to triple the available memory. The maximum number of Processor Units (PUs) has grown from 54 to 64, and memory has increased from 128 GB per book and 512 GB per system to 384 GB per book and 1.5 TB per system.

The z10 EC will continue to use the Cargo cage for its I/O, supporting up to 960 Channels on the Model E12 (64 I/O features) and up to 1,024 (84 I/O features) on the Models E26, E40, E56 and E64.

HiperDispatch helps provide increased scalability and performance of higher n-way and multi-book z10 EC systems by improving the way workload is dispatched across the server. HiperDispatch accomplishes this by recognizing the physical processor where the work was started and then dispatching subsequent work to the same physical processor. This intelligent dispatching helps reduce the movement of cache and data and is designed to improve CPU time and performance. HiperDispatch is available only with new z10 EC PR/SM and z/OS functions.
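The Java sketch below is a conceptual illustration of the affinity idea behind HiperDispatch, not the actual PR/SM or z/OS dispatcher: work that has already run on a physical processor is preferentially dispatched back to that processor so the cache contents it warmed can be reused. All names and the round-robin fallback are assumptions made for the example.

    import java.util.HashMap;
    import java.util.Map;

    // Conceptual cache-affinity dispatcher (illustration only).
    public class AffinityDispatcherSketch {
        private final int physicalProcessors;
        private final Map<String, Integer> lastProcessor = new HashMap<>();
        private int nextNewWork = 0;

        public AffinityDispatcherSketch(int physicalProcessors) {
            this.physicalProcessors = physicalProcessors;
        }

        // Returns the physical processor this unit of work should run on.
        public int dispatch(String workUnitId) {
            Integer previous = lastProcessor.get(workUnitId);
            int target;
            if (previous != null) {
                // Re-dispatch where the work started: its cache lines are
                // likely still resident, so less data has to move.
                target = previous;
            } else {
                // First dispatch: spread new work across processors.
                target = nextNewWork;
                nextNewWork = (nextNewWork + 1) % physicalProcessors;
            }
            lastProcessor.put(workUnitId, target);
            return target;
        }
    }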

Processor Units (cores) defined as Internal Coupling Facilities (ICFs), Integrated Facilities for Linux (IFLs), System z10 Application Assist Processors (zAAPs) and System z10 Integrated Information Processors (zIIPs) are no longer grouped together in one pool as on the z990, but are grouped together in their own pool, where they can be managed separately. The separation significantly simplifies capacity planning and management for LPAR and can have an effect on weight management since CP weights and zAAP and zIIP weights can now be managed separately. Capacity BackUp (CBU) features are available for IFLs, ICFs, zAAPs and zIIPs.

For LAN connectivity, the z10 EC provides an OSA-Express3 2-port 10 Gigabit Ethernet (GbE) Long Reach feature along with the OSA-Express3 Gigabit Ethernet SX and LX features with four ports per feature. The z10 EC continues to support the OSA-Express2 1000BASE-T and GbE features, and supports IP version 6 (IPv6) on HiperSockets. OSA-Express2 OSN (OSA for NCP) is also available on the System z10 EC to support the Channel Data Link Control (CDLC) protocol, providing direct access from the host operating system images to the Communication Controller for Linux (CCL) on the z10 EC, z10 BC, z9 EC and z9 BC using OSA-Express3 or OSA-Express2, to help eliminate the requirement for external hardware for communications.

Additional channel and networking improvements include support for Layer 2 and Layer 3 traffic, an FCP management facility for z/VM and Linux on System z, FCP security improvements, and Linux support for HiperSockets IPv6. STP enhancements include additional support for NTP clients and STP over InfiniBand links.

Like the System z9 EC, the z10 EC offers a configurable Crypto Express2 feature, with PCI-X adapters that can be individually configured as a secure coprocessor or an accelerator for SSL, the TKE workstation with optional Smart Card Reader, and provides the following CP Assist for Cryptographic Function (CPACF) algorithms:

DES, TDES, AES-128, AES-192, AES-256

SHA-1, SHA-224, SHA-256, SHA-384, SHA-512

Pseudo Random Number Generation (PRNG)
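As a rough illustration, the Java sketch below exercises two of the algorithms listed above (AES-128 and SHA-256) through the standard JCE interfaces. The API calls are ordinary Java; whether they are actually routed to CPACF on a given System z configuration depends on the installed JVM and security providers, so the hardware acceleration itself is an assumption here, not something this code controls.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class ClearKeyCryptoSketch {
        public static void main(String[] args) throws Exception {
            byte[] data = "sensitive payload".getBytes(StandardCharsets.UTF_8);

            // SHA-256 digest (one of the CPACF-listed hash algorithms).
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);

            // Clear-key AES-128 encryption through the JCE; a provider that
            // exploits CPACF can perform this on the processor.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey key = keyGen.generateKey();

            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ciphertext = cipher.doFinal(data);

            System.out.println("digest bytes: " + digest.length
                    + ", ciphertext bytes: " + ciphertext.length);
        }
    }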

The z10 EC is designed to deliver the industry leading Reliability, Availability and Serviceability (RAS) customers expect from System z servers. RAS is designed to reduce all sources of outages: unscheduled, scheduled and planned. Planned outages are further reduced by decreasing preplanning requirements.


z10 EC preplanning improvements are designed to avoid planned outages and include:

Flexible Customer Initiated Upgrades

Enhanced Driver Maintenance

Multiple “from” sync point support

Reduce Pre-planning to avoid Power-On-Reset

16 GB for HSA

Dynamic I/O enabled by default

Add Logical Channel Subsystems (LCSS)

Change LCSS Subchannel Sets

Add/delete Logical partitions

Designed to eliminate a logical partition deactivate/activate/IPL

Dynamic Change to Logical Processor Definition – z/VM 5.3

Dynamic Change to Logical Cryptographic Coprocessor Definition – z/OS ICSF

Additionally, several service enhancements have also been designed to avoid scheduled outages and include concurrent firmware fixes, concurrent driver upgrades, concurrent parts replacement, and concurrent hardware upgrades. Exclusive to the z10 EC is the ability to hot swap ICB-4 and InfiniBand hub cards.

Enterprises with IBM System z9 EC and IBM z990 may upgrade to any z10 Enterprise Class model. Model upgrades within the z10 EC are concurrent with the exception of the E64, which is disruptive. If you desire a consolidation platform for your mainframe and Linux capable applications, you can add capacity and even expand your current application workloads in a cost-effective manner. If your traditional and new applications are growing, you may find the z10 EC a good fit with its base qualities of service and its specialty processors designed for assisting with new workloads. Value is leveraged with improved hardware price/performance and System z10 EC software pricing strategies.

The z10 EC processor introduces IBM System z10 Enterprise Class with Quad Core technology, advanced pipeline design and enhanced performance on CPU intensive workloads. The z10 EC is specifically designed and optimized for full z/Architecture compatibility. New features enhance enterprise data serving performance, industry leading virtualization capabilities, and energy efficiency at system and data center levels. The z10 EC is designed to further extend and integrate key platform characteristics such as dynamic flexible partitioning and resource management in mixed and unpredictable workload environments, providing scalability, high availability and Qualities of Service (QoS) to emerging applications such as WebSphere, Java and Linux.

With the logical partition (LPAR) group capacity limit on z10 EC, z10 BC, z9 EC and z9 BC, you can now specify LPAR group capacity limits, allowing you to define each LPAR with its own capacity and one or more groups of LPARs on a server. This is designed to allow z/OS to manage the groups in such a way that the sum of the LPARs’ CPU utilization within a group will not exceed the group’s defined capacity. Each LPAR in a group can still optionally continue to define an individual LPAR capacity limit.
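To make the group-capacity arithmetic concrete, here is a small Java sketch with assumed numbers (the MSU figures are illustrative, not taken from this document). The proportional scale-down shown is a simplification of the goal stated above, that the group’s combined usage should not exceed the group’s defined capacity; z/OS actually manages the group using LPAR weights and utilization history.

    public class GroupCapacitySketch {
        public static void main(String[] args) {
            // Illustrative values only: one group of three LPARs.
            double groupLimitMsu = 300.0;
            double[] demandMsu = {180.0, 120.0, 90.0};   // 390 MSU requested

            double totalDemand = 0.0;
            for (double d : demandMsu) {
                totalDemand += d;
            }

            // If combined demand exceeds the group limit, scale each LPAR's
            // share down so the group total stays at the limit.
            for (int i = 0; i < demandMsu.length; i++) {
                double granted = (totalDemand <= groupLimitMsu)
                        ? demandMsu[i]
                        : demandMsu[i] * (groupLimitMsu / totalDemand);
                System.out.printf("LPAR %d granted %.1f MSU%n", i + 1, granted);
            }
        }
    }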

The z10 EC has five models with a total of 100 capacity settings available as new build systems and as upgrades from the z9 EC and z990.

The five z10 EC models are designed with a multi-book system structure that provides up to 64 Processor Units (PUs) that can be characterized as either Central Processors (CPs), IFLs, ICFs, zAAPs or zIIPs.

Some of the significant enhancements in the z10 EC that help bring improved performance, availability and function to the platform have been identified. The following sections highlight the functions and features of the z10 EC.


z10 EC Design and Technology

The System z10 EC is designed to provide balanced system performance. From processor storage to the system’s I/O and network channels, end-to-end bandwidth is provided and designed to deliver data where and when it is needed.

The processor subsystem is comprised of one to four books connected via a point-to-point SMP network. The change to point-to-point connectivity eliminates the need for the jumper book, as had been used on the System z9 and z990 systems. The z10 EC design provides growth paths up to a 64 engine system where each of the 64 PUs has full access to all system resources, specifically memory and I/O.

Each book is comprised of a Multi-Chip Module (MCM), memory cards and I/O fanout cards. The MCMs, which measure approximately 96 x 96 millimeters, contain the Processor Unit (PU) chips and Storage Controller (SC) chips; the separate “SCD” and “SCC” chips of the z990 and z9 have been replaced by a single “SC” chip, which includes both the L2 cache and the SMP fabric (“storage controller”) functions. There are two SC chips on each MCM, each of which is connected to all five CP chips on that MCM. The MCM contains 103 glass ceramic layers to provide interconnection between the chips and the off-module environment. Four models (E12, E26, E40 and E56) have 17 PUs per book, and the high capacity z10 EC Model E64 has one 17 PU book and three 20 PU books. Each PU measures 21.973 mm x 21.1658 mm and has an L1 cache divided into a 64 KB cache for instructions and a 128 KB cache for data. Each PU also has an L1.5 cache, which is 3 MB in size. Each L1 cache has a Translation Look-aside Buffer (TLB) of 512 entries associated with it. The PU, which uses a high-frequency z/Architecture microprocessor core, is built on CMOS 11S chip technology and has a cycle time of approximately 0.23 nanoseconds.

The design of the MCM technology on the z10 EC provides the flexibility to configure the PUs for different uses; there are two spares and up to 11 System Assist Processors (SAPs) standard per system. The remaining inactive PUs on each installed MCM are available to be characterized as CPs, ICF processors for Coupling Facility applications, IFLs for Linux applications and for z/VM hosting Linux as a guest, System z10 Application Assist Processors (zAAPs), System z10 Integrated Information Processors (zIIPs), or optional SAPs, providing tremendous flexibility in establishing the best system for running applications. Each model of the z10 EC must always be ordered with at least one CP, IFL or ICF.

Each book can support from the 16 GB minimum memory up to 384 GB, with up to 1.5 TB per system. 16 GB of the total memory is delivered and reserved for the fixed Hardware System Area (HSA). There are up to 48 IFB links per system at 6 GBps each.

The z10 EC supports a combination of Memory Bus Adapter (MBA) and Host Channel Adapter (HCA) fanout cards. New MBA fanout cards are used exclusively for ICB-4. New ICB-4 cables are needed for the z10 EC and are only available on models E12, E26, E40 and E56. The E64 model may not have ICBs. The InfiniBand Multiplexer (IFB-MP) card replaces the Self-Timed Interconnect Multiplexer (STI-MP) card. There are two types of HCA fanout cards: the HCA2-C, which is copper and is always used to connect to I/O (IFB-MP card), and the HCA2-O, which is optical and used for customer InfiniBand coupling.

Data transfers are direct between books via the level 2 cache chip in each MCM. Level 2 cache is shared by all PU chips on the MCM. PR/SM provides the ability to configure and operate as many as 60 Logical Partitions which may be assigned processors, memory and I/O resources from any of the available books.


z10 EC Model

The z10 EC has been designed to offer high performance and an efficient I/O structure. All z10 EC models ship with two frames: an A-Frame and a Z-Frame, which together support the installation of up to three I/O cages. The z10 EC will continue to use the Cargo cage for its I/O, supporting up to 960 ESCON® and 256 FICON channels on the Model E12 (64 I/O features) and up to 1,024 ESCON and 336 FICON channels (84 I/O features) on the Models E26, E40, E56 and E64.

To increase the I/O device addressing capability, the I/O subsystem provides support for multiple subchannel sets (MSS), which are designed to allow improved device connectivity for Parallel Access Volumes (PAVs). To support the highly scalable multi-book system design, the z10 EC I/O subsystem uses the Logical Channel Subsystem (LCSS), which provides the capability to install up to 1024 CHPIDs across three I/O cages (256 per operating system image). The Parallel Sysplex Coupling Link architecture and technology continues to support high speed links providing efficient transmission between the Coupling Facility and z/OS systems. HiperSockets provides high-speed capability to communicate among virtual servers and logical partitions. HiperSockets is now improved with IP version 6 (IPv6) support; this is based on high-speed TCP/IP memory speed transfers and provides value in allowing applications running in one partition to communicate with applications running in another without dependency on an external network. Industry standards and openness are design objectives for I/O in the System z10 EC.

The z10 EC has five models offering between 1 and 64 processor units (PUs), which can be configured to provide a highly scalable solution designed to meet the needs of both high transaction processing applications and On Demand Business. Four models (E12, E26, E40 and E56) have 17 PUs per book, and the high capacity z10 EC Model E64 has one 17 PU book and three 20 PU books. The PUs can be characterized as either CPs, IFLs, ICFs, zAAPs or zIIPs. An easy-to-enable ability to “turn off” CPs or IFLs is available on the z10 EC, allowing you to purchase capacity for future use with minimal or no impact on software billing. An MES feature will enable the “turned off” CPs or IFLs for use where you require the increased capacity. There are a wide range of upgrade options available in getting to and within the z10 EC.

The z10 EC hardware model numbers (E12, E26, E40, E56 and E64) on their own do not indicate the number of PUs which are being used as CPs. For software billing purposes only, there will be a Capacity Identifier associated with the number of PUs that are characterized as CPs. This number will be reported by the Store System Information (STSI) instruction for software billing purposes only. There is no affinity between the hardware model and the number of CPs. For example, it is possible to have a Model E26 which has 13 PUs characterized as CPs, so for software billing purposes, the STSI instruction would report 713.

z10 EC model upgrades

There are full upgrades within the z10 EC models and upgrades from any z9 EC or z990 to any z10 EC. Upgrade of z10 EC Models E12, E26, E40 and E56 to the E64 is disruptive. When upgrading to the z10 EC Model E64, unlike the z9 EC, the first book is retained. There are no direct upgrades from the z9 BC or IBM eServer zSeries 900 (z900), or previous generation IBM eServer zSeries.

IBM is increasing the number of sub-capacity engines on the z10 EC. A total of 36 sub-capacity settings are available on any hardware model for 1-12 CPs. Models with 13 CPs or greater must be full capacity.

For the z10 EC models with 1-12 CPs, there are four capacity settings per engine for central processors (CPs). The entry point (Model 401) is approximately 23.69% of a full speed CP (Model 701). All specialty engines continue to run at full speed. Sub-capacity processors have availability of z10 EC features and functions, and any-to-any upgradeability is available within the sub-capacity matrix. All CPs must be the same capacity setting size within one z10 EC.

z10 EC Model Capacity Identifiers:

700, 401 to 412, 501 to 512, 601 to 612 and 701 to 764

Capacity setting 700 does not have any CP engines

Nxx, where N = the capacity setting of the engine and xx = the number of PUs characterized as CPs in the CEC

Once xx exceeds 12, then all CP engines are full capacity
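The Java sketch below simply encodes the Nxx naming rule described above (capacity setting digit followed by the two-digit count of characterized CPs). It is an illustration of the notation only, not an IBM interface, and the validation shown reflects the rules stated in the text.

    public class CapacityIdentifierSketch {
        // Builds identifiers such as "713" (full capacity, 13 CPs)
        // or "401" (sub-capacity setting 4, 1 CP).
        public static String identifier(int capacitySetting, int cpCount) {
            if (cpCount > 12 && capacitySetting != 7) {
                throw new IllegalArgumentException(
                        "More than 12 CPs must be full capacity (setting 7)");
            }
            return String.format("%d%02d", capacitySetting, cpCount);
        }

        public static void main(String[] args) {
            System.out.println(identifier(7, 13));   // 713 - the Model E26 example
            System.out.println(identifier(4, 1));    // 401 - sub-capacity entry point
        }
    }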

z10 EC Base and Sub-capacity Offerings

The z10 EC has 36 additional capacity settings at the low end

Available on ANY H/W Model for 1 to 12 CPs. Models with 13 CPs or greater have to be full capacity

All CPs must be the same capacity within the z10 EC

All specialty engines run at full capacity. The one for one entitlement to purchase one zAAP or one zIIP for each CP purchased is the same for CPs of any capacity.

Only 12 CPs can have granular capacity; other PUs must be CBU or characterized as specialty engines


z10 EC Performance

The performance design of the z/Architecture can enable the server to support a new standard of performance for applications through expanding upon a balanced system approach. As CMOS technology has been enhanced to support not only additional processing power, but also more PUs, the entire server is modified to support the increase in processing power. The I/O subsystem supports a greater amount of bandwidth than previous generations through internal changes, providing for larger and faster volume of data movement into and out of the server. Support of larger amounts of data within the server required improved management of storage configurations, made available through integration of the operating system and hardware support of 64-bit addressing. The combined balanced system design allows for increases in performance across a broad spectrum of work.

Large System Performance Reference

IBM’s Large Systems Performance Reference (LSPR) method is designed to provide comprehensive z/Architecture processor capacity ratios for different configurations of Central Processors (CPs) across a wide variety of system control programs and workload environments. For the z10 EC, the z/Architecture processor capacity identifier is defined with a (7XX) notation, where XX is the number of installed CPs.

Based on using an LSPR mixed workload, the performance of the z10 EC (2097) 701 is expected to be up to 1.62 times that of the z9 EC (2094) 701.

The LSPR contains the Internal Throughput Rate Ratios (ITRRs) for the z10 EC and the previous-generation zSeries processor families based upon measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user may experience will vary depending upon considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, and the workload processed.

LSPR workloads have been updated to reflect more closely your current and growth workloads. The classification Java Batch (CB-J) has been replaced with a new classification for Java Batch called ODE-B. The remainder of the LSPR workloads are the same as those used for the z9 EC LSPR. The typical LPAR configuration table is used to establish single-number-metrics such as MIPS and MSUs. The z10 EC LSPR will rate all z/Architecture processors running in LPAR mode, 64-bit mode, and assumes that HiperDispatch is enabled.

For more detailed performance information, consult the Large Systems Performance Reference (LSPR) available at: http://www.ibm.com/servers/eserver/zseries/lspr/.

CPU Measurement Facility

The CPU Measurement Facility is a hardware facility which consists of counters and samples. The facility provides a means to collect run-time data for software performance tuning. The detailed architecture information for this facility can be found in the System z10 Library in Resource Link.


z10 EC I/O Subsystem

The z10 EC contains an I/O subsystem infrastructure which uses an I/O cage that provides 28 I/O slots and the ability to have one to three I/O cages delivering a total of 84 I/O slots. ESCON, FICON Express4, FICON Express2, FICON Express, OSA-Express3, OSA-Express2, and Crypto Express2 features plug into the z10 EC I/O cage along with any ISC-3s and InfiniBand Multiplexer (IFB-MP) cards. All I/O features and their support cards can be hot-plugged in the I/O cage. Installation of an I/O cage remains a disruptive MES, so the Plan Ahead feature remains an important consideration when ordering a z10 EC system. Each model ships with one I/O cage as standard in the A-Frame (the A-Frame also contains the Central Electronic Complex [CEC] cage where the books reside) and any additional I/O cages are installed in the Z-Frame. Each IFB-MP has a bandwidth of up to 6 GigaBytes per second (GB/sec) for I/O domains and MBA fanout cards provide 2.0 GB/sec for ICB-4s.

The z10 EC continues to support all of the features announced with the System z9 EC such as:

Logical Channel Subsystems (LCSSs) and support for up to 60 logical partitions

Increased number of Subchannels (63.75k)

Multiple Subchannel Sets (MSS)

Redundant I/O Interconnect

Physical Channel IDs (PCHIDs)

System Initiated CHPID Reconfiguration

Logical Channel SubSystem (LCSS) Spanning

System I/O Configuration Analyzer

Today the information needed to manage a system’s I/O configuration has to be obtained from many separate applications. The System I/O Configuration Analyzer (SIOA) tool is a SE/HMC-based tool that allows the system hardware administrator access to the information from these many sources in one place. This makes it much easier to manage I/O configurations, particularly across multiple CPCs. The SIOA is a “view-only” tool. It does not offer any options other than viewing options.

First the SIOA tool analyzes the current active IOCDS on the SE. It extracts information about the defined channels, partitions, link addresses and control units. Next the SIOA tool asks the channels for their node ID information. The FICON channels support remote node ID information, so that is also collected from them. The data is then formatted and displayed on five screens:

1) PCHID Control Unit Screen – Shows PCHIDs, CSS.CHPIDs and their control units

2) PCHID Partition Screen – Shows PCHIDs, CSS.CHPIDs and what partitions they are in

3) Control Unit Screen – Shows the control units, their PCHIDs and their link addresses in each of the CSSs

4) Link Load Screen – Shows the Link address and the PCHIDs that use it

5) Node ID Screen – Shows the Node ID data under the PCHIDs

The SIOA tool allows the user to sort on various columns and export the data to a USB flash drive for later viewing.


z10 EC Channels and I/O Connectivity

ESCON Channels

The z10 EC supports up to 1,024 ESCON channels. The high density ESCON feature has 16 ports, 15 of which can be activated for customer use. One port is always reserved as a spare which is activated in the event of a failure of one of the other ports. For high availability the initial order of ESCON features will deliver two 16-port ESCON features and the active ports will be distributed across those features.

Fibre Channel Connectivity

The on demand operating environment requires fast data access, continuous data availability, and improved flexibility, all with a lower cost of ownership. The four port FICON Express4 and FICON Express2 features available on the z9 EC continue to be supported on the System z10 EC.

Choose the FICON Express4 features that best meet your business requirements. To meet the demands of your Storage Area Network (SAN), provide granularity, facilitate redundant paths, and satisfy your infrastructure requirements, there are three features from which to choose.

Feature                     FC #    Infrastructure       Ports per Feature
FICON Express4 10KM LX      3321    Single mode fiber    4
FICON Express4 4KM LX       3324    Single mode fiber    4
FICON Express4 SX           3322    Multimode fiber      4

Choose the features that best meet your granularity, fiber optic cabling, and unrepeated distance requirements.

FICON Express4 Channels

The z10 EC supports up to 336 FICON Express4 channels, each one operating at 1, 2 or 4 Gb/sec auto-negotiated. The FICON Express4 features are available in long wavelength (LX) and short wavelength (SX). For customers exploiting LX, there are two options available for unrepeated distances of up to 4 kilometers (2.5 miles) or up to 10 kilometers (6.2 miles). Both LX features use 9 micron single mode fiber optic cables. The SX feature uses 50 or 62.5 micron multimode fiber optic cables. Each FICON Express4 feature has four independent channels (ports) and can be configured to carry native FICON traffic or Fibre Channel (SCSI) traffic. LX and SX cannot be intermixed on a single feature. The receiving devices must correspond to the appropriate LX or SX feature. The maximum number of FICON Express4 features is 84 using three I/O cages.

FICON Express2 Channels

The z10 EC supports carrying forward up to 336 FICON Express2 channels, each one operating at 1 or 2 Gb/sec auto-negotiated. The FICON Express2 features are available in long wavelength (LX) using 9 micron single mode fiber optic cables and short wavelength (SX) using 50 and 62.5 micron multimode fiber optic cables. Each FICON Express2 feature has four independent channels (ports) and each can be configured to carry native FICON traffic or Fibre Channel (SCSI) traffic. LX and SX cannot be intermixed on a single feature. The maximum number of FICON Express2 features is 84, using three I/O cages.

FICON Express Channels

The z10 EC also supports carrying forward FICON Express LX and SX channels from the z9 EC and z990 (up to 120 channels), each channel operating at 1 or 2 Gb/sec auto-negotiated. Each FICON Express feature has two independent channels (ports).


The System z10 EC Model E12 is limited to 64 features – any combination of FICON Express4, FICON Express2 and FICON Express LX and SX features.

The FICON Express4, FICON Express2 and FICON Express features conform to the Fibre Connection (FICON) architecture and the Fibre Channel (FC) architecture, providing connectivity between any combination of servers, directors, switches, and devices in a Storage Area Network (SAN). Each of the four independent channels (FICON Express only supports two channels per feature) is capable of 1 Gigabit per second (Gb/sec), 2 Gb/sec, or 4 Gb/sec (only FICON Express4 supports 4 Gb/sec) depending upon the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications. Not all switches and devices support 2 or 4 Gb/sec link data rates.

FICON Express4 and FICON Express2 Performance

Your enterprise may benefit from FICON Express4 and FICON Express2 with:

Increased data transfer rates (bandwidth)

Improved performance

Increased number of start I/Os

Reduced backup windows

Channel aggregation to help reduce infrastructure costs

For more information about FICON, visit the IBM Redbooks® Web site at http://www.redbooks.ibm.com/ and search for SG24-5444. Additional FICON I/O connectivity information is available at: www-03.ibm.com/systems/z/connectivity/.

Concurrent Update

The FICON Express4 SX and LX features may be added to an existing z10 EC concurrently. This concurrent update capability allows you to continue to run workloads through other channels while the new FICON Express4 features are being added. This applies to CHPID types FC and FCP.

Continued Support of Spanned Channels and Logical Partitions

The FICON Express4 and FICON Express2 FICON and FCP (CHPID types FC and FCP) channel types can be defined as spanned channels and can be shared among logical partitions within and across LCSSs.

Modes of Operation

There are two modes of operation supported by FICON Express4 and FICON Express2 SX and LX. These modes are configured on a channel-by-channel basis – each of the four channels can be configured in either of two supported modes.

Fibre Channel (CHPID type FC), which is native FICON or FICON Channel-to-Channel (server-to-server)

Fibre Channel Protocol (CHPID type FCP), which supports attachment to SCSI devices via Fibre Channel switches or directors in z/VM, z/VSE, and Linux on System z10 environments

Native FICON Channels

Native FICON channels and devices can help to reduce bandwidth constraints and channel contention to enable easier server consolidation, new application growth, large business intelligence queries and exploitation of On Demand Business.

The FICON Express4, FICON Express2 and FICON Express channels support native FICON and FICON Channel-to-Channel (CTC) traffic for attachment to servers, disks, tapes, and printers that comply with the FICON architecture. Native FICON is supported by all of the z10 EC operating systems. Native FICON and FICON CTC are defined as CHPID type FC.

Because the FICON CTC function is included as part of the native FICON (FC) mode of operation, FICON CTC is not limited to intersystem connectivity (as is the case with ESCON), but will support multiple device definitions.


FICON Support for Cascaded Directors

Native FICON (FC) channels support cascaded directors. This support is for a single hop configuration only. Two-director cascading requires a single vendor high integrity fabric. Directors must be from the same vendor since cascaded architecture implementations can be unique. This type of cascaded support is important for disaster recovery and business continuity solutions because it can help provide high availability, extended distance connectivity, and (particularly with the implementation of 2 Gb/sec Inter Switch Links) has the potential for fiber infrastructure cost savings by reducing the number of channels for interconnecting the two sites.

FICON cascaded directors have the added value of high integrity connectivity. Integrity features introduced within the FICON Express channel and the FICON cascaded switch fabric to aid in the detection and reporting of any miscabling actions occurring within the fabric can prevent data from being delivered to the wrong end point.

FCP Channels

The z10 EC supports FCP channels, switches and FCP/SCSI disks with full fabric connectivity under Linux on System z, under z/VM 5.2 (or later) for Linux as a guest, and under z/VSE 3.1 for system usage including install and IPL. Support for FCP devices means that z10 EC servers are capable of attaching to select FCP-attached SCSI devices and may access these devices from Linux on z10 EC and z/VSE. This expanded attachability means that enterprises have more choices for new storage solutions, or may have the ability to use existing storage devices, thus leveraging existing investments and lowering total cost of ownership for their Linux implementations.

The same FICON features used for native FICON channels can be defined to be used for Fibre Channel Protocol (FCP) channels. FCP channels are defined as CHPID type FCP. The 4 Gb/sec capability on the FICON Express4 channel means that 4 Gb/sec link data rates are available for FCP channels as well.

FCP – increased performance for small block sizes

The Fibre Channel Protocol (FCP) Licensed Internal Code has been modified to help provide increased I/O operations per second for small block sizes. With FICON Express4, there may be up to 57,000 I/O operations per second (all reads, all writes, or a mix of reads and writes), an 80% increase compared to System z9. These results are achieved in a laboratory environment using one channel configured as CHPID type FCP with no other processing occurring and do not represent actual field measurements. A significant increase in I/O operations per second for small block sizes can also be expected with FICON Express2.
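For orientation only, and as a back-calculation rather than a published figure: an 80% increase to roughly 57,000 I/O operations per second implies a comparable System z9 laboratory result of about 57,000 / 1.8 ≈ 31,700 I/O operations per second under the same conditions.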

This FCP performance improvement is transparent to operating systems that support FCP, and applies to all the FICON Express4 and FICON Express2 features when configured as CHPID type FCP, communicating with SCSI devices.

