IBM z13s Technical Guide

Front cover

IBM z13s Technical Guide
Octavian Lascu, Barbara Sannerud, Cecilia A. De Leon, Edzard Hoogerbrug, Ewerson Palacio, Franco Pinto, Jin J. Yang, John P. Troy, Martin Soellig
In partnership with
IBM Academy of Technology
Redbooks
International Technical Support Organization
IBM z13s Technical Guide
June 2016
SG24-8294-00
Note: Before using this information and the product it supports, read the information in “Notices” on page xix.
First Edition (June 2016)
This edition applies to IBM z13s servers.
© Copyright International Business Machines Corporation 2016. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xv
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xx
IBM Redbooks promotions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Authors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvi
Chapter 1. Introducing IBM z13s servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Overview of IBM z13s servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 z13s servers highlights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Processor and memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 Capacity and performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.3 I/O subsystem and I/O features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.4 Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.5 Reliability, availability, and serviceability design. . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3 z13s technical overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.1 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.2 Model upgrade paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3.3 Frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3.4 CPC drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.5 I/O connectivity: PCIe and InfiniBand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3.6 I/O subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3.7 Parallel Sysplex Coupling and Server Time Protocol connectivity . . . . . . . . . . . . 20
1.3.8 Special-purpose features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.3.9 Reliability, availability, and serviceability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.4 Hardware Management Consoles and Support Elements . . . . . . . . . . . . . . . . . . . . . . 27
1.5 IBM z BladeCenter Extension (zBX) Model 004 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.5.1 Blades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.5.2 IBM WebSphere DataPower Integration Appliance XI50 for zEnterprise . . . . . . . 28
1.6 IBM z Unified Resource Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.7 IBM Dynamic Partition Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.8 Operating systems and software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.8.1 Supported operating systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.8.2 IBM compilers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Chapter 2. Central processor complex hardware components . . . . . . . . . . . . . . . . . . 35
2.1 Frame and drawers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.1.1 The z13s frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.1.2 PCIe I/O drawer and I/O drawer features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.2 Processor drawer concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.2.1 CPC drawer interconnect topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2.2 Oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2.3 System control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.2.4 CPC drawer power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.3 Single chip modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.3.1 Processor units and system control chips. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.3.2 Processor unit (PU) chip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.3.3 Processor unit (core). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.3.4 PU characterization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.3.5 System control chip. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.3.6 Cache level structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.4 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.4.1 Memory subsystem topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.4.2 Redundant array of independent memory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.4.3 Memory configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.4.4 Memory upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.4.5 Preplanned memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.5 Reliability, availability, and serviceability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2.5.1 RAS in the CPC memory subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2.5.2 General z13s RAS features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.6 Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.6.1 Redundant I/O interconnect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.7 Model configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.7.1 Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.7.2 Concurrent PU conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.7.3 Model capacity identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.7.4 Model capacity identifier and MSU values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.7.5 Capacity BackUp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.7.6 On/Off Capacity on Demand and CPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.8 Power and cooling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.8.1 Power considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.8.2 High-voltage DC power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.8.3 Internal Battery Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.8.4 Power capping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.8.5 Power Estimation tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.8.6 Cooling requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.9 Summary of z13s structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Chapter 3. Central processor complex system design . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.2 Design highlights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.3 CPC drawer design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.3.1 Cache levels and memory structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.3.2 CPC drawer interconnect topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.4 Processor unit design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.4.1 Simultaneous multithreading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.4.2 Single-instruction multiple-data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.4.3 Out-of-order execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.4.4 Superscalar processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.4.5 Compression and cryptography accelerators on a chip . . . . . . . . . . . . . . . . . . . . 95
3.4.6 Decimal floating point accelerator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.4.7 IEEE floating point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.4.8 Processor error detection and recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.4.9 Branch prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.4.10 Wild branch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.4.11 Translation lookaside buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.4.12 Instruction fetching, decoding, and grouping . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.4.13 Extended Translation Facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.4.14 Instruction set extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.4.15 Transactional Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.4.16 Runtime Instrumentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.5 Processor unit functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.5.2 Central processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.5.3 Integrated Facility for Linux. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.5.4 Internal Coupling Facility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.5.5 IBM z Integrated Information Processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.5.6 System assist processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3.5.7 Reserved processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
3.5.8 Integrated firmware processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
3.5.9 Processor unit assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
3.5.10 Sparing rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
3.5.11 Increased flexibility with z/VM mode partitions . . . . . . . . . . . . . . . . . . . . . . . . . 112
3.6 Memory design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
3.6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
3.6.2 Main storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
3.6.3 Expanded storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
3.6.4 Hardware system area (HSA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3.7 Logical partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3.7.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
3.7.2 Storage operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
3.7.3 Reserved storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
3.7.4 Logical partition storage granularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.7.5 LPAR dynamic storage reconfiguration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
3.8 Intelligent Resource Director. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
3.9 Clustering technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
3.9.1 Coupling Facility Control Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
3.9.2 Coupling Thin Interrupts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
3.9.3 Dynamic CF dispatching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
3.9.4 CFCC and Flash Express use. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Chapter 4. Central processor complex I/O system structure . . . . . . . . . . . . . . . . . . . 137
4.1 Introduction to the InfiniBand and PCIe for I/O infrastructure . . . . . . . . . . . . . . . . . . . 138
4.1.1 InfiniBand I/O infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.1.2 PCIe I/O infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.1.3 InfiniBand specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.1.4 PCIe Generation 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.2 I/O system overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
4.2.1 Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
4.2.2 Summary of supported I/O features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.3 I/O drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.4 PCIe I/O drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
4.5 PCIe I/O drawer and I/O drawer offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.6 Fanouts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.6.1 PCIe Generation 3 fanout (FC 0173) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.6.2 HCA2-C fanout (FC 0162). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.6.3 Integrated Coupling Adapter (FC 0172) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.6.4 HCA3-O (12x IFB) fanout (FC 0171). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.6.5 HCA3-O LR (1x IFB) fanout (FC 0170). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
4.6.6 Fanout considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4.7 I/O features (cards) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4.7.1 I/O feature card ordering information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4.7.2 Physical channel report. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.8 Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.8.1 I/O feature support and configuration rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.8.2 FICON channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
4.8.3 OSA-Express5S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4.8.4 OSA-Express4S features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.8.5 OSA-Express for ensemble connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
4.8.6 HiperSockets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
4.9 Parallel Sysplex connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
4.9.1 Coupling links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
4.9.2 Migration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
4.9.3 Pulse per second input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
4.10 Cryptographic functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
4.10.1 CPACF functions (FC 3863) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
4.10.2 Crypto Express5S feature (FC 0890) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
4.11 Integrated firmware processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
4.12 Flash Express . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
4.12.1 IBM Flash Express read/write cache. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.13 10GbE RoCE Express . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.14 zEDC Express. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Chapter 5. Central processor complex channel subsystem. . . . . . . . . . . . . . . . . . . . 187
5.1 Channel subsystem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.1.1 Multiple logical channel subsystems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
5.1.2 Multiple subchannel sets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.1.3 Channel path spanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.2 I/O configuration management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.3 Channel subsystem summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Chapter 6. Cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
6.1 Cryptography in IBM z13 and z13s servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
6.2 Some fundamentals on cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
6.2.1 Modern cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
6.2.2 Kerckhoffs’ principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
6.2.3 Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
6.2.4 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
6.3 Cryptography on IBM z13s servers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
6.4 CP Assist for Cryptographic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
6.4.1 Cryptographic synchronous functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
6.4.2 CPACF protected key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
6.5 Crypto Express5S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
6.5.1 Cryptographic asynchronous functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
6.5.2 Crypto Express5S as a CCA coprocessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
6.5.3 Crypto Express5S as an EP11 coprocessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
6.5.4 Crypto Express5S as an accelerator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
6.5.5 Management of Crypto Express5S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
6.6 TKE workstation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
6.6.1 Logical partition, TKE host, and TKE target . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
6.6.2 Optional smart card reader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
6.6.3 TKE workstation with Licensed Internal Code 8.0. . . . . . . . . . . . . . . . . . . . . . . . 223
6.6.4 TKE workstation with Licensed Internal Code 8.1. . . . . . . . . . . . . . . . . . . . . . . . 223
6.6.5 TKE hardware support and migration information. . . . . . . . . . . . . . . . . . . . . . . . 224
6.7 Cryptographic functions comparison. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
6.8 Cryptographic software support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Chapter 7. Software support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
7.1 Operating systems summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
7.2 Support by operating system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
7.2.1 z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
7.2.2 z/VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
7.2.3 z/VSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
7.2.4 z/TPF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
7.2.5 Linux on z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
7.2.6 KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
7.2.7 z13s function support summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
7.3 Support by function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
7.3.1 Single system image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
7.3.2 zIIP support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
7.3.3 Transactional Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
7.3.4 Maximum main storage size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
7.3.5 Flash Express . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
7.3.6 z Enterprise Data Compression Express . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
7.3.7 10GbE RoCE Express . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
7.3.8 Large page support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
7.3.9 Hardware decimal floating point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
7.3.10 Up to 40 LPARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
7.3.11 Separate LPAR management of PUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
7.3.12 Dynamic LPAR memory upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
7.3.13 LPAR physical capacity limit enforcement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
7.3.14 Capacity Provisioning Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
7.3.15 Dynamic PU add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
7.3.16 HiperDispatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
7.3.17 The 63.75-K subchannels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
7.3.18 Multiple Subchannel Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
7.3.19 Three subchannel sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
7.3.20 IPL from an alternative subchannel set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
7.3.21 Modified Indirect Data Address Word facility . . . . . . . . . . . . . . . . . . . . . . . . . . 257
7.3.22 HiperSockets Completion Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
7.3.23 HiperSockets integration with the intraensemble data network . . . . . . . . . . . . 258
7.3.24 HiperSockets Virtual Switch Bridge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
7.3.25 HiperSockets Multiple Write Facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
7.3.26 HiperSockets IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
7.3.27 HiperSockets Layer 2 support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
7.3.28 HiperSockets network traffic analyzer for Linux on z Systems . . . . . . . . . . . . . 260
7.3.29 FICON Express16S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
7.3.30 FICON Express8S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
7.3.31 FICON Express8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
7.3.32 z/OS Discovery and Auto-Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
7.3.33 High-performance FICON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
7.3.34 Request node identification data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
7.3.35 32 K subchannels for the FICON Express16S . . . . . . . . . . . . . . . . . . . . . . . . . 266
7.3.36 Extended distance FICON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
7.3.37 Platform and name server registration in FICON channel . . . . . . . . . . . . . . . . 266
7.3.38 FICON link incident reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
7.3.39 FCP provides increased performance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
7.3.40 N_Port ID Virtualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
7.3.41 OSA-Express5S 10-Gigabit Ethernet LR and SR . . . . . . . . . . . . . . . . . . . . . . . 268
7.3.42 OSA-Express5S Gigabit Ethernet LX and SX. . . . . . . . . . . . . . . . . . . . . . . . . . 268
7.3.43 OSA-Express5S 1000BASE-T Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
7.3.44 OSA-Express4S 10-Gigabit Ethernet LR and SR . . . . . . . . . . . . . . . . . . . . . . . 270
7.3.45 OSA-Express4S Gigabit Ethernet LX and SX. . . . . . . . . . . . . . . . . . . . . . . . . . 271
7.3.46 OSA-Express4S 1000BASE-T Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
7.3.47 Open Systems Adapter for IBM zAware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
7.3.48 Open Systems Adapter for Ensemble. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
7.3.49 Intranode management network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
7.3.50 Intraensemble data network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
7.3.51 OSA-Express5S and OSA-Express4S NCP support . . . . . . . . . . . . . . . . . . . . 274
7.3.52 Integrated Console Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
7.3.53 VLAN management enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
7.3.54 GARP VLAN Registration Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
7.3.55 Inbound workload queuing for OSA-Express5S and OSA-Express4S . . . . . . . 276
7.3.56 Inbound workload queuing for Enterprise Extender . . . . . . . . . . . . . . . . . . . . . 277
7.3.57 Querying and displaying an OSA configuration . . . . . . . . . . . . . . . . . . . . . . . . 277
7.3.58 Link aggregation support for z/VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
7.3.59 Multi-VSwitch Link Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
7.3.60 QDIO data connection isolation for z/VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
7.3.61 QDIO interface isolation for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
7.3.62 QDIO optimized latency mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7.3.63 Large send for IPv6 packets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7.3.64 OSA-Express5S and OSA-Express4S checksum offload. . . . . . . . . . . . . . . . . 280
7.3.65 Checksum offload for IPv4 and IPv6 packets when in QDIO mode. . . . . . . . . . 280
7.3.66 Adapter interruptions for QDIO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
7.3.67 OSA Dynamic LAN idle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
7.3.68 OSA Layer 3 virtual MAC for z/OS environments . . . . . . . . . . . . . . . . . . . . . . . 281
7.3.69 QDIO Diagnostic Synchronization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
7.3.70 Network Traffic Analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
7.3.71 Program-directed re-IPL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
7.3.72 Coupling over InfiniBand and Integrated Coupling Adapter . . . . . . . . . . . . . . . 282
7.3.73 Dynamic I/O support for InfiniBand and ICA CHPIDs . . . . . . . . . . . . . . . . . . . . 283
7.3.74 Simultaneous multithreading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
7.3.75 Single Instruction Multiple Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
7.3.76 Shared Memory Communication - Direct Memory Access . . . . . . . . . . . . . . . . 284
7.4 Cryptographic support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
7.4.1 CP Assist for Cryptographic Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
7.4.2 Crypto Express5S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
7.4.3 Web deliverables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
7.4.4 z/OS Integrated Cryptographic Service Facility FMIDs. . . . . . . . . . . . . . . . . . . . 286
7.4.5 ICSF migration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
7.5 GDPS Virtual Appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
7.6 z/OS migration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
7.6.1 General guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
7.6.2 Hardware configuration definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
7.6.3 Coupling links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
7.6.4 Large page support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
7.6.5 Capacity Provisioning Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
7.6.6 Decimal floating point and z/OS XL C/C++ considerations. . . . . . . . . . . . . . . . . 291
7.7 IBM z Advanced Workload Analysis Reporter (zAware) . . . . . . . . . . . . . . . . . . . . . . . 291
7.7.1 z Appliance Container Infrastructure mode LPAR . . . . . . . . . . . . . . . . . . . . . . . 292
7.8 Coupling facility and CFCC considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
7.8.1 CFCC Level 21 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
7.8.2 Flash Express exploitation by CFCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
7.8.3 CFCC Coupling Thin Interrupts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
7.9 Simultaneous multithreading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
7.10 Single-instruction multiple-data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
7.11 The MIDAW facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
7.11.1 MIDAW technical description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
7.11.2 Extended format data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
7.11.3 Performance benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
7.12 IOCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
7.13 Worldwide port name tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
7.14 ICKDSF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
7.15 IBM z BladeCenter Extension (zBX) Model 004 software support . . . . . . . . . . . . . . 304
7.15.1 IBM Blades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
7.15.2 IBM WebSphere DataPower Integration Appliance XI50 for zEnterprise . . . . . 304
7.16 Software licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
7.16.1 Software licensing considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
7.16.2 Monthly license charge pricing metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
7.16.3 Advanced Workload License Charges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
7.16.4 Advanced Entry Workload License Charge . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
7.16.5 System z New Application License Charges. . . . . . . . . . . . . . . . . . . . . . . . . . . 308
7.16.6 Midrange workload license charges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
7.16.7 Parallel Sysplex License Charges. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
7.16.8 z Systems International Program License Agreement . . . . . . . . . . . . . . . . . . . 309
7.16.9 zBX licensed software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
7.17 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Chapter 8. System upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
8.1 Upgrade types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
8.1.1 Overview of upgrade types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
8.1.2 Terminology related to CoD for z13s systems . . . . . . . . . . . . . . . . . . . . . . . . . . 313
8.1.3 Permanent upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
8.1.4 Temporary upgrades. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
8.2 Concurrent upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
8.2.1 Model upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
8.2.2 Customer Initiated Upgrade facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
8.2.3 Summary of concurrent upgrade functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
8.3 Miscellaneous equipment specification upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
8.3.1 MES upgrade for processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
8.3.2 MES upgrade for memory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
8.3.3 Preplanned Memory feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
8.3.4 MES upgrades for the zBX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
8.4 Permanent upgrade through the CIU facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
8.4.1 Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
8.4.2 Retrieval and activation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8.5 On/Off Capacity on Demand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
8.5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
8.5.2 Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
8.5.3 On/Off CoD testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
8.5.4 Activation and deactivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
8.5.5 Termination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
8.5.6 IBM z/OS capacity provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
8.6 Capacity for Planned Event. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
8.7 Capacity BackUp. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
8.7.1 Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
8.7.2 CBU activation and deactivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
8.7.3 Automatic CBU for Geographically Dispersed Parallel Sysplex . . . . . . . . . . . . . 349
8.8 Nondisruptive upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
8.8.1 Processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
8.8.2 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
8.8.3 I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
8.8.4 Cryptographic adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
8.8.5 Special features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
8.8.6 Concurrent upgrade considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
8.9 Summary of capacity on-demand offerings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Chapter 9. Reliability, availability, and serviceability. . . . . . . . . . . . . . . . . . . . . . . . . . 355
9.1 The RAS strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
9.2 Availability characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
9.3 RAS functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
9.3.1 Unscheduled outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
9.3.2 Scheduled outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
9.4 Enhanced Driver Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
9.5 RAS capability for the HMC and SE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
9.6 RAS capability for zBX Model 004 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
BladeCenter components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
zBX firmware. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
9.6.1 zBX RAS and the IBM z Unified Resource Manager . . . . . . . . . . . . . . . . . . . . . 364
9.6.2 zBX Model 004: 2458-004 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
9.7 Considerations for PowerHA in zBX environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
9.8 IBM z Advanced Workload Analysis Reporter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
9.9 RAS capability for Flash Express . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
Chapter 10. Environmental requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
10.1 IBM z13s power and cooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
10.1.1 Power and I/O cabling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
10.1.2 Power consumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
10.1.3 Internal Battery Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
10.1.4 Balanced Power Plan Ahead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
10.1.5 Emergency power-off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
10.1.6 Cooling requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
10.1.7 New environmental class for IBM z13s servers: ASHRAE Class A3 . . . . . . . . 374
10.2 IBM z13s physical specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
10.2.1 Weights and dimensions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
10.2.2 Four-in-one (4-in-1) bolt-down kit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
10.3 IBM zBX environmental components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
10.3.1 IBM zBX configurations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
10.3.2 IBM zBX power components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
10.3.3 IBM zBX cooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
10.3.4 IBM zBX physical specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
10.4 Energy management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
10.4.1 Power estimation tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
10.4.2 Query maximum potential power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
10.4.3 System Activity Display and Monitors Dashboard. . . . . . . . . . . . . . . . . . . . . . . 382
10.4.4 IBM Systems Director Active Energy Manager . . . . . . . . . . . . . . . . . . . . . . . . . 383
10.4.5 Unified Resource Manager: Energy management . . . . . . . . . . . . . . . . . . . . . . 384
Chapter 11. Hardware Management Console and Support Elements . . . . . . . . . . . . 387
11.1 Introduction to the HMC and SE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
11.2 HMC and SE enhancements and changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
11.2.1 Driver Level 27 HMC and SE enhancements and changes . . . . . . . . . . . . . . . 389
11.2.2 Rack-mounted HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
11.2.3 New Support Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
11.2.4 New backup options for HMCs and primary SEs . . . . . . . . . . . . . . . . . . . . . . . 393
11.2.5 SE driver support with the HMC driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
11.2.6 HMC feature codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
11.2.7 Tree Style User Interface and Classic Style User Interface . . . . . . . . . . . . . . . 398
11.3 HMC and SE connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
11.3.1 Network planning for the HMC and SE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
11.3.2 Hardware prerequisite changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
11.3.3 RSF is broadband-only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
11.3.4 TCP/IP Version 6 on the HMC and SE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
11.3.5 Assigning addresses to the HMC and SE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
11.4 Remote Support Facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
11.4.1 Security characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
11.4.2 RSF connections to IBM and Enhanced IBM Service Support System . . . . . . 404
11.4.3 HMC and SE remote operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
11.5 HMC and SE key capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
11.5.1 Central processor complex management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
11.5.2 Logical partition management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
11.5.3 Operating system communication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
11.5.4 HMC and SE microcode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
11.5.5 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
11.5.6 Capacity on demand support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
11.5.7 Features on Demand support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
11.5.8 Server Time Protocol support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
11.5.9 NTP client and server support on the HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
11.5.10 Security and user ID management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
11.5.11 System Input/Output Configuration Analyzer on the SE and HMC. . . . . . . . . 424
11.5.12 Automated operations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
11.5.13 Cryptographic support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
11.5.14 Installation support for z/VM using the HMC. . . . . . . . . . . . . . . . . . . . . . . . . . 426
11.5.15 Dynamic Partition Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
11.6 HMC in an ensemble. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
11.6.1 Unified Resource Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
11.6.2 Ensemble definition and management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
11.6.3 HMC availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
11.6.4 Considerations for multiple HMCs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
11.6.5 HMC browser session to a primary HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
11.6.6 HMC ensemble topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Chapter 12. Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
12.1 Performance information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
12.2 LSPR workload suite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
12.3 Fundamental components of workload capacity performance . . . . . . . . . . . . . . . . . 439
12.3.1 Instruction path length. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
12.3.2 Instruction complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
12.3.3 Memory hierarchy and memory nest. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
12.4 Relative nest intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
12.5 LSPR workload categories based on relative nest intensity . . . . . . . . . . . . . . . . . . . 443
12.6 Relating production workloads to LSPR workloads . . . . . . . . . . . . . . . . . . . . . . . . . 444
12.7 Workload performance variation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
12.7.1 Main performance improvement drivers with z13s . . . . . . . . . . . . . . . . . . . . . . 446
Appendix A. IBM z Appliance Container Infrastructure. . . . . . . . . . . . . . . . . . . . . . . . 449
A.1 What is zACI? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
A.2 Why use zACI? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
A.3 IBM z Systems servers and zACI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
12.7.2 Example: Deploying IBM zAware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Appendix B. IBM z Systems Advanced Workload Analysis Reporter (IBM zAware). 453
B.1 Troubleshooting in complex IT environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
B.2 Introducing IBM zAware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
B.2.1 Hardware requirements overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
B.2.2 Value of IBM zAware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
B.2.3 IBM z/OS Solutions to improve problem diagnostic procedures. . . . . . . . . . . . . 457
B.3 Understanding IBM zAware technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
B.3.1 Training period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
B.3.2 Priming IBM zAware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
B.3.3 IBM zAware ignore message support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
B.3.4 IBM zAware graphical user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
B.3.5 IBM zAware complements your existing tools . . . . . . . . . . . . . . . . . . . . . . . . . . 464
B.4 IBM zAware prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
B.4.1 IBM zAware features and ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
12.7.3 Feature on Demand (FoD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
B.4.2 IBM zAware operating requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
B.5 Configuring and using IBM zAware virtual appliance . . . . . . . . . . . . . . . . . . . . . . . . . 468
Appendix C. Channel options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
C.1 Channel options supported on z13s servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
C.2 Maximum unrepeated distance for FICON SX features . . . . . . . . . . . . . . . . . . . . . . . 473
Appendix D. Shared Memory Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
D.1 Shared Memory Communications overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
D.2 Shared Memory Communication over RDMA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
D.2.1 RDMA technology overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
D.2.2 Shared Memory Communications over RDMA. . . . . . . . . . . . . . . . . . . . . . . . . . 477
D.2.3 Single Root I/O Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
D.2.4 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
D.2.5 10GbE RoCE Express feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
D.2.6 10GbE RoCE Express configuration example . . . . . . . . . . . . . . . . . . . . . . . . . . 481
D.2.7 Hardware configuration definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
D.2.8 Software exploitation of SMC-R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
D.2.9 SMC-R support overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
D.2.10 SMC-R use cases for z/OS to z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
D.2.11 Enabling SMC-R support in z/OS Communications Server . . . . . . . . . . . . . . . 488
D.3 Shared Memory Communications - Direct Memory Access . . . . . . . . . . . . . . . . . . . . 489
D.3.1 Internal Shared Memory technology overview . . . . . . . . . . . . . . . . . . . . . . . . . . 490
D.3.2 SMC-D over Internal Shared Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
D.3.3 Internal Shared Memory - Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
D.3.4 Virtual PCI Function (vPCI Adapter). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
D.3.5 Planning considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
D.3.6 Hardware configuration definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
D.3.7 Sample IOCP FUNCTION statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
D.3.8 Software exploitation of ISM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
D.3.9 SMC-D over ISM prerequisites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
D.3.10 Enabling SMC-D support in z/OS Communications Server . . . . . . . . . . . . . . . 499
D.3.11 SMC-D support overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
Appendix E. IBM Dynamic Partition Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
E.1 What is IBM Dynamic Partition Manager? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
E.2 Why use DPM?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
E.3 IBM z Systems servers and DPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
E.4 Setting up the DPM environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
E.4.1 Defining partitions in DPM mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
E.4.2 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Appendix F. KVM for IBM z Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
F.1 Why KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
F.1.1 Advantages of using KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 512
F.2 IBM z Systems servers and KVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
F.2.1 Storage connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
F.2.2 Network connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
F.2.3 Hardware Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
F.2.4 Open source virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
F.2.5 What comes with KVM for IBM z Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
F.3 Managing the KVM for IBM z Systems environment. . . . . . . . . . . . . . . . . . . . . . . . . . 518
F.3.1 Hypervisor Performance Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
F.4 Using IBM Cloud Manager with OpenStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
Appendix G. Native Peripheral Component Interconnect Express . . . . . . . . . . . . . . 521
G.1 Design of native PCIe I/O adapter management . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
G.1.1 Native PCIe adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
G.1.2 Integrated firmware processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
G.1.3 Resource groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
G.1.4 Management tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
G.2 Native PCIe feature plugging rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
G.3 Native PCIe feature definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
G.3.1 FUNCTION identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
G.3.2 Virtual function number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
G.3.3 Physical network identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
Appendix H. Flash Express . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
H.1 Flash Express overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
H.2 Using Flash Express. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
H.3 Security on Flash Express . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
H.3.1 Integrated Key Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
H.3.2 Key serving topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
H.3.3 Error recovery scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Appendix I. GDPS Virtual Appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
I.1 GDPS overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
I.2 Overview of GDPS Virtual Appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
I.3 GDPS Virtual Appliance recovery scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
I.3.1 Planned disk outage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
I.3.2 Unplanned disk outage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
I.3.3 Disaster recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
Appendix J. IBM zEnterprise Data Compression Express . . . . . . . . . . . . . . . . . . . . . 551
J.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
J.2 zEDC Express . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
J.3 Software support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
J.3.1 z/VM V6R3 support with PTFs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
J.3.2 IBM SDK 7 for z/OS Java support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
J.3.3 IBM z Systems Batch Network Analyzer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555

Figures

2-1 z13s frame: Rear and front view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2-2 z13s (one CPC drawer) I/O drawer configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2-3 z13s (two CPC Drawers) I/O Drawer Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2-4 Model N10 components (top view) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2-5 Model N20 components (top view) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2-6 CPC drawer (front view) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2-7 CPC drawer logical structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2-8 Drawer to drawer communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2-9 Oscillators cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2-10 Conceptual overview of system control elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2-11 Redundant DCAs and blowers for CPC drawers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2-12 Single chip modules (PU SCM and SC SCM) N20 CPC Drawer . . . . . . . . . . . . . . . . 47
2-13 PU Chip Floorplan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2-14 PU Core floorplan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2-15 SC chip diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2-16 Cache level structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2-17 CPC drawer memory topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2-18 RAIM configuration per node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2-19 Model N10 memory plug locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2-20 Model N20 one drawer memory plug locations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2-21 Model N20 two drawer memory plug locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2-22 Memory allocation diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
2-23 Model N10 drawer: Location of the PCIe and IFB fanouts . . . . . . . . . . . . . . . . . . . . . 66
2-24 Model N20 two CPC drawer: Locations of the PCIe and IFB fanouts. . . . . . . . . . . . . 66
2-25 Redundant I/O interconnect for I/O drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2-26 Redundant I/O interconnect for PCIe I/O drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2-27 z13s upgrade paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3-1 z13s cache levels and memory hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3-2 z13s cache topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3-3 z13s and zBC12 cache level comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3-4 z13s CPC drawer communication topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3-5 Point-to-point topology z13s two CPC drawers communication . . . . . . . . . . . . . . . . . . 88
3-6 Two threads running simultaneously on the same processor core. . . . . . . . . . . . . . . . 90
3-7 Schematic representation of add SIMD instruction with 16 elements in each vector . . 91
3-8 Floating point registers overlaid by vector registers . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3-9 z13s PU core logical diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3-10 z13s in-order and out-of-order core execution improvements . . . . . . . . . . . . . . . . . . 94
3-11 Compression and cryptography accelerators on a core in the chip . . . . . . . . . . . . . . 96
3-12 PU error detection and recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3-13 ICF options: Shared ICFs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3-14 Logical flow of Java code execution on a zIIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3-15 IRD LPAR cluster example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
3-16 Sysplex hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
3-17 Dynamic CF dispatching (shared CPs or shared ICF PUs) . . . . . . . . . . . . . . . . . . . 134
4-1 I/O drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4-2 I/O domains of an I/O drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
4-3 PCIe I/O drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4-4 z13s (N20) connectivity to PCIe I/O drawers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4-5 PCIe I/O drawer with 32 PCIe slots and four I/O domains . . . . . . . . . . . . . . . . . . . . . 146
4-6 Infrastructure for PCIe and InfiniBand coupling links . . . . . . . . . . . . . . . . . . . . . . . . . 148
4-7 OM3 50/125 µm multimode fiber cable with MPO connectors . . . . . . . . . . . . . . . . . . 150
4-8 z13 Parallel Sysplex coupling connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
4-9 CPC drawer front view: Coupling links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
4-10 HCA3-O Fanouts: z13s versus z114 / zBC12 servers . . . . . . . . . . . . . . . . . . . . . . . 180
4-11 PCIe I/O drawer that is fully populated with Flash Express cards. . . . . . . . . . . . . . . 185
5-1 Multiple channel subsystems and multiple subchannel sets. . . . . . . . . . . . . . . . . . . . 188
5-2 Output for display ios,config(all) command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5-3 z Systems CSS: Channel subsystems with channel spanning . . . . . . . . . . . . . . . . . . 194
5-4 CSS, LPAR, and identifier example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
6-1 Three levels of protection with three levels of speed. . . . . . . . . . . . . . . . . . . . . . . . . . 203
6-2 Cryptographic hardware supported on IBM z13s servers . . . . . . . . . . . . . . . . . . . . . . 204
6-3 z13s cryptographic support in z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
6-4 The cryptographic coprocessor CPACF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
6-5 CPACF key wrapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
6-6 Customize Image Profiles: Crypto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
6-7 SE: View LPAR Cryptographic Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
7-1 Result of the display core command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
7-2 Simultaneous Multithreading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
7-3 IDAW usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
7-4 MIDAW format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
7-5 MIDAW usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
8-1 The provisioning architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
8-2 Example of temporary upgrade activation sequence . . . . . . . . . . . . . . . . . . . . . . . . . 321
8-3 Memory sizes and upgrades for the N10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
8-4 Memory sizes and upgrades for the single drawer N20 . . . . . . . . . . . . . . . . . . . . . . . 326
8-5 Memory sizes and upgrades for the two drawer N20 . . . . . . . . . . . . . . . . . . . . . . . . . 326
8-6 Feature on Demand window for zBX Blades features HWMs. . . . . . . . . . . . . . . . . . . 327
8-7 Permanent upgrade order example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
8-8 CIU-eligible order activation example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
8-9 Machine profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8-10 IBM z13s Perform Model Conversion window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
8-11 Customer Initiated Upgrade Order Activation Number window. . . . . . . . . . . . . . . . . 332
8-12 Order On/Off CoD record window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
8-13 On/Off CoD order example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
8-14 The capacity provisioning infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
8-15 A Capacity Provisioning Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
8-16 Example of C02 with three CBU features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
8-17 STSI output on z13s server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
9-1 Typical PowerHA cluster diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
9-2 Flash Express RAS components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
10-1 IBM z13s cabling options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
10-2 Top Exit I/O cabling feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
10-3 Recommended environmental conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
10-4 Rear Door Heat eXchanger (left) and functional diagram . . . . . . . . . . . . . . . . . . . . . 379
10-5 Top Exit Support for the zBX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
10-6 Maximum potential power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
10-7 Power usage on the System Activity Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
11-1 Diagnostic sampling authorization control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
11-2 Change LPAR security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
11-3 Rack-mounted HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
11-4 SEs location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
11-5 Configure backup FTP server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
11-6 Backup Critical Console Data destinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
11-7 Backup Critical Data destinations of SEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
11-8 Scheduled Operation for HMC backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
11-9 Scheduled Operation for SEs backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
11-10 HMC and SE connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
11-11 SE Physical connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
11-12 HMC connectivity examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
11-13 Change LPAR Group Controls - Group absolute capping . . . . . . . . . . . . . . . . . . . 407
11-14 Manage Trusted Signing Certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
11-15 Import Remote Certificate example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
11-16 Configure 3270 Emulators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
11-17 Microcode terms and interaction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
11-18 System Information: Bundle level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
11-19 HMC Monitor task group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
11-20 Monitors Dashboard task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
11-21 Display the activity for an LPAR by processor type . . . . . . . . . . . . . . . . . . . . . . . . 413
11-22 Display the SMT usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
11-23 Monitors Dashboard: Crypto function integration . . . . . . . . . . . . . . . . . . . . . . . . . . 414
11-24 Monitors Dashboard - Flash Express function integration . . . . . . . . . . . . . . . . . . . 414
11-25 Environmental Efficiency Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
11-26 Customize Console Date and Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
11-27 Timing Network window with Scheduled DST and Scheduled leap second offset . 419
11-28 HMC NTP broadband authentication support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
11-29 Time coordination for zBX components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
11-30 Cryptographic Configuration window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
11-31 Enabling Dynamic Partition Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
11-32 HMC welcome window (CPC in DPM Mode) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
11-33 Unified Resource Manager functions and suites . . . . . . . . . . . . . . . . . . . . . . . . . . 429
11-34 Ensemble example with primary and alternate HMCs . . . . . . . . . . . . . . . . . . . . . . 435
12-1 z13s to zBC12, z114, z10 BC, and z9 BC performance comparison . . . . . . . . . . . . 438
12-2 Memory hierarchy on the z13s one CPC drawer system (two nodes) . . . . . . . . . . . 441
12-3 Relative Nest Intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
12-4 The traditional factors that were used to categorize workloads . . . . . . . . . . . . . . . . 442
12-5 New z/OS workload categories defined . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
A-1 zACI Framework basic outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
A-2 IBM zAware Image Profile based on zACI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
A-3 zACI icon in the HMC interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
B-1 IBM zAware complements an existing environment . . . . . . . . . . . . . . . . . . . . . . . . . . 456
B-2 IBM zAware shortens the business impact of a problem . . . . . . . . . . . . . . . . . . . . . . 457
B-3 Basic components of the IBM zAware environment . . . . . . . . . . . . . . . . . . . . . . . . . . 459
B-4 HMC Image Profile for an IBM zAware LPAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
B-5 HMC Image Profile for an IBM zAware LPAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
B-6 IBM zAware Heat Map view analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
B-7 IBM zAware bar score with intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
B-8 Feature on Demand window for IBM zAware feature . . . . . . . . . . . . . . . . . . . . . . . . . 466
D-1 RDMA technology overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
D-2 Dynamic transition from TCP to SMC-R. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
D-3 Shared RoCE mode concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
D-4 10GbE RoCE Express . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
D-5 10GbE RoCE Express sample configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
D-6 RNIC and OSD interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
D-7 Physical network ID example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
D-8 Reduced latency and improved wall clock time with SMC-R . . . . . . . . . . . . . . . . . . . 485
D-9 Sysplex Distributor before RoCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
D-10 Sysplex Distributor after RoCE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
D-11 Connecting two LPARs on the same CPC using SMC-D. . . . . . . . . . . . . . . . . . . . . 490
D-12 Clustered systems: Multitier application solution, RDMA, and DMA . . . . . . . . . . 490
D-13 Dynamic transition from TCP to SMC-D by using two OSA-Express adapters . . . . 491
D-14 Concept of vPCI adapter implementation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
D-15 SMC-D configuration that uses Ethernet to provide connectivity . . . . . . . . . . . . . . . 494
D-16 SMC-D configuration that uses HiperSockets to provide connectivity . . . . . . . . . . . 495
D-17 ISM adapters shared between LPARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
D-18 Multiple LPARs connected through multiple VLANs. . . . . . . . . . . . . . . . . . . . . . . . . 497
D-19 SMCD parameter in GLOBALCONFIG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
E-1 High-level view of DPM implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
E-2 Enabling DPM mode of operation from the SE CPC configuration options . . . . . . . . 504
E-3 Entering the OSA ports that will be used by the management network . . . . . . . . . . . 505
E-4 DPM mode welcome window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
E-5 Traditional HMC Welcome window when no CPCs are running in DPM mode . . . . . 507
E-6 User Options when the HMC presents the DPM welcome window . . . . . . . . . . . . . . 507
E-7 DPM wizard welcome window options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
F-1 KVM running in z Systems LPARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
F-2 Open source virtualization (KVM for IBM z Systems) . . . . . . . . . . . . . . . . . . . . . . . . . 516
F-3 KVM for IBM z Systems management interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
F-4 KVM management by using the libvirt API layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
G-1 I/O domains and resource groups managed by the IFP . . . . . . . . . . . . . . . . . . . . . . 523
G-2 Sample output of AO data or PCHID report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
G-3 Example of IOCP statements for zEDC Express and 10GbE RoCE Express . . . . . . 527
H-1 z Systems storage hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
H-2 Flash Express PCIe adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
H-3 PCIe I/O drawer that is fully populated with Flash Express cards . . . . . . . . . . . . . . . 531
H-4 Sample SE/HMC window for Flash Express allocation to LPAR . . . . . . . . . . . . . . . . 533
H-5 Flash Express allocation in z/OS LPARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
H-6 Integrated Key Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
H-7 Key serving topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
I-1 GDPS offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
I-2 Positioning a virtual appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
I-3 GDPS virtual appliance architecture overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
I-4 GDPS Storage failover. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
J-1 Relationships between the PCIe I/O drawer card slots, I/O domains, and resource groups . . . . . . . . . . . . . . . . . . . . . . . . 553

Notices

This information was developed for products and services offered in the US. This material might be available from IBM in other languages. However, you may be required to own a copy of the product or product version in that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to actual people or business enterprises is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright and trademark information” at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks or registered trademarks of International Business Machines Corporation, and might also be trademarks or registered trademarks in other countries.
AIX®, CICS®, Cognos®, DataPower®, DB2 Connect™, DB2®, developerWorks®, Distributed Relational Database Architecture™, Domino®, DRDA®, DS8000®, ECKD™, FICON®, FlashCopy®, GDPS®, Geographically Dispersed Parallel Sysplex™, HACMP™, HiperSockets™, HyperSwap®, IBM Systems Director Active Energy Manager™, IBM z Systems™, IBM z13™, IBM®, IMS™, Language Environment®, Lotus®, MVS™, NetView®, OMEGAMON®, Parallel Sysplex®, Passport Advantage®, Power Systems™, POWER6®, POWER7®, PowerHA®, PowerPC®, PowerVM®, PR/SM™, Processor Resource/Systems Manager™, RACF®, Redbooks®, Redpaper™, Redpapers™, Redbooks (logo)®, Resource Link®, Resource Measurement Facility™, RMF™, System Storage®, System z10®, System z9®, System z®, SystemMirror®, Tivoli®, VIA®, VTAM®, WebSphere®, z Systems™, z/Architecture®, z/OS®, z/VM®, z/VSE®, z10™, z13™, z9®, zEnterprise®
The following terms are trademarks of other companies:
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.

IBM REDBOOKS PROMOTIONS

Find and read thousands of IBM Redbooks publications: search, bookmark, save, and organize favorites; get personalized notifications of new content; and link to the latest Redbooks blogs and videos. Download the latest version of the Redbooks Mobile App for iOS and Android.
Promote your business in an IBM Redbooks publication. Place a Sponsorship Promotion in an IBM Redbooks publication, featuring your business or solution with a link to your web site. Qualified IBM Business Partners may place a full-page promotion in the most popular Redbooks publications. Imagine the power of being seen by users who download millions of Redbooks publications each year! For more information, see ibm.com/Redbooks (About Redbooks, Business Partner Programs).

Preface

Digital business has been driving the transformation of underlying information technology (IT) infrastructure to be more efficient, secure, adaptive, and integrated. IT must be able to handle the explosive growth of mobile clients and employees. It also must be able to process enormous amounts of data to provide deep and real-time insights to help achieve the greatest business impact.
This IBM® Redbooks® publication addresses the new IBM z Systems™ single frame, the IBM z13s server. IBM z Systems servers are the trusted enterprise platform for integrating data, transactions, and insight. A data-centric infrastructure must always be available with a 99.999% or better availability, have flawless data integrity, and be secured from misuse. It needs to be an integrated infrastructure that can support new applications. It also needs to have integrated capabilities that can provide new mobile capabilities with real-time analytics delivered by a secure cloud infrastructure.
IBM z13s servers are designed with improved scalability, performance, security, resiliency, availability, and virtualization. The superscalar design allows z13s servers to deliver a record level of capacity over the prior single frame z Systems server. In its maximum configuration, the z13s server is powered by up to 20 client characterizable microprocessors (cores) running at 4.3 GHz. This configuration can run more than 18,000 millions of instructions per second (MIPS) and up to 4 TB of client memory. The IBM z13s Model N20 is estimated to provide up to 100% more total system capacity than the IBM zEnterprise® BC12 Model H13.
This book provides information about the IBM z13s server and its functions, features, and associated software support. Greater detail is offered in areas relevant to technical planning. It is intended for systems engineers, consultants, planners, and anyone who wants to understand the IBM z Systems™ functions and plan for their usage. It is not intended as an introduction to mainframes. Readers are expected to be generally familiar with existing IBM z Systems technology and terminology.

Authors

This book was produced by a team working at the International Technical Support Organization, Poughkeepsie Center.
Octavian Lascu is a Senior IT Consultant for IBM Romania with over 25 years of IT experience. He specializes in designing, implementing, and supporting complex IT infrastructure environments (cloud, systems, storage, and networking), including high availability and disaster recovery solutions and high-performance computing deployments. He has developed materials for and taught over 50 workshops for technical audiences around the world. He has written several IBM Redbook and IBM Redpaper™ publications.
Barbara Sannerud is a Worldwide Technical Enablement Manager for the IBM z Systems platform. She has 30 years of experience in services, strategy, marketing, and product management. Before her current role, she was Offering Manager for IBM z/OS®, and also held competitive marketing roles. She holds math and MBA degrees and joined IBM from the software and professional services industries where she specialized in performance, systems management, and security.
Cecilia A. De Leon is a Certified IT Specialist in the Philippines. She has 15 years of experience in the z Systems field. She has worked at IBM for 7 years. She holds a degree in Computer Engineering from Mapua Institute of Technology. Her areas of expertise include z Systems servers and operating systems. In her current role as Client Technical Specialist, she supports mainframe clients and IBM sales representatives on technical sales engagements. She has also worked as a systems programmer for large banks in the Philippines.
Edzard Hoogerbrug is a System Support Representative in The Netherlands. During the past 26 years, he has worked in various roles within IBM, mainly with mainframes. He has 14 years of experience working in EMEA L2 support for Z series after a 2.5-year assignment in Montpellier, France. He holds a degree in electrotechnology. His areas of expertise include failure analysis for z Systems hardware.
Ewerson Palacio is an IBM Distinguished Engineer and a Certified Consulting IT Specialist for Large Systems in Brazil. He has more than 40 years of experience in IBM large systems. Ewerson holds a Computer Science degree from Sao Paulo University. His areas of expertise include z Systems client technical support, mainframe architecture, infrastructure implementation, and design. He is an ITSO z Systems hardware official speaker who has presented technical ITSO seminars, workshops, and private sessions to IBM clients, IBM IT Architects, IBM IT Specialists, and IBM Business Partners around the globe. He has also been a z Systems Hardware Top Gun training designer, developer, and instructor for the last generations of the IBM high-end servers. Ewerson leads the Mainframe Specialty Services Area (MF-SSA), which is part of GTS Delivery, Technology and Engineering (DT&E). He is a member of the IBM Academy of Technology.
Franco Pinto is a Client Technical Specialist in IBM Switzerland. He has 20 years of experience in the mainframe and z/OS fields. His areas of expertise include z Systems technical pre-sales covering mainframe sizing and installation planning, and providing support on existing and new z Systems functions.
Jin J. Yang is a Senior System Service Representative in China. He joined IBM in 1999 to support z Systems products maintenance for clients in China. He has been working in the Technical Support Group to provide second-level support for z Systems clients as a country Top Gun since 2009. His areas of expertise include z Systems hardware, channel connectivity, IBM z/VM®, and Linux on z Systems.
John P. Troy is a z Systems and Storage hardware National Top Gun in the northeast area of the United States. He has 35 years of experience in the service field. His areas of expertise include z Systems server and high-end storage systems technical and customer support. John has been a z Systems hardware technical support course designer, developer, and instructor for the last six generations of IBM high-end servers.
Martin Soellig is a Consulting IT Specialist in Germany. He has 26 years of experience working in the z Systems field. He holds a degree in mathematics from University of Hamburg. His areas of expertise include z/OS and z Systems hardware, specifically in IBM Parallel Sysplex® and GDPS® environments.
Thanks to the following people for their contributions to this project:
William G. White International Technical Support Organization, Poughkeepsie Center
Robert Haimowitz Development Support Team Poughkeepsie
Patty Driever Diana Henderson Anuja Deedwaniya Lisa Schloemer Christine Smith Martin Ziskind Romney White Jerry Stevens Anthony Sofia IBM Poughkeepsie
Leslie Geer III IBM Endicott
Monika Zimmerman Carl Mayer Angel Nunez Mencias IBM Germany
Parwez Hamid IBM Poughkeepsie

Now you can become a published author, too!

Here’s an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at:
  ibm.com/redbooks
- Send your comments in an email to:
  redbooks@us.ibm.com
- Mail your comments to:
  IBM Corporation, International Technical Support Organization Dept. HYTD Mail Station P099 2455 South Road Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks

- Find us on Facebook:
  http://www.facebook.com/IBMRedbooks
- Follow us on Twitter:
  http://twitter.com/ibmredbooks
- Look for us on LinkedIn:
  http://www.linkedin.com/groups?home=&gid=2130806
- Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:
  https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
- Stay current on recent Redbooks publications with RSS Feeds:
  http://www.redbooks.ibm.com/rss.html

Chapter 1. Introducing IBM z13s servers

Digital business has been driving the transformation of IT to be more efficient, secure, adaptive, and intelligent. Today’s technology foundation must be capable of increasing the value of data, drawing on the wealth of applications on the z Systems platform, and delivering new opportunities made possible by the API economy. z Systems is designed to harvest information from huge volumes of data to unlock real-time insights, and enable smarter decision making to deliver the greatest business advantage. The hybrid cloud infrastructure requires a secured and resilient platform that is capable of in-transaction analytics, and capable of providing customers with the agility and insight they need to succeed in today’s marketplace.
The IT infrastructure must always be on to serve customers in a 24 x 7 global business, have flawless data integrity, and be capable of thwarting cyber security threats. It needs to be resilient to support the introduction of new workloads and handle spikes in processing while meeting the most demanding service levels. It must support both traditional delivery models and new “as a service” cloud delivery models. It also must support the needs of traditional and mobile knowledge workers.
This chapter introduces the basic concepts of IBM z13s servers.
This chapter includes the following sections:
- Overview of IBM z13s servers
- z13s servers highlights
- z13s technical overview
- Hardware Management Consoles and Support Elements
- IBM z BladeCenter Extension (zBX) Model 004
- IBM z Unified Resource Manager
- IBM Dynamic Partition Manager
- Operating systems and software

1.1 Overview of IBM z13s servers

The IBM z13s server, like its predecessors, is designed from the chip level up for intense data serving and transaction processing, the core of business. IBM z13s servers are designed to provide unmatched support for data processing by providing these features:
- A strong, powerful I/O infrastructure
- Larger cache sizes on the chip to bring data close to processing power
- Enhanced cryptographic support and compression capabilities of the co-processors and I/O features
- Simultaneous multithreading to drive throughput
- The 99.999%¹ application availability design of the IBM z Systems clustering technologies

z13s servers feature a newly designed eight-core² processor chip introduced and shared with the IBM z13™. This chip features innovations designed to deliver maximum performance, and up to 4 TB of available memory, representing an eight-fold increase in memory over the prior processor generation. More memory can help improve response time, minimize application constraints, speed up batch processing, and reduce processor cycles that are consumed for a specific workload. z13s servers also offer a 2.75x boost in the system I/O bandwidth, which when combined with new key I/O enhancements can reduce transfer time for the global enterprise. z13s servers also have more throughput per core (1.3 times the performance of a zBC12 based on workload and model), and more processing cores (50% over zBC12) to drive higher volumes of workloads.
Terminology: The remainder of the book uses the designation CPC to refer to the central processor complex.
The IBM operating systems participate in providing new advantages for the digital era. For instance, z/OS V2.2 running on z13s servers sets the groundwork for digital business by providing support for demanding workloads such as operational analytics and cloud along with traditional mission-critical applications. z/OS V2.2 continues to support the IBM z Integrated Information Processor (zIIP)³ that can take advantage of the simultaneous multithreading (SMT) feature implemented in the IBM z Systems processor unit (PU) for greater throughput. Applications running under z/OS V2.2 can use vector processing and single-instruction, multiple-data (SIMD) for new analytics applications, Java processing, and so on.
Operating system enhancements drive simplification and enables cost-saving consolidation opportunities. With enhancements to extend the reach of your skills, z/OS V2.2 and the entitled z/OS Management Facility can help extend the reach of administrators and other personnel, enabling them to handle configuration tasks with ease. Mobile Workload Pricing for z/OS can help reduce the cost of growth for mobile transactions that are processed by programs such as IBM CICS®, IBM IMS™, and IBM DB2® for z/OS.
IBM z/VM 6.3 has been enhanced to use the SMT and SIMD features offered on the new processor chip⁴, and also supports 64 threads⁵ for Linux workloads. With support for sharing Open Systems Adapters (OSAs) across z/VM systems, z/VM 6.3 delivers enhanced availability and reduced cost of ownership in network environments.

¹ A z/OS Parallel Sysplex is designed to provide 99.999% application availability. See:
http://www.ibm.com/systems/z/advantages/resiliency/datadriven/application.html
² Only 6 or 7 active cores per PU Single Chip Module (SCM) in z13s servers.
³ zAAP workloads are now run on zIIP.
⁴ z/VM 6.3 SMT support is provided on Integrated Facility for Linux (IFL) processors only.
⁵ z/VM supports 20 processors on z13s servers and up to 40 threads when SMT enabled.
IBM z13s brings a new approach for Enterprise-grade Linux with offerings and capabilities for availability, virtualization with z/VM, and a focus on open standards and architecture. The new support of kernel-based virtual machine (KVM) on the mainframe provides a new modern, virtualization platform to accelerate the introduction of Linux applications and new open source-based technologies.
KVM: In addition to continued investment in z/VM, IBM has made available a KVM for z Systems that can host Linux guest virtual machines. KVM for IBM z Systems can coexist with z/VM virtualization environments, z/OS, IBM z/VSE®, and z/TPF. This modern, open-source-based hypervisor enables enterprises to capitalize on virtualization capabilities by using common Linux administration skills, while enjoying the robustness of z Systems’ scalability, performance, security, and resilience. KVM for IBM z Systems is optimized for z Systems architecture and provides standard Linux and KVM interfaces for operational control of the environment. In addition, it integrates with standard OpenStack virtualization management tools, enabling enterprises to easily integrate Linux servers into their existing traditional infrastructure and cloud offerings.
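Because KVM for IBM z Systems exposes the standard Linux and KVM interfaces, guests can be inspected and controlled with the same tooling that is used on any Linux/KVM host, for example through the libvirt API. The following Python sketch is illustrative only: the connection URI and the guest name linux-guest1 are assumptions made for the example, not values defined by the product.

    import libvirt

    # Connect to the local KVM hypervisor through the standard libvirt API.
    conn = libvirt.open("qemu:///system")
    try:
        for dom in conn.listAllDomains():          # all defined guests (virtual servers)
            state = "running" if dom.isActive() else "shut off"
            print(f"{dom.name():20s} {state}")

        guest = conn.lookupByName("linux-guest1")  # hypothetical Linux guest
        if not guest.isActive():
            guest.create()                         # start the guest
    finally:
        conn.close()

Equivalent operations are typically available through the virsh command line and through OpenStack components that are layered on libvirt.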
z13s servers continue to provide heterogeneous platform investment protection with the updated IBM z BladeCenter Extension (zBX) Model 004 and IBM z Unified Resource Manager (zManager). Enhancements to the zBX include the uncoupling of the zBX from the server and installing a Support Element (SE) into the zBX. The zBX Model 002 and Model 003 can be upgraded to the zBX Model 004.

1.2 z13s servers highlights

This section reviews some of the most important features and functions of z13s servers:
- Processor and memory
- Capacity and performance
- I/O subsystem and I/O features
- Virtualization
- Increased flexibility with z/VM mode logical partition
- IBM zAware logical partition
- 10GbE RoCE Express
- Flash Express
- Reliability, availability, and serviceability design
- IBM z Appliance Container Infrastructure (zACI)
- Dynamic Partition Manager (DPM)

1.2.1 Processor and memory

IBM continues its technology leadership at a consumable entry point with the z13s servers. z13s servers are built using the IBM modular multi-drawer design that supports 1 or 2 CPC drawers per CPC. Each CPC drawer contains eight-core SCMs with either 6 or 7 cores enabled for use. The SCMs host the redesigned complementary metal-oxide semiconductor (CMOS) 14S0⁶ processor units, storage control chips, and connectors for I/O. The superscalar processor has enhanced out-of-order (OOO) instruction execution, redesigned caches, and an expanded instruction set that includes a Transactional Execution facility. It also includes innovative SMT capability, and provides a 139-instruction SIMD vector subset for better performance.

⁶ CMOS 14S0 is a 22-nanometer CMOS logic fabrication process.
Depending on the model, z13s servers can support from 64 GB to a maximum of 4 TB of usable memory, with up to 2 TB of usable memory per CPC drawer. In addition, a fixed amount of 40 GB is reserved for the hardware system area (HSA) and is not part of customer-purchased memory. Memory is implemented as a redundant array of independent memory (RAIM) and uses extra physical memory as spare memory. The RAIM function uses 20% of the physical installed memory in each CPC drawer.
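To make these memory figures concrete, the following sketch relates physically installed memory to customer-usable memory: RAIM uses 20% of the physical memory installed in each CPC drawer, and a fixed 40 GB is set aside for the HSA. This is an illustrative calculation only; the class and method names and the sample drawer size are assumptions, not values from the configurator.

// Illustrative only: relates physically installed memory to customer-usable memory
// using the 20% RAIM overhead and the fixed 40 GB HSA described above.
public class MemoryEstimate {
    static final double RAIM_OVERHEAD = 0.20; // RAIM consumes 20% of installed memory
    static final int HSA_GB = 40;             // fixed hardware system area, not customer memory

    // Memory (in GB) that remains for customer use after RAIM and HSA.
    static double usableMemoryGb(double installedGb) {
        double afterRaim = installedGb * (1.0 - RAIM_OVERHEAD);
        return afterRaim - HSA_GB;
    }

    public static void main(String[] args) {
        // Hypothetical drawer with 1,280 GB physically installed:
        // 1280 * 0.8 - 40 = 984 GB left for customer use.
        System.out.printf("Usable memory: %.0f GB%n", usableMemoryGb(1280));
    }
}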

1.2.2 Capacity and performance

The z13s server provides increased processing and I/O capacity over its predecessor, the zBC12 system. This capacity is achieved by increasing the performance of the individual processor units, increasing the number of PUs per system, redesigning the system cache, and increasing the amount of memory. The increased performance and the total system capacity available, with possible energy savings, allow consolidating diverse applications on a single platform, with significant financial savings. The introduction of new technologies and an expanded instruction set ensure that the z13s server is a high performance, reliable, and secure platform. z13s servers are designed to maximize resource utilization, and allow you to integrate and consolidate applications and data across your enterprise IT infrastructure.
z13s servers come in two models: The N10 (10 configurable PUs) and N20 (20 configurable PUs). An N20 can have one or two CPC drawers. Each drawer has two nodes (N10 only one). Each node has 13 PUs available for characterization. The second drawer for an N20 is added when more than 2 TB of memory or more I/O features are needed.
IBM z13s Model Capacity Identifier A01 is estimated to provide 60% more total system capacity than the zBC12 Model Capacity Identifier A01, with the same amount of memory and power requirements. With up to 4 TB of main storage and SMT, the performance of the z13s processors provides considerable improvement. Uniprocessor performance has also increased significantly. A z13s Model z01 offers, on average, performance improvements of 34% over the zBC12 Model z01. However, the observed performance increase varies depending on the workload type.
The IFL and zIIP processor units on z13s servers can run two simultaneous threads per clock cycle in a single processor, increasing the capacity of the IFL and zIIP processors up to
1.2 times and 1.25 times over the zBC12. However, the observed performance increase varies depending on the workload type.
z13s servers offer 26 capacity levels for up to six processors that can be characterized as CPs. This configuration gives a total of 156 distinct capacity settings and provides a range of 80 - 7123 MIPS. z13s servers deliver scalability and granularity to meet the needs of medium-sized enterprises, while also satisfying the requirements of large enterprises that have demanding, mission-critical transaction and data processing requirements.
This comparison is based on the Large System Performance Reference (LSPR) mixed workload analysis. For a description about performance and workload variation on z13s servers, see Chapter 12, “Performance” on page 437. For more information about LSPR, see:
https://www.ibm.com/servers/resourcelink/lib03060.nsf/pages/lsprindex
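The 156 distinct capacity settings quoted above follow directly from the offering structure: each of the up to six processors characterized as a CP can be set to one of 26 capacity levels. A minimal sketch of that arithmetic (class and variable names are illustrative only):

// Illustrative arithmetic: 26 capacity levels for 1 to 6 CPs yields 156 settings.
public class CapacitySettings {
    public static void main(String[] args) {
        int capacityLevels = 26; // capacity levels per CP count
        int maxCps = 6;          // maximum processors characterized as CPs
        int settings = 0;
        for (int cps = 1; cps <= maxCps; cps++) {
            settings += capacityLevels; // one capacity setting per level at this CP count
        }
        System.out.println("Distinct capacity settings: " + settings); // prints 156
    }
}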
z13s servers continue to offer all the specialty engines that are available on previous z Systems except for IBM System z® Application Assist Processors (zAAPs). A zAAP qualified
workload runs on a zIIP processor, thus reducing the complexity of the IBM z/Architecture®. zAAPs are no longer supported beginning with the z13 and z13s servers.
Workload variability
Consult the LSPR when considering performance on z13s servers. Individual logical partitions (LPARs) have more performance variability when an increased number of partitions and more PUs are available. For more information, see Chapter 12, “Performance” on page 437.
For detailed performance information, see the LSPR website at:
https://www.ibm.com/servers/resourcelink/lib03060.nsf/pages/lsprindex
The millions of service units (MSUs) ratings are available from the following website:
http://www.ibm.com/systems/z/resources/swprice/reference/exhibits/
Capacity on demand (CoD)
CoD enhancements enable clients to have more flexibility in managing and administering their temporary capacity requirements. z13s servers support the same architectural approach for CoD offerings as the zBC12 (temporary or permanent). Within z13s servers, one or more flexible configuration definitions can be available to solve multiple temporary situations and multiple capacity configurations can be active simultaneously.
The customer can stage records to prepare for many scenarios. Up to eight of these records can be installed on the server at any time. After the records are installed, the activation of the records can be done manually, or the z/OS Capacity Provisioning Manager can automatically start the activation when Workload Manager (WLM) policy thresholds are reached. Tokens are available that can be purchased for On/Off CoD either before or after workload execution (pre- or post-paid).
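The flow described above, staging up to eight temporary capacity records and activating one manually or automatically when Workload Manager policy thresholds are crossed, can be pictured with a small conceptual sketch. The class names and threshold values below are hypothetical; the real automation is provided by the z/OS Capacity Provisioning Manager, not by user code.

// Conceptual sketch: models the decision that the Capacity Provisioning Manager
// automates, activating a staged CoD record when a WLM policy threshold is crossed.
import java.util.ArrayList;
import java.util.List;

public class CodActivationSketch {
    static final int MAX_STAGED_RECORDS = 8;          // up to eight records installed at a time
    private final List<String> stagedRecords = new ArrayList<>();

    void stage(String recordId) {
        if (stagedRecords.size() >= MAX_STAGED_RECORDS) {
            throw new IllegalStateException("No more than eight records can be installed");
        }
        stagedRecords.add(recordId);
    }

    // Returns the record to activate when utilization exceeds the policy threshold.
    String evaluate(double cpuUtilization, double policyThreshold) {
        if (cpuUtilization > policyThreshold && !stagedRecords.isEmpty()) {
            return stagedRecords.remove(0);
        }
        return null; // capacity is sufficient, no activation needed
    }

    public static void main(String[] args) {
        CodActivationSketch cod = new CodActivationSketch();
        cod.stage("ONOFF-COD-RECORD-01");                            // hypothetical record ID
        System.out.println("Activate: " + cod.evaluate(0.92, 0.85)); // threshold crossed
    }
}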
LPAR group absolute capping
IBM PR/SM™ and the Hardware Management Console (HMC) have been enhanced to limit the amount of physical processor capacity that is consumed by a group of LPARs when a processor unit is defined as a general-purpose processor or as an IFL shared across a set of LPARs. Currently, a user can define the LPAR group capacity limits that allow one or more groups of LPARs to each have their own capacity limit. This new feature adds the ability to define an absolute capping value for an entire LPAR group. This group physical capacity limit is enforced as an absolute limit, so it is not affected by changes to the logical or physical configuration of the system.
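One way to picture group absolute capping is that the combined consumption of the member LPARs is clipped at the group limit, regardless of how many logical or physical processors are configured. The sketch below is a deliberately simplified model with hypothetical names and values; PR/SM performs the real enforcement in firmware and does not simply scale demand proportionally.

// Simplified model of LPAR group absolute capping: the group's combined processor
// consumption is limited to an absolute number of processors. Hypothetical values.
import java.util.HashMap;
import java.util.Map;

public class GroupCapSketch {
    // Scale each LPAR's demand so that the group total never exceeds the absolute cap.
    static Map<String, Double> applyGroupCap(Map<String, Double> demand, double groupCap) {
        double total = demand.values().stream().mapToDouble(Double::doubleValue).sum();
        if (total <= groupCap) {
            return demand; // the group is under its cap, no limiting needed
        }
        double scale = groupCap / total;
        Map<String, Double> capped = new HashMap<>();
        demand.forEach((lpar, d) -> capped.put(lpar, d * scale));
        return capped;
    }

    public static void main(String[] args) {
        Map<String, Double> demand = Map.of("LPAR1", 2.5, "LPAR2", 1.5, "LPAR3", 1.0);
        // Absolute group cap of 3.75 processors (limits can be set to hundredths of a processor)
        System.out.println(applyGroupCap(demand, 3.75));
    }
}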

1.2.3 I/O subsystem and I/O features

The z13s servers support both PCIe and InfiniBand I/O infrastructure. PCIe Gen3 features are installed in PCIe I/O drawers. Up to two PCIe I/O drawers per z13s server are supported, providing space for up to 64 PCIe features. When upgrading a zBC12 or a z114 to a z13s server, one I/O drawer is also supported as carry forward. I/O drawers were introduced with the IBM z10™ BC.
The z13s Model N20 single CPC drawer can have up to eight PCIe and four InfiniBand (IFB) fanouts; a Model N20 with two CPC drawers has up to 16 PCIe and eight IFB fanouts; the N10 can have up to four PCIe and two IFB fanouts. These fanouts are used for the I/O infrastructure and for coupling.
For I/O constraint relief, the IBM z13s server has improved scalability with support for three logical channel subsystems (LCSSs), three subchannel sets (to support more devices per logical channel subsystem), and up to 32K devices per IBM FICON® channel, up from 24K devices in the previous generation.
For improved device connectivity for parallel access volumes (PAVs), Peer-to-Peer Remote Copy (PPRC) secondary devices, and IBM FlashCopy® devices, the third subchannel set allows extending the amount of addressable external storage. In addition to performing an IPL from subchannel set 0, z13s servers allow you to also perform an IPL from subchannel set 1 (SS1), or subchannel set 2 (SS2).
The system I/O buses take advantage of the PCIe technology and the InfiniBand technology, which are also used in coupling links.
z13s connectivity supports the following I/O or special purpose features:
򐂰 Storage connectivity:
– Fibre Channel connection (FICON):
• FICON Express16S 10 KM long wavelength (LX) and short wavelength (SX)
• FICON Express8S 10 KM long wavelength (LX) and short wavelength (SX)
• FICON Express8 10 KM LX and SX (carry forward only)
򐂰 Networking connectivity:
– Open Systems Adapter (OSA):
• OSA-Express5S 10 GbE LR and SR
• OSA-Express5S GbE LX and SX
• OSA-Express5S 1000BASE-T Ethernet
• OSA-Express4S 10 GbE LR and SR (carry forward only)
• OSA-Express4S GbE LX and SX (carry forward only)
• OSA-Express4S 1000BASE-T Ethernet (carry forward only)
– IBM HiperSockets™
– Shared Memory Communications - Direct Memory Access (SMC-D) over Internal Shared Memory (ISM)
– 10 GbE Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) using Shared Memory Communications over RDMA (SMC-R)
򐂰 Coupling and Server Time Protocol (STP) connectivity:
– Integrated Coupling Adapter (ICA SR)
– Parallel Sysplex InfiniBand coupling links (IFB)
– Internal Coupling links (IC)
In addition, z13s servers support the following special function features, which are installed on the PCIe I/O drawers:
򐂰 Crypto Express5S
򐂰 Flash Express
򐂰 zEnterprise Data Compression (zEDC) Express
Flash Express
Flash Express is an innovative optional feature that was first introduced with the zEC12 and
has been enhanced with more cache memory to benefit availability during memory dump
processing for z13s and z13 GA2 servers. It is intended to provide performance improvements and better availability for critical business workloads that cannot afford any impact to service levels. Flash Express is easy to configure, and provides rapid time to value.
Flash Express implements storage-class memory (SCM) through an internal NAND Flash solid-state drive (SSD), in a PCIe card form factor. The Flash Express feature is designed to allow each LPAR to be configured with its own SCM address space.
Flash Express is used in these situations:
򐂰 By z/OS V1R13 (or later), for handling z/OS paging activity such as start of day processing.
򐂰 By Coupling Facility Control Code (CFCC) Level 20 or later (CFCC Level 21 on z13s servers), to use Flash Express as an overflow device for shared queue data. This configuration provides emergency capacity to handle IBM WebSphere® MQ shared queue buildups during abnormal situations, such as when “putters” are putting to the shared queue, but “getters” are transiently not retrieving data from the shared queue.
򐂰 By Linux for z Systems (Red Hat Enterprise Linux (RHEL) and SUSE Enterprise Linux (SLES)), for use as temporary storage.
򐂰 For stand-alone memory dumps and Supervisor Call (SVC) dumps.
򐂰 As a read/write cache, which greatly increases performance by allowing customer data to be stored temporarily in a cache in the system's HSA RAM.
For more information, see Appendix H, “Flash Express” on page 529.
10GbE RoCE Express
The 10 Gigabit Ethernet (10GbE) RoCE Express is an optional feature that uses RDMA over Converged Ethernet, and is designed to provide fast memory-to-memory communications between two z Systems CPCs. It is transparent to applications.
Use of the 10GbE RoCE Express feature helps reduce CPU consumption for applications that use the TCP/IP stack (such as IBM WebSphere Application Server accessing a DB2 database on z/OS). It can also help reduce network latency with memory-to-memory transfers using SMC-R in z/OS V2R1 and later.
The 10GbE RoCE Express feature on z13s servers can now be shared among up to 31 LPARs running z/OS, and uses both ports on the feature. z/OS V2.1 with PTF or z/OS V2.2 supports the new sharing capability of the RoCE Express features on z13s processors. Also, the z/OS Communications Server has been enhanced to support automatic selection between TCP/IP and RoCE transport layer protocols based on traffic characteristics. The 10GbE RoCE Express feature is supported on z13, z13s, zEC12, and zBC12 servers, and is installed in the PCIe I/O drawer. A maximum of 16 features can be installed. On zEC12 and zBC12, only one port can be used, whereas on the z13 and z13s servers, both RoCE Express ports can be used.
Shared Memory Communications - Direct Memory Access
In addition to supporting SMC-R, z13 and z13s servers support a new feature called Shared Memory Communications - Direct Memory Access (SMC-D). Unlike SMC-R, SMC-D does not depend on the RoCE Express feature. SMC-D affords low latency communications within a CEC by using shared memory rather than an RDMA connection.
IBM z Systems servers (z13 and z13s) now support a new ISM virtual PCIe (vPCIe) device to enable optimized cross-LPAR TCP communications that use a new “sockets-based DMA”, the SMC-D. SMC-D maintains the socket-API transparency aspect of SMC-R so that applications
that use TCP/IP communications can benefit immediately without requiring any application software or IP topology changes.
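Because SMC-D and SMC-R preserve socket-API transparency, ordinary TCP sockets code such as the following requires no modification; whether a given connection actually uses SMC-R, SMC-D, or plain TCP is decided by the z/OS Communications Server configuration, not by the application. The example is hypothetical and uses the loopback address only to keep it self-contained.

// Plain TCP sockets: an application written like this needs no changes to benefit
// from SMC-R or SMC-D when the stack selects those transports.
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SocketTransparency {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {            // ephemeral listening port
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
                     OutputStream out = s.getOutputStream()) {
                    out.write("hello over TCP".getBytes(StandardCharsets.UTF_8));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            client.start();
            try (Socket accepted = server.accept()) {
                byte[] buf = new byte[64];
                int n = accepted.getInputStream().read(buf);
                System.out.println(new String(buf, 0, n, StandardCharsets.UTF_8));
            }
            client.join();
        }
    }
}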
With its lightweight design, SMC-D is designed to improve throughput, latency, and CPU consumption over other alternatives without sacrificing quality of service. SMC-D extends the benefits of SMC-R to the same CPC operating system instances without requiring physical resources such as RoCE adapters, PCI bandwidth, ports, I/O slots, network resources, 10 GbE switches, and so on. SMC-D requires either OSA connections or HiperSockets to establish the initial TCP connection, and can coexist with them.
SMC-D uses a virtual PCIe adapter and is configured like a physical PCIe device. There are up to 32 ISM adapters, each with a unique Physical Network ID per CPC.
Notes:
򐂰 SMC-D does not support Coupling facilities or access to the IEDN network.
򐂰 Shared Memory Communication protocols (SMC-R and SMC-D) do not currently support multiple IP subnets.
zEDC Express
The growth of data that must be captured, transferred, and stored for long periods of time is unrelenting. The large amounts of data that must be handled require ever-increasing bandwidth and storage space. Software-implemented compression algorithms are costly in terms of processor resources, and storage costs are not negligible.
Beginning with the zEC12, bandwidth and storage space requirements are addressed by providing hardware-based acceleration for data compression and decompression. zEDC, an optional feature that is available for z13s servers, provides data compression with lower CPU consumption than previously existing compression technology on z Systems. zEDC compression provides these advantages, among others:
򐂰 QSAM/BSAM for better disk utilization and batch elapsed time improvements
򐂰 SMF for increased availability and online storage reduction
򐂰 DFSMSdss for better disk and tape utilization for backup data
򐂰 Java for high throughput standard compression through java.util.zip (see the example after this list)
򐂰 Encryption Facility for z/OS for better industry data exchange
򐂰 IBM Sterling Connect: Direct for z/OS for better throughput and link utilization
򐂰 ISV support for increased client value
򐂰 DFSMShsm for improved throughput and MIPS reduction
For more information, see Appendix J, “IBM zEnterprise Data Compression Express” on page 551.
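As noted in the list above, Java obtains high-throughput standard compression through java.util.zip. The example below uses only the standard java.util.zip API; whether the compression is actually offloaded to a zEDC Express feature depends on the z/OS level, the IBM Java SDK, and the system configuration, so the hardware acceleration is an assumption rather than something the application controls.

// Standard java.util.zip (zlib/gzip) compression; on a suitably configured z/OS
// system with zEDC Express, this path can be hardware accelerated transparently.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class ZedcCandidate {
    static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(input);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "sample payload to compress ".repeat(100).getBytes(StandardCharsets.UTF_8);
        byte[] compressed = gzip(data);
        System.out.println(data.length + " bytes -> " + compressed.length + " bytes");
    }
}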

1.2.4 Virtualization

The IBM Processor Resource/Systems Manager™ (PR/SM) is Licensed Internal Code (LIC) that manages and virtualizes all the installed and enabled system resources as a single large symmetric multiprocessor (SMP) system. This virtualization enables full sharing of the installed resources with high security and efficiency. It does so by configuring up to 40 LPARs, each of which has logical processors, memory, and I/O resources that are assigned from the installed CPC drawers and features. For more information about PR/SM functions, see 3.7, “Logical partitioning” on page 116.
LPAR configurations can be dynamically adjusted to optimize the virtual servers’ workloads. On z13s servers, PR/SM supports an option to limit the amount of physical processor
capacity that is consumed by an individual LPAR or by a group of LPARs. This limit is still valid when a PU defined as a CP or an IFL is shared across a set of LPARs. This feature is designed to provide and enforce a physical capacity limit as an absolute (versus a relative) limit. Physical capacity limit enforcement is not affected by changes to the logical or physical configuration of the system. This physical capacity limit can be specified to a fine level of granularity, to hundredths of units of a processor.
z13s servers provide improvements to the PR/SM HiperDispatch function. HiperDispatch provides work alignment to logical processors, and alignment of logical processors to physical processors. This alignment optimizes cache utilization, minimizes inter-CPC drawer communication, and optimizes z/OS work dispatching, with the result of increased throughput. For more information, see “HiperDispatch” on page 87.

z13s servers support the definition of up to 32 IBM HiperSockets. HiperSockets provide for memory-to-memory communication across LPARs without the need for any I/O adapters, and have virtual LAN (VLAN) capability. HiperSockets have been extended to bridge to the intraensemble data network (IEDN).
Increased flexibility with z/VM mode logical partition
z13s servers provide for the definition of a z/VM mode LPAR containing a mix of processor types. These types include CPs and specialty processors, such as IFLs, zIIPs, and ICFs. z/VM V6R2 and later support this capability, which increases flexibility and simplifies system management. In a single LPAR, z/VM can perform the following tasks:
򐂰 Manage guests that use Linux on z Systems on IFLs or CPs, and manage IBM z/VSE,
z/TPF, and z/OS guests on CPs.
򐂰 Run designated z/OS workloads, such as parts of IBM DB2 Distributed Relational
Database Architecture™ (DRDA®) processing and XML, on zIIPs.
Coupling Facility mode logical partition
Parallel Sysplex is the clustering technology used with z13s servers. To use this technology, a special LIC is used. This code is called CFCC. To activate the CFCC, a special logical partition must be defined. Only PUs characterized as CPs or Internal Coupling Facilities (ICFs) can be used for Coupling Facility (CF) partitions. For a production CF workload, use dedicated ICFs.
IBM z Appliance Container Infrastructure
z Appliance Container Infrastructure (zACI) is a new partition type that, along with an appliance installer, enables the secure deployment of software appliances. Typically, appliances can be implemented as firmware or software, depending on the environment where the appliance runs. To support the execution of software appliances, a base infrastructure is needed. This partition with its infrastructure is the z Appliance Container Infrastructure. zACI is designed to shorten the process of building and deploying appliances. zACI will be delivered as part of the base code on z13s and z13 (Driver 27) servers.
zACI provides a standardized framework for deploying products as software or firmware. An appliance is an integration of operating system, middleware, and software components that work autonomously and provide core services and infrastructure that focus on consumability and security.
zACI reduces the work that is needed to create and maintain a product, and enforces common functions that appliances need. The zACI framework provides a consistent set of utilities to implement these common functions such as first failure data capture (FFDC),
network setup, and appliance configuration. The design of zACI allows a simpler product (function) deployment model.
Several exploiters are planned for delivery through zACI. For instance, z/VSE Network Appliance provides network access for TCP/IP socket applications running on z/VSE in an LPAR. The z/VSE Network Appliance is an example of a product that is intended to be managed by using the zACI infrastructure. IBM zAware is also designed to be implemented by using the zACI partition, which replaces the zAware partition infrastructure used previously. For more information about zACI, see Appendix A, “IBM z Appliance Container Infrastructure” on page 449.
IBM z/VSE Network Appliance
The z/VSE Network Appliance builds on the z/VSE Linux Fast Path (LFP) function and provides TCP/IP network access without requiring a TCP/IP stack in z/VSE. The appliance uses zACI, which was introduced on z13 and z13s servers. Compared to a TCP/IP stack in z/VSE, this configuration can support higher TCP/IP traffic throughput while reducing the processing resource consumption in z/VSE. The z/VSE Network Appliance is an extension of the IBM z/VSE - z/VM IP Assist (VIA®) function introduced on z114 and z196 servers. VIA provides network access for TCP/IP socket applications running on z/VSE as a z/VM guest. With the new z/VSE Network Appliance, this configuration is available for z/VSE systems running in an LPAR. When available, the z/VSE Network Appliance is provided as a downloadable package. It can then be deployed with the appliance installer.
In summary, the VIA function is available for z/VSE systems running as z/VM guests. The z/VSE Network Appliance is available for z/VSE systems running without z/VM in LPARs.
Both provide network access for TCP/IP socket applications that use the Linux Fast Path. However, no TCP/IP stack is required on the z/VSE system, and no Linux on z Systems needs to be installed.
IBM zAware
IBM zAware is a feature that was introduced with the zEC12 that provides the next generation of system monitoring. IBM zAware is designed to offer a near real-time, continuous-learning diagnostic and monitoring capability. This function helps pinpoint and resolve potential problems quickly enough to minimize their effects on your business.
The ability to tolerate service disruptions is diminishing. In a continuously available environment, any disruption can have grave consequences. This negative effect is especially true when the disruption lasts days or even hours. But increased system complexity makes it more probable that errors occur, and those errors are also increasingly complex. Some incidents’ early symptoms go undetected for long periods of time and can become large problems. Systems often experience “soft failures” (sick but not dead), which are much more difficult to detect. IBM zAware is designed to help in those circumstances. For more information, see Appendix B, “IBM z Systems Advanced Workload Analysis Reporter (IBM zAware)” on page 453.
IBM zAware also now offers support for Linux running on z Systems and supports Linux message log analysis. IBM zAware enables processing of message streams including those without message IDs. It provides increased flexibility for analysis with the ability to group multiple systems for modeling and analysis purposes together, which is especially helpful for Linux workloads.
For use across the sysplex, IBM zAware now features an aggregated Sysplex view for z/OS and system views. Visualization and usability are enhanced with an enhanced Heat Map display, enhanced filtering and visualization capabilities, and improvements in time zone display.
Beginning with z13s servers, IBM zAware runs in a zACI mode logical partition. Either CPs or IFLs can be configured to the IBM zAware partition. This special partition is defined for the exclusive use of the IBM z Systems Advanced Workload Analysis Reporter (IBM zAware) offering. IBM zAware requires a special license, IBM z Advanced Workload Analysis Reporter (zAware).

For customers now considering IBM zAware, z13 GA2 introduces zACI, which supports IBM zAware. IBM zAware functions are the same whether it uses zACI or runs as a stand-alone zAware partition. Existing zAware instances are automatically converted to use the zACI partition. For new zACI instances that will use IBM zAware, use the zACI web interface to select IBM zAware.

1.2.5 Reliability, availability, and serviceability design

System reliability, availability, and serviceability (RAS) is an area of continuous IBM focus. The RAS objective is to reduce, or eliminate if possible, all sources of planned and unplanned outages, while providing adequate service information in case something happens. Adequate service information is required for determining the cause of an issue without the need to reproduce the context of an event. With a properly configured z13s server, further reduction of outages can be attained through improved, nondisruptive replace, repair, and upgrade functions for memory, drawers, and I/O adapters. In addition, z13s servers have extended nondisruptive capability to download and install Licensed Internal Code (LIC) updates.
Enhancements include removing pre-planning requirements with the fixed 40 GB HSA. Client-purchased memory is not used for traditional I/O configurations, and you no longer need to reserve capacity to avoid disruption when adding new features. With a fixed amount of 40 GB for the HSA, maximums are configured and an IPL is performed so that later insertion can be dynamic, which eliminates the need for a power-on reset of the server.
IBM z13s RAS features provide many high-availability and nondisruptive operational capabilities that differentiate the z Systems in the marketplace.
The ability to cluster multiple systems in a Parallel Sysplex takes the commercial strengths of the z/OS platform to higher levels of system management, scalable growth, and continuous availability.

1.3 z13s technical overview

This section briefly reviews the major elements of z13s servers:
򐂰 Models
򐂰 Model upgrade paths
򐂰 Frame
򐂰 CPC drawer
򐂰 I/O Connectivity
򐂰 I/O subsystem
򐂰 Parallel Sysplex Coupling and Server Time Protocol connectivity
򐂰 Special-purpose features:
– Cryptography
– Flash Express
– zEDC Express
򐂰 Reliability, availability, and serviceability

1.3.1 Models

z13s servers have a machine type of 2965. Two models are offered: N10 and N20. The model name indicates the maximum number of PUs available for purchase. A PU is the generic term for the IBM z/Architecture processor unit (processor core) on the SCM.

On z13s servers, some PUs are part of the system base. That is, they are not part of the PUs that can be purchased by clients. They are characterized by default as follows:
򐂰 System assist processor (SAP) that is used by the channel subsystem. There are three standard SAPs for the N20, and two for the N10.
򐂰 One integrated firmware processor (IFP), which is used in support of select features, such as zEDC and 10GbE RoCE.
򐂰 Two spare PUs for the N20 only that can transparently assume any characterization during a permanent failure of another PU.

The PUs that clients can purchase can assume any of the following characterizations:
򐂰 Central processor (CP) for general-purpose use.
򐂰 Integrated Facility for Linux (IFL) for the use of Linux on z Systems.
򐂰 z Systems Integrated Information Processor (zIIP). zAAPs are not available on z13s servers; the zAAP workload is run on zIIPs. One CP must be installed with or before the installation of any zIIPs.

zIIPs: At least one CP must be purchased with, or before, a zIIP can be purchased. Clients can purchase up to two zIIPs for each purchased CP (assigned or unassigned) on the system (2:1 ratio). However, for migrations from zBC12 with zAAPs, the ratio (CP:zIIP) can go up to 4:1.

򐂰 ICF is used by the CFCC.
򐂰 Extra SAPs are used by the channel subsystem.
򐂰 The z13s CPC drawer system design provides the capability to increase the capacity of
the system. You can add capacity by activating more CPs, IFLs, ICFs, or zIIPs on an existing CPC drawer. The second CPC drawer can be used to provide more memory, or one or more adapters to support a greater number of I/O features.
򐂰 In z13s servers, the addition of a CPC drawer or memory is disruptive.

1.3.2 Model upgrade paths

A z13s Model N10 can be upgraded to a z13s Model N20 hardware model. The upgrades to N20 are disruptive (that is, the system is not available during the upgrade). Any zBC12 or z114 model can be upgraded to a z13s N10 or N20 model, which is also disruptive. For more information see also 2.7.1, “Upgrades” on page 71.
Consideration: Note that the z13s servers are air cooled only.
z114 upgrade to z13s server
When a z114 is upgraded to a z13s server, the z114 Driver level must be at least 93. If a zBX is involved, the Driver 93 must be at bundle 27 or later. Family to family (z114 or zBC12) upgrades are frame rolls, but all z13s upgrade paths are disruptive.
zBC12 upgrade to z13s server
When a zBC12 is upgraded to a z13s server, the zBC12 must be at least at Driver level 15. If a zBX is involved, Driver 15 must be at Bundle 22 or later.

The following processes are not supported:
򐂰 Reverting to an earlier level within the z13 or z13s models
򐂰 Upgrades from IBM System z10® or earlier systems
򐂰 Attachment of a zBX Model 002 or Model 003 to a z13s server

zBX upgrade
The zBX Model 004 is available as an upgrade from an existing zBX Model 002 or Model 003. The upgrade decouples the zBX from its controlling CPC. With the addition of redundant SEs, it becomes a stand-alone node within an ensemble.

1.3.3 Frame

The z13s server has one frame that contains the following CPC components:
򐂰 Up to two CPC drawers, up to two PCIe I/O drawers, and up to one I/O drawer that hold I/O features and special purpose features
򐂰 Power supplies
򐂰 An optional internal battery feature (IBF)
򐂰 Air cooling
򐂰 Two System Control Hubs (SCHs) to interconnect the CPC components through Ethernet.
򐂰 Two new 1U, rack-mounted SEs, each with one keyboard, pointing device, and display mounted on a tray.

1.3.4 CPC drawer

Up to two CPC drawers (minimum one) can be installed in the z13s frame. Each CPC drawer houses the SCMs, memory, and fanouts. The CPC drawer supports up to 8 PCIe fanouts and 4 IFB fanouts for I/O and coupling connectivity.
In the N20 two CPC drawer model, CPC drawers are connected through cables that are also field-replaceable units (FRUs) and can be replaced concurrently, unlike the disruptive bus repair on the zBC12.
Important: Concurrent drawer repair, concurrent drawer add, and concurrent memory add are not available on z13s servers.
SCM technology
z13s servers are built on the superscalar microprocessor architecture of their predecessor, and provide several enhancements over the zBC12. Each CPC drawer is physically divided into two nodes. Each node has three SCMs: Two PU SCMs, and one storage control (SC) SCM.
A fully configured CPC drawer has four PU SCMs and two SC SCMs. The PU SCM has eight cores (six or seven active), which can be characterized as CPs, IFLs, ICFs, zIIPs, SAPs, or IFPs. Two CPC drawer sizes are offered: 10 cores (N10) and 20 cores (N20).
On the N20, the PU configuration includes two designated spare PUs. The N10 has no dedicated spares. Two standard SAPs are installed with the N10 Model and up to three SAPs for the N20 Model. In addition, one PU is used as an IFP and is not available for client use. The remaining PUs can be characterized as CPs, IFL processors, zIIPs, ICF processors, or extra SAPs. For more information, see 3.3, “CPC drawer design” on page 84 and 3.4, “Processor unit design” on page 88.
Processor features
The processor chip runs at 4.3 GHz. Depending on the model, either 13 PUs (N10) or 26 PUs (N20) are available. Each core on the PU chip includes an enhanced dedicated coprocessor for data compression and cryptographic functions, which is known as the Central Processor Assist for Cryptographic Function (CPACF). The no-charge feature code (FC) 3863 must be ordered to enable CPACF.

Hardware data compression can play a significant role in improving performance and saving costs over performing compression in software (thus consuming CPU cycles). In addition, the zEDC Express feature offers more performance and savings. It is designed to provide compression capabilities that complement those provided by the data compression on the coprocessor.

The micro-architecture of the core has been enhanced to increase parallelism and improve pipeline efficiency. The core has a new branch prediction and instruction fetch front end to support simultaneous multithreading in a single core and to improve the branch prediction throughput, a wider instruction decode (six instructions per cycle), and 10 arithmetic logical execution units that offer double instruction bandwidth over the zBC12.

Each core has two hardware decimal floating point units that are designed according to a standardized, open algorithm. Two on-core hardware decimal floating point units meet the requirements of today’s business and user applications, and provide greater floating point execution throughput with improved performance and precision.
In the unlikely case of a permanent core failure, each core can be individually replaced by one of the available spares. Core sparing is transparent to the operating system and applications. The N20 has two spares, but the N10 does not offer dedicated spares (the firmware can determine cores available for sparing).
Simultaneous multithreading
The micro-architecture of the core of z13s servers allows simultaneous execution of two threads (SMT) in the same zIIP or IFL core, dynamically sharing processor resources such as execution units and caches. This feature allows a more efficient utilization of the core and increased capacity. While one of the threads is waiting for a storage access (cache miss), the other thread that is running simultaneously in the core can use the shared resources rather than remain idle.
z/OS and z/VM control programs use SMT on z13s servers to optimize their workloads while providing repeatable metrics for capacity planning and chargeback.
Single instruction multiple data instruction set
The z13s instruction set architecture includes a subset of 139 new instructions for SIMD execution, which was added to improve efficiency of complex mathematical models and vector processing. These new instructions allow a larger number of operands to be processed with a single instruction. The SIMD instructions use the superscalar core to process operands in parallel, which enables more processor throughput.
Transactional Execution facility
The z13s server, like its predecessor zBC12, has a set of instructions that allows defining groups of instructions that are run atomically, that is, either all the results are committed or none are. The facility provides for faster and more scalable multi-threaded execution, and is known as
hardware transactional memory.
Out-of-order execution
As with its predecessor zBC12, a z13s server has an enhanced superscalar microprocessor with OOO execution to achieve faster throughput. With OOO, instructions might not run in the original program order, although results are presented in the original order. For example, OOO allows a few instructions to complete while another instruction is waiting. Up to six instructions can be decoded per system cycle, and up to 10 instructions can be in execution.
Concurrent processor unit conversions
z13s servers support concurrent conversion between various PU types, which provides the flexibility to meet the requirements of changing business environments. CPs, IFLs, zIIPs, ICFs, and optional SAPs can be converted to CPs, IFLs, zIIPs, ICFs, and optional SAPs.
Memory subsystem and topology
z13s servers use a new buffered dual inline memory module (DIMM) technology. For this purpose, IBM has developed a chip that controls communication with the PU, and drives address and control from DIMM to DIMM. z13s servers use the new DIMM technology, and carry forward is not supported. The memory subsystem supports 20 DIMMs per drawer and the DIMM capacities are 16 GB, 32 GB, 64 GB, and 128 GB.
Memory topology provides the following benefits:
򐂰 A RAIM for protection at the dynamic random access memory (DRAM), DIMM, and memory channel levels
򐂰 A maximum of 4 TB of user configurable memory with a maximum of 5.1 TB of physical memory (with a maximum of 4 TB configurable to a single LPAR)
򐂰 One memory port for each PU chip
򐂰 Increased bandwidth between memory and I/O
򐂰 Asymmetrical memory size and DRAM technology across CPC drawers
򐂰 Large memory pages (1 MB and 2 GB)
򐂰 Key storage
򐂰 Storage protection key array that is kept in physical memory
򐂰 Storage protection (memory) key that is also kept in every L2 and L3 cache directory entry
򐂰 A larger (40 GB) fixed-size HSA that eliminates having to plan for HSA
PCIe fanout hot-plug
The PCIe fanout provides the path for data between memory and the PCIe features through the PCIe 16 GBps bus and cables. The PCIe fanout is hot-pluggable. During an outage, a redundant I/O interconnect allows a PCIe fanout to be concurrently repaired without loss of access to its associated I/O domains. Up to four PCIe fanouts per drawer are available on the N10 and up to eight PCIe fanouts per drawer on the N20. The PCIe fanout can also be used for the ICA SR. If redundancy in coupling link connectivity is ensured, the PCIe fanout can be concurrently repaired.
Host channel adapter fanout hot-plug
The HCA fanout provides the path for data between memory and the I/O cards in an I/O drawer through 6 GBps IFB cables. The HCA fanout is hot-pluggable. During an outage, an HCA fanout can be concurrently repaired without the loss of access to its associated I/O features by using redundant I/O interconnect to the I/O drawer. Up to four HCA fanouts are available per CPC drawer.

1.3.5 I/O connectivity: PCIe and InfiniBand

z13s servers offer various improved I/O features and use technologies such as PCIe and InfiniBand. This section briefly reviews the most relevant I/O capabilities.
z13s servers take advantage of PCIe Generation 3 to implement the following features:
򐂰 PCIe Generation 3 (Gen3) fanouts that provide 16 GBps connections to the PCIe I/O features in the PCIe I/O drawers.
򐂰 PCIe Gen3 fanouts that provide 8 GBps coupling link connections through the new IBM ICA SR. Up to eight ICA SR fanouts and up to 16 ICA SR ports are supported on an IBM z13s server, enabling improved connectivity for short distance coupling.

z13s servers take advantage of InfiniBand to implement the following features:
򐂰 A 6 GBps I/O bus that includes the InfiniBand infrastructure (HCA2-C) for the I/O drawer for non-PCIe I/O features.
򐂰 Parallel Sysplex coupling links using IFB include 12x InfiniBand coupling links (HCA3-O) for local connections and 1x InfiniBand coupling links (HCA3-O LR) for extended distance connections between any two zEnterprise CPCs. The 12x IFB link (HCA3-O) has a bandwidth of 6 GBps and the HCA3-O LR 1X InfiniBand links have a bandwidth of 5 Gbps.

1.3.6 I/O subsystem

The z13s I/O subsystem is similar to that of the z13 server and includes a new PCIe Gen3 infrastructure. The I/O subsystem is supported by both a PCIe bus and an I/O bus similar to
that of zEC12. This infrastructure is designed to reduce processor usage and latency, and provide increased throughput.
z13s servers offer two I/O infrastructure elements for driving the I/O features: PCIe I/O drawers for PCIe features, and up to one I/O drawer for non-PCIe features carried forward only.
PCIe I/O drawer
The PCIe I/O drawer, together with the PCIe features, offers finer granularity and capacity over previous I/O infrastructures. It can be concurrently added and removed in the field, easing planning. Only PCIe cards (features) are supported, in any combination. Up to two PCIe I/O drawers can be installed on a z13s server with up to 32 PCIe I/O features per drawer (64 in total).
I/O drawer
On z13s servers, a maximum of one I/O drawer is supported only when carried forward on upgrades from zBC12 or z114 to z13s servers. For a z13s new order, the I/O drawer is not available.
Native PCIe and Integrated Firmware Processor
Native PCIe was introduced with the zEDC and RoCE Express features, which are managed differently from the traditional PCIe features. The device drivers for these adapters are available in the operating system.
The diagnostic tests for the adapter layer functions of the native PCIe features are managed by LIC that is designated as a resource group partition, which runs on the IFP. For availability, two resource groups are present and share the IFP.
During the ordering process of the native PCIe features, features of the same type are evenly spread across the two resource groups (RG1 and RG2) for availability and serviceability reasons. Resource groups are automatically activated when these features are present in the CPC.
I/O and special purpose features
z13s servers support the following PCIe features on a new build system, which can be installed only in the PCIe I/O drawers:
򐂰 FICON Express16S Short Wave (SX) and Long Wave (LX)
򐂰 FICON Express8S Short Wave (SX) and Long Wave (LX)
򐂰 OSA-Express5S 10 GbE Long Reach (LR) and Short Reach (SR)
򐂰 OSA Express5S GbE LX and SX
򐂰 OSA Express5S 1000BASE-T
򐂰 10GbE RoCE Express
򐂰 Crypto Express5S
򐂰 Flash Express (FC 0403)
򐂰 zEDC Express
When carried forward on an upgrade, z13s servers also support the following features in the PCIe I/O drawers:
򐂰 FICON Express8S SX and LX
򐂰 OSA-Express5S (all)
򐂰 OSA-Express4S (all)
򐂰 10GbE RoCE Express
򐂰 Flash Express (FC 0402)
򐂰 zEDC Express
Also, when carried forward on an upgrade, the z13s servers support one I/O drawer on which the FICON Express8 SX and LX (10 km) feature can be installed.
In addition, InfiniBand coupling links HCA3-O and HCA3-O LR, which attach directly to the CPC drawers, are supported.
Tip: FICON Express8 and 8S cards should only be ordered or carried forward to support attachments to 2 Gbps devices and for older I/O optics not supported for 16 Gbps capable features.
FICON channels
Up to 64 features in any combination of FICON Express16S or FICON Express8S are supported in the PCIe I/O drawers. The FICON Express8S features support link data rates of 2, 4, or 8 Gbps, and the FICON Express16S features support 4, 8, or 16 Gbps.
Up to eight features with up to 32 FICON Express8 channels (carry forward) are supported in one I/O drawer (also carry forward). The FICON Express8 features support link data rates of 2, 4, or 8 Gbps.
The z13s FICON features support the following protocols:
򐂰 FICON (FC) and High Performance FICON for z Systems (zHPF). zHPF offers improved performance for data access, which is of special importance to OLTP applications.
򐂰 FICON channel-to-channel (CTC).
򐂰 Fibre Channel Protocol (FCP).

FICON also offers the following capabilities:
򐂰 Modified Indirect Data Address Word (MIDAW) facility that provides more capacity over native FICON channels for programs that process data sets that use striping and compression, such as DB2, VSAM, partitioned data set extended (PDSE), hierarchical file system (HFS), and z/OS file system (zFS). It does so by reducing channel, director, and control unit processor usage.
򐂰 Enhanced problem determination, analysis, and manageability of the storage area network (SAN) by providing registration information to the fabric name server for both FICON and FCP.
Read Diagnostic Parameter
A new command called Read Diagnostic Parameter (RDP) allows z Systems to obtain extra diagnostics from the Small Form Factor Pluggable (SFP) optics located throughout the SAN fabric. RDP is designed to help improve the accuracy of identifying a failed/failing component in the SAN fabric.
Open Systems Adapter
z13s servers allow any mix of the supported Open Systems Adapter (OSA) Ethernet features. Up to 48 OSA-Express5S or OSA-Express4S features, with a maximum of 96 ports, are supported. OSA-Express5S and OSA-Express4S features are plugged into the PCIe I/O drawer.
The maximum number of combined OSA-Express5S and OSA-Express4S features cannot exceed 48.
The N10 supports 64 OSA ports and the N20 supports 96 OSA ports.
OSM and OSX channel path identifier types
The z13s supports OSA-Express5S and OSA-Express4S features and the channel-path identifier (CHPID) types OSA-Express for Unified Resource Manager (OSM) and OSA-Express for zBX (OSX) for the following connections:
򐂰 OSA-Express for Unified Resource Manager (OSM)
Connectivity to the intranode management network (INMN) Top of Rack (ToR) switch in the zBX is not supported on z13s servers. When the zBX model 002 or 003 is upgraded to a model 004, it becomes an independent node that can be configured to work with the ensemble. The zBX model 004 is equipped with two 1U rack-mounted SEs to manage and control itself, and is independent of the CPC SEs.
򐂰 OSA-Express for zBX (OSX)
Connectivity to the IEDN provides a data connection from the z13s to the zBX. OSX can use OSA-Express5S 10 GbE (preferred) and the OSA-Express4S 10 GbE feature.
OSA-Express5S, OSA-Express4S feature highlights
z13s servers support five different types each of OSA-Express5S and OSA-Express4S features. OSA-Express5S features are a technology refresh of the OSA-Express4S features:
򐂰 OSA-Express5S 10 GbE Long Reach (LR)
򐂰 OSA-Express5S 10 GbE Short Reach (SR)
򐂰 OSA-Express5S GbE Long Wave (LX)
򐂰 OSA-Express5S GbE Short Wave (SX)
򐂰 OSA-Express5S 1000BASE-T Ethernet
򐂰 OSA-Express4S 10 GbE Long Reach
򐂰 OSA-Express4S 10 GbE Short Reach
򐂰 OSA-Express4S GbE Long Wave
򐂰 OSA-Express4S GbE Short Wave
򐂰 OSA-Express4S 1000BASE-T Ethernet
OSA-Express features provide important benefits for TCP/IP traffic, which are reduced latency and improved throughput for standard and jumbo frames. Performance enhancements are the result of the data router function being present in all OSA-Express features.
For functions that were previously performed in firmware, the OSA Express5S and OSA-Express4S now perform those functions in hardware. Additional logic in the IBM application-specific integrated circuit (ASIC) that is included with the feature handles packet construction, inspection, and routing. This process allows packets to flow between host memory and the LAN at line speed without firmware intervention.
With the data router, the store and forward technique in direct memory access (DMA) is no longer used. The data router enables a direct host memory-to-LAN flow. This configuration avoids a hop, and is designed to reduce latency and increase throughput for standard frames (1492 bytes) and jumbo frames (8992 bytes).
HiperSockets
The HiperSockets function is also known as internal queued direct input/output (internal QDIO or iQDIO). It is an integrated function of the z13s servers that provides users with attachments to up to 32 high-speed virtual LANs with minimal system and network processor usage.

HiperSockets can be customized to accommodate varying traffic sizes. Because the HiperSockets function does not use an external network, it can free system and network resources, eliminating equipment costs while improving availability and performance.
For communications between LPARs in the same z13s server, HiperSockets eliminates the need to use I/O subsystem features and to traverse an external network. Connection to HiperSockets offers significant value in server consolidation by connecting many virtual servers. It can be used instead of certain coupling link configurations in a Parallel Sysplex.
HiperSockets is extended to allow integration with IEDN, which extends the reach of the HiperSockets network outside the CPC to the entire ensemble, and displays it as a single Layer 2 network.
10GbE RoCE Express
The 10 Gigabit Ethernet (10GbE) RoCE Express feature is an RDMA-capable network interface card. The 10GbE RoCE Express feature is supported on z13, z13s, zEC12, and zBC12 servers, and is used in the PCIe I/O drawer. Each feature has one PCIe adapter. A maximum of 16 features can be installed.
The 10GbE RoCE Express feature uses a short reach (SR) laser as the optical transceiver, and supports the use of a multimode fiber optic cable that terminates with an LC Duplex connector. Both a point-to-point connection and a switched connection with an enterprise-class 10 GbE switch are supported.
Support is provided by z/OS, which supports one port per feature, dedicated to one partition in zEC12 and zBC12. With z13s servers, both ports, shared by up to 31 partitions, are supported.
For more information, see Appendix D, “Shared Memory Communications” on page 475.
Shared Memory Communications - Direct Memory Access
In addition to supporting SMC-R, z13s servers support a new protocol called SMC-D over ISM. Unlike SMC-R, SMC-D has no dependency on the RoCE Express feature. SMC-D provides low latency communications within a CPC by using shared memory.
SMC-R, by using RDMA, can vastly improve transaction rates, throughput, and CPU consumption for unchanged TCP/IP applications. Direct memory access (DMA) can provide comparable benefits on an internal LAN.
z Systems now supports a new ISM virtual PCIe (vPCIe) device to enable optimized cross-LPAR TCP communications using SMC-D. SMC-D maintains the socket-API transparency aspect of SMC-R so that applications that use TCP/IP communications can benefit immediately without requiring any application software or IP topology changes. For more information, see D.3, “Shared Memory Communications - Direct Memory Access” on page 489.
Note: SMC-D does not support Coupling facilities or access to the IEDN network. In addition, SMC protocols do not currently support multiple IP subnets.

1.3.7 Parallel Sysplex Coupling and Server Time Protocol connectivity

Support for Parallel Sysplex includes the Coupling Facility Control Code and coupling links.
Coupling links support
The z13s CPC drawer supports up to 8 PCIe Gen3 fanouts and up to four IFB fanouts for Parallel Sysplex coupling links. A z13s Model N20 with optional second CPC drawer supports a total of 16 PCIe Gen3 fanouts and 8 IFB fanout slots (four per CPC drawer). Coupling
connectivity in support of Parallel Sysplex environments is provided on z13s servers by the following features:
򐂰 Integrated Coupling Adapter Short Reach - ICA SR (PCIe Coupling Links): The ICA SR
PCIe fanout has two 8 GBps ports that use short reach optics (up to 150m between CPCs). The ICA SR can only be used for coupling connectivity between IBM z13 or IBM z13s servers, and the ICA SR can only connect to another ICA SR. Generally, order ICA SR on the IBM z13s processors used in a Parallel Sysplex to help ensure coupling connectivity with future processor generations.
򐂰 HCA3-O: 12x InfiniBand coupling links offering up to 6 GBps of bandwidth between z13,
zBC12, z196, and z114 systems, for a distance of up to 150 m (492 feet). These links have improved service times over the HCA2-O links that were used on prior z Systems families.
򐂰 HCA3-O LR: 1x InfiniBand (up to 5 Gbps connection bandwidth) between z13, zEC12,
zBC12, z196, and z114 systems for a distance of up to 10 km (6.2 miles). The HCA3-O LR (1xIFB) type has twice the number of links per fanout card as compared to type HCA2-O LR (1xIFB) that was used in the previous z Systems generations.
򐂰 Internal Coupling Channels (ICs): Operate at memory speed.
Attention: IBM z13s servers do not support ISC-3 connectivity.
CFCC Level 21
CFCC level 21 is delivered on z13s servers with driver level 27. CFCC Level 21 introduces the following enhancements:
򐂰 Support for up to 20 ICF processors per z Systems CPC:
– The maximum number of logical processors in a Coupling Facility Partition remains 16
򐂰 Large memory support:
– Improves availability/scalability for larger CF cache structures and data sharing
performance with larger DB2 group buffer pools (GBPs).
– This support removes inhibitors to using large CF structures, enabling the use of Large
Memory to scale to larger DB2 local buffer pools (LBPs) and GBPs in data sharing environments.
– The CF structure size remains at a maximum of 1 TB.
򐂰 See Table 1-1 for the maximum number of Coupling Links on N20 models.
Table 1-1   Coupling Links on N20

Coupling Links    Features    Ports    Speed
HCA3-O LR         8           32       5 Gbps
HCA3-O 12X        8           16       6 GBps
ICA SR            8           16       8 GBps
z13s systems with CFCC Level 21 require z/OS V1R13 or later, and z/VM V6R2 or later for virtual guest coupling.
To support an upgrade from one CFCC level to the next, different levels of CFCC can coexist in the same sysplex while the coupling facility LPARs are running on different servers. CF LPARs that run on the same server share the CFCC level.
A CF running on a z13s server (CFCC level 21) can coexist in a sysplex with CFCC levels 17 and 19. Review the CF LPAR size by using the CFSizer tool:
http://www.ibm.com/systems/z/cfsizer
Server Time Protocol facility
If you require time synchronization across multiple IBM z Systems servers, you can implement Server Time Protocol (STP). STP is a server-wide facility that is implemented in the LIC of z Systems CPCs (including CPCs running as stand-alone coupling facilities). STP presents a single view of time to PR/SM and provides the capability for multiple servers to maintain time synchronization with each other.
Any z Systems CPC can be enabled for STP by installing the STP feature. Each server that must be configured in a Coordinated Timing Network (CTN) must be STP-enabled.
The STP feature is the supported method for maintaining time synchronization between z Systems images and coupling facilities. The STP design uses the CTN concept, which is a collection of servers and coupling facilities that are time-synchronized to a time value called Coordinated Server Time (CST).
Network Time Protocol (NTP) client support is available to the STP code on the z13, z13s, zEC12, zBC12, z196, and z114 servers. With this function, the z13, z13s, zEC12, zBC12, z196, and z114 servers can be configured to use an NTP server as an external time source (ETS).
This implementation answers the need for a single time source across the heterogeneous platforms in the enterprise. An NTP server becomes the single time source for the z13, z13s, zEC12, zBC12, z196, and z114 servers, as well as other servers that have NTP clients, such as UNIX and Microsoft Windows systems.
The time accuracy of an STP-only CTN is improved by using, as the ETS device, an NTP server with the pulse per second (PPS) output signal. This type of ETS is available from various vendors that offer network timing solutions.
Improved security can be obtained by providing NTP server support on the HMC for the SE. The HMC is normally attached to the private dedicated LAN for z Systems maintenance and support. For z13s and zBC12 servers, authentication support is added to the HMC NTP communication with NTP time servers. In addition, TLS/SSL with Certificate Authentication is added to the HMC/SE support to provide a secure method for connecting clients to z Systems.
Attention: A z13s server cannot be connected to a Sysplex Timer and cannot join a Mixed CTN.
If a current configuration consists of a Mixed CTN, the configuration must be changed to an STP-only CTN before z13s integration. z13s servers can coexist only with z Systems CPCs that do not have the External Time Reference (ETR) port capability.
Support has been added to enable STP communications to occur by using the ICA SR (new for z13s servers).
Enhanced Console Assisted Recovery
Console Assisted Recovery (CAR) support is designed to help a Backup Time Server (BTS) determine whether the Primary Time Server is still up and running if Coupling traffic ceases. The CAR process is initiated by the BTS if there is no communication between the Primary and Backup Time Servers. The BTS queries the state of the Primary Time Server (PTS)/Central Time Server (CTS) SE by using the SE and HMC of the BTS. If the PTS is down, the BTS initiates takeover.
With the new Enhanced Console Assisted Recovery (ECAR), the process of BTS takeover is faster. When the PTS encounters a check-stop condition, the CEC informs the SE and HMC of the condition. The PTS SE recognizes the pending check-stop condition, and an ECAR request is sent directly from the HMC to the BTS SE to start the takeover. The new ECAR support is faster than the original support because there is almost no delay between the system check-stop and the start of CAR processing. ECAR is only available on z13 GA2 and z13s servers. In a mixed environment with previous generation machines, you should define a z13 or z13s server as the PTS and CTS.

1.3.8 Special-purpose features

This section overviews several features that, although installed in the PCIe I/O drawer or in the I/O drawer, provide specialized functions without performing I/O operations. No data is moved between the CPC and externally attached devices.
Cryptography
Integrated cryptographic features provide industry-leading cryptographic performance and functions. The cryptographic solution that is implemented in z Systems has received the highest standardized security certification (FIPS 140-2⁹ Level 4). In addition to the integrated cryptographic features, the Crypto Express5S feature (the only crypto card that is supported on z13s servers) allows adding or moving crypto coprocessors to LPARs without pre-planning.
z13s servers implement PKCS#11, one of the industry-accepted standards that are called Public Key Cryptographic Standards (PKCS), which are provided by RSA Laboratories of RSA, the security division of EMC Corporation. It also implements the IBM Common Cryptographic Architecture (CCA) in its cryptographic features.
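As a rough illustration of the PKCS#11 programming model that EP11 follows, the following sketch loads a PKCS#11 provider library and lists the tokens it exposes. It assumes the third-party PyKCS11 Python package; the openCryptoki module path is only an assumed example, and the actual provider and path depend on the installation.

# Minimal PKCS#11 sketch: load a provider library and list its tokens.
# PyKCS11 is a third-party package ("pip install PyKCS11"); the module path
# below (openCryptoki) is an assumed example, not a fixed requirement.
import PyKCS11

pkcs11 = PyKCS11.PyKCS11Lib()
pkcs11.load("/usr/lib64/opencryptoki/libopencryptoki.so")   # assumed path

for slot in pkcs11.getSlotList():
    token = pkcs11.getTokenInfo(slot)
    print("Slot", slot, "token label:", token.label.strip())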
CP Assist for Cryptographic Function
The CP Assist for Cryptographic Function (CPACF) offers the full complement of the Advanced Encryption Standard (AES) algorithm and Secure Hash Algorithm (SHA) with the Data Encryption Standard (DES) algorithm. Support for CPACF is available through a group of instructions that are known as the Message-Security Assist (MSA). z/OS Integrated Cryptographic Service Facility (ICSF) callable services, and the z90crypt device driver running on Linux on z Systems also start CPACF functions. ICSF is a base element of z/OS. It uses the available cryptographic functions, CPACF, or PCIe cryptographic features to balance the workload and help address the bandwidth requirements of your applications.
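The following minimal sketch shows the kind of clear-key hashing that CPACF accelerates, using Python's standard hashlib module purely for illustration. Whether the work is actually offloaded to CPACF depends on the operating system and cryptographic libraries in use (for example, ICSF on z/OS, or the crypto stack on Linux on z Systems), not on this module.

# Illustration only: a clear-key SHA-256 digest, the type of operation that
# the CPACF Message-Security Assist instructions accelerate. Python's hashlib
# by itself does not guarantee CPACF offload; that depends on the platform's
# crypto stack.
import hashlib

data = b"sample batch record " * 1000
print(hashlib.sha256(data).hexdigest())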
CPACF must be explicitly enabled by using a no-charge enablement feature (FC 3863), except for the SHAs, which are included and enabled with each server.
The enhancements to CPACF are exclusive to the z Systems servers, and are supported by z/OS, z/VM, z/VSE, z/TPF, and Linux on z Systems.
⁹ Federal Information Processing Standard (FIPS) 140-2 Security Requirements for Cryptographic Modules
Configurable Crypto Express5S feature
Crypto Express5S represents the newest generation of cryptographic features. It is designed to complement the cryptographic capabilities of the CPACF. It is an optional feature of the z13 and z13s server generation. The Crypto Express5S feature is designed to provide granularity for increased flexibility with one PCIe adapter per feature. For availability reasons, a minimum of two features are required.
With z13 and z13s servers, a cryptographic coprocessor can be shared across more than 16 domains, up to the maximum number of LPARs on the system (up to 85 domains for z13 servers and 40 domains for z13s servers).
The Crypto Express5S is a state-of-the-art, tamper-sensing, and tamper-responding programmable cryptographic feature that provides a secure cryptographic environment. Each adapter contains a tamper-resistant hardware security module (HSM). The HSM can be configured as a Secure IBM CCA coprocessor, as a Secure IBM Enterprise PKCS #11 (EP11) coprocessor, or as an accelerator:
򐂰 A Secure IBM CCA coprocessor is for secure key encrypted transactions that use CCA callable services (default).
򐂰 A Secure IBM Enterprise PKCS #11 (EP11) coprocessor implements an industry standardized set of services that adhere to the PKCS #11 specification v2.20 and more recent amendments. This new cryptographic coprocessor mode introduced the PKCS #11 secure key function.
򐂰 An accelerator for public key and private key cryptographic operations is used with Secure Sockets Layer/Transport Layer Security (SSL/TLS) acceleration.
The Crypto Express5S is designed to meet these cryptographic standards, among others:
– FIPS 140-2 Level 4
– ANSI X9.97
– Payment Card Industry (PCI) HSM
– Deutsche Kreditwirtschaft (DK)
FIPS 140-2 certification is supported only when Crypto Express5S is configured as a CCA or an EP11 coprocessor.
Crypto Express5S supports a number of ciphers and standards that include those in this section. For more information about cryptographic algorithms and standards, see Chapter 6, “Cryptography” on page 199.
TKE workstation and support for smart card readers
The TKE feature is an integrated solution that is composed of workstation firmware, hardware, and software to manage cryptographic keys in a secure environment. The TKE is either network-connected or isolated, in which case smart cards are used.
The Trusted Key Entry (TKE) workstation and the most recent TKE 8.1 LIC are optional features on the z13s. The TKE 8.1 requires the crypto adapter FC 4767. You can use TKE 8.0 to collect data from previous generations of Cryptographic modules and apply the data to Crypto Express5S coprocessors.
The TKE workstation offers a security-rich solution for basic local and remote key management. It provides to authorized personnel a method for key identification, exchange, separation, update, and backup, and a secure hardware-based key loading mechanism for
operational and master keys. TKE also provides secure management of host cryptographic module and host capabilities.
Support for an optional smart card reader that is attached to the TKE workstation allows the use of smart cards that contain an embedded microprocessor and associated memory for data storage. Access to and the use of confidential data on the smart cards are protected by a user-defined personal identification number (PIN). A FIPS-certified smart card, part number 00JA710, is now included with the smart card reader and the additional smart cards optional features.
When Crypto Express5S is configured as a Secure IBM Enterprise PKCS #11 (EP11) coprocessor, the TKE workstation is required to manage the Crypto Express5S feature. The TKE is recommended for CCA mode processing as well. If the smart card reader feature is installed in the TKE workstation, the new smart card part 00JA710 is required for EP11 mode. If EP11 is to be defined, smart cards that are used must have FIPS certification.
For more information about the Cryptographic features, see Chapter 6, “Cryptography” on page 199. Also, see the Web Deliverables download site for the most current ICSF updates available (currently HCR77B0 Web Deliverable 14 and HCR77B1 Web Deliverable 15):
http://www.ibm.com/systems/z/os/zos/tools/downloads/
Flash Express
The Flash Express optional feature is intended to provide performance improvements and better availability for critical business workloads that cannot afford any impact to service levels. Flash Express is easy to configure, and provides rapid time to value.
Flash Express implements SCM in a PCIe card form factor. Each Flash Express card implements an internal NAND Flash SSD, and has a capacity of 1.4 TB of usable storage. Cards are installed in pairs, which provide mirrored data to ensure a high level of availability and redundancy. A maximum of four pairs of cards (four features) can be installed on a z13s server, for a maximum capacity of 5.6 TB of storage.
The Flash Express feature, recently enhanced, is designed to allow each LPAR to be configured with its own SCM address space. It is used for paging and enables the use of pageable 1 MB pages.
Encryption is included to improve data security. Data security is ensured through a unique key that is stored on the SE hard disk drive (HDD). It is mirrored for redundancy. Data on the Flash Express feature is protected with this key, and is usable only on the system with the key that encrypted it. The Secure Keystore is implemented by using a smart card that is installed in the SE. The smart card (one pair, one for each SE) contains the following items:
򐂰 A unique key that is personalized for each system 򐂰 A small cryptographic engine that can run a limited set of security functions within the
smart card
Flash Express is supported by z/OS V1R13 (or later) for handling z/OS paging activity, and has support for 1 MB pageable pages and SVC memory dumps. Support was added to the CFCC to use Flash Express as an overflow device for shared queue data to provide emergency capacity to handle WebSphere MQ shared queue buildups during abnormal situations. Abnormal situations include when “putters” are putting to the shared queue, but “getters” cannot keep up and are not getting from the shared queue.
Flash memory is assigned to a CF image through HMC windows. The coupling facility resource management (CFRM) policy definition allows the correct amount of SCM to be used by a particular structure, on a structure-by-structure basis. Additionally, Linux (RHEL and
SUSE) can use Flash Express for temporary storage. Recent enhancements of Flash Express include 2 GB page support and dynamic reconfiguration for Flash Express.
For more information, see Appendix H, “Flash Express” on page 529.
zEDC Express
zEDC Express, an optional feature that is available to z13, z13s, zEC12, and zBC12 servers, provides hardware-based acceleration for data compression and decompression with lower CPU consumption than the previous compression technology on z Systems.
Use of the zEDC Express feature by the z/OS V2R1 or later zEnterprise Data Compression acceleration capability delivers an integrated solution to help reduce CPU consumption, optimize performance of compression-related tasks, and enable more efficient use of storage resources. It also provides a lower cost of computing and helps to optimize the cross-platform exchange of data.
One to eight features can be installed on the system. There is one PCIe adapter/compression coprocessor per feature, which implements compression as defined by RFC1951 (DEFLATE).
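For illustration of the DEFLATE (RFC 1951) format that the coprocessor implements, the following sketch uses Python's standard zlib module. On z/OS, zEDC is used transparently through enabled services such as a zEDC-aware zlib, so this snippet only demonstrates the data format, not the hardware path.

# Demonstrates the RFC 1951 (DEFLATE) format in software. zEDC Express
# produces and consumes the same format in hardware; this snippet does not
# use the coprocessor.
import zlib

original = b"repetitive record data, " * 400

compressor = zlib.compressobj(6, zlib.DEFLATED, -15)   # wbits=-15: raw DEFLATE stream
deflated = compressor.compress(original) + compressor.flush()

print("original:", len(original), "bytes  deflated:", len(deflated), "bytes")
assert zlib.decompress(deflated, -15) == original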
A zEDC Express feature can be shared by up to 15 LPARs.
See the IBM System z Batch Network Analyzer 1.4.2 tool, which reports on potential zEDC usage for QSAM/BSAM data sets:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5132
For more information, see Appendix J, “IBM zEnterprise Data Compression Express” on page 551.

1.3.9 Reliability, availability, and serviceability

The z13 and z13s RAS strategy employs a building-block approach, which is developed to meet the client's stringent requirements for achieving continuous reliable operation. Those building blocks are error prevention, error detection, recovery, problem determination, service structure, change management, measurement, and analysis.
The initial focus is on preventing failures from occurring. This goal is accomplished by using Hi-Rel (highest reliability) components that use screening, sorting, burn-in, and run-in, and by taking advantage of technology integration. For LIC and hardware design, failures are reduced through rigorous design rules; design walk-through; peer reviews; element, subsystem, and system simulation; and extensive engineering and manufacturing testing.
The RAS strategy is focused on a recovery design to mask errors and make them transparent to client operations. An extensive hardware recovery design is implemented to detect and correct memory array faults. In cases where transparency cannot be achieved, you can restart the server with the maximum capacity possible.
z13s servers include the following RAS improvements, among others:
򐂰 Cables for SMP fabric
򐂰 CP and SC SCMs are FRUs
򐂰 Point of load (POL) replaces the Voltage Transformation Module, and is a FRU
򐂰 z13s servers have both raised floor and non-raised-floor options
򐂰 The redundant oscillators are isolated on their own backplane
򐂰 The CPC drawer is a FRU (empty)
򐂰 A built-in Time Domain Reflectometry (TDR) isolates failures
򐂰 CPC drawer level degrade
򐂰 FICON (better recovery on fiber)
򐂰 N+1 SCH and power supplies
򐂰 N+1 SEs
򐂰 ASHRAE A3 support
For more information, see Chapter 9, “Reliability, availability, and serviceability” on page 355.

1.4 Hardware Management Consoles and Support Elements

The HMCs and SEs are appliances that together provide platform management for z Systems.
The HMC is a workstation designed to provide a single point of control for managing local or remote hardware elements. In addition to the HMC console tower, a new 1U rack-mounted HMC with 19-inch rack-mounted components can be placed in customer-supplied 19-inch racks.
For z13s servers, a new driver level 27 is required. The HMC (V2.13.1) plus Microcode Change Levels (MCLs) and the Support Element (V2.13.1) are required to be installed.
HMCs and SEs also provide platform management for zBX Model 004 and for the ensemble nodes when the z Systems CPCs and the zBX Model 004 nodes are members of an ensemble. In an ensemble, the HMC is used to manage, monitor, and operate one or more z Systems CPCs and their associated LPARs, and the zBX Model 004 machines. Also, when the z Systems and a zBX Model 004 are members of an ensemble, the HMC¹⁰ has a global (ensemble) management scope, as compared to the SEs on the zBX Model 004 and on the CPCs, which have local (node) management responsibility.
When tasks are performed on the HMC, the commands are sent to one or more CPC SEs or zBX Model 004 SEs, which then issue commands to their CPCs and zBXs. To provide high availability, an ensemble configuration requires a pair of HMCs: A primary and an alternate.
For more information, see Chapter 11, “Hardware Management Console and Support Elements” on page 387.

1.5 IBM z BladeCenter Extension (zBX) Model 004

The IBM z BladeCenter Extension (zBX) Model 004 improves infrastructure reliability by extending the mainframe systems management and service across a set of heterogeneous compute elements in an ensemble.
The zBX Model 004 is only available as an optional upgrade from a zBX Model 003 or a zBX Model 002, through MES, in an ensemble that contains at least one z13s CPC and consists of these components:
򐂰 Two internal 1U rack-mounted Support Elements providing zBX monitoring and control functions.
򐂰 Up to four IBM 42U Enterprise racks.
򐂰 Up to eight BladeCenter chassis with up to 14 blades each, with up to two chassis per rack.
¹⁰ From Version 2.11. For more information, see 11.6, “HMC in an ensemble” on page 428.
򐂰 Up to 112¹¹ blades.
򐂰 INMN ToR switches. On the zBX Model 004, the new local zBX Support Elements directly connect to the INMN within the zBX for management purposes. Because zBX Model 004 is an independent node, there is no INMN connection to any z Systems CPC.
򐂰 IEDN ToR switches. The IEDN is used for data paths between the zEnterprise ensemble members and the zBX Model 004, and also for customer data access. The IEDN point-to-point connections use media access control (MAC) addresses, not IP addresses (Layer 2 connectivity).
򐂰 8 Gbps Fibre Channel switch modules for connectivity to customer-supplied storage (through SAN).
򐂰 Advanced management modules (AMMs) for monitoring and management functions for all the components in the BladeCenter.
򐂰 Power Distribution Units (PDUs) and cooling fans.
򐂰 Optional acoustic rear door or optional rear door heat exchanger.
The zBX is configured with redundant hardware infrastructure to provide qualities of service similar to those of z Systems, such as the capability for concurrent upgrades and repairs.
Geographically Dispersed Parallel Sysplex Peer-to-Peer Remote Copy (GDPS/PPRC) and GDPS Global Mirror (GDPS/GM) support zBX hardware components, providing workload failover for automated multi-site recovery. These capabilities facilitate the management of planned and unplanned outages.

1.5.1 Blades

Two types of blades can be installed and operated in the IBM z BladeCenter Extension (zBX):
򐂰 Optimizer Blades: IBM WebSphere DataPower® Integration Appliance XI50 for zBX blades
򐂰 IBM Blades:
– A selected subset of IBM POWER7® blades
– A selected subset of IBM BladeCenter HX5 blades
򐂰 IBM BladeCenter HX5 blades are virtualized by using an integrated hypervisor.
򐂰 IBM POWER7 blades are virtualized by PowerVM® Enterprise Edition, and the virtual servers run the IBM AIX® operating system.
Enablement for blades is specified with an entitlement feature code that is configured on the ensemble HMC.

1.5.2 IBM WebSphere DataPower Integration Appliance XI50 for zEnterprise

The IBM WebSphere DataPower Integration Appliance XI50 for zEnterprise (DataPower XI50z) is a multifunctional appliance that can help provide multiple levels of XML optimization.
This configuration streamlines and secures valuable service-oriented architecture (SOA) applications. It also provides drop-in integration for heterogeneous environments by enabling core enterprise service bus (ESB) functions, including routing, bridging, transformation, and event handling. It can help to simplify, govern, and enhance the network security for XML and web services.
¹¹ The maximum number of blades varies according to the blade type and blade function.
When the DataPower XI50z is installed in the zBX, the Unified Resource Manager provides integrated management for the appliance. This configuration simplifies control and operations, including change management, energy monitoring, problem detection, problem reporting, and dispatching of an IBM service support representative (IBM SSR), as needed.
Important: The zBX Model 004 uses the blades carried forward in an upgrade from a previous model. Customers can install more entitlements, up to the full zBX installed blade capacity, if the existing BladeCenter chassis have available slots. After the entitlements are acquired from IBM, clients must procure and purchase the additional zBX supported blades to be added, up to the full installed entitlement LIC record, from another source or vendor.

1.6 IBM z Unified Resource Manager

The IBM z Unified Resource Manager is the integrated management software that runs on the Ensemble HMC and on the zBX model 004 SEs. The Unified Resource Manager consists of six management areas (for more information, see 11.6.1, “Unified Resource Manager” on page 428):
򐂰 Operational controls (Operations)
Includes extensive operational controls for various management functions.
򐂰 Virtual server lifecycle management (Virtual servers)
Enables directed and dynamic virtual server provisioning across hypervisors from a single point of control.
򐂰 Hypervisor management (Hypervisors)
Enables the management of hypervisors and support for application deployment.
򐂰 Energy management (Energy)
Provides energy monitoring and management capabilities that can be used to better understand the power and cooling demands of the zEnterprise System.
򐂰 Network management (Networks)
Creates and manages virtual networks, including access control, which allows virtual servers to be connected.
򐂰 Workload Awareness and platform performance management (Performance)
Manages CPU resource across virtual servers that are hosted in the same hypervisor instance to achieve workload performance policy objectives.
The Unified Resource Manager provides energy monitoring and management, goal-oriented policy management, increased security, virtual networking, and storage configuration management for the physical and logical resources of an ensemble.

1.7 IBM Dynamic Partition Manager

A new administrative mode for the z Systems CPC is being introduced for Linux-only CPCs with FCP-attached storage. The IBM DPM provides simplified z Systems hardware and virtual infrastructure management, including integrated dynamic I/O management, for users that intend to run KVM for IBM z Systems as a hypervisor or Linux on z Systems running in LPAR mode.
DPM provides simplified, consumable, and enhanced partition lifecycle and dynamic I/O management capabilities by using the Hardware Management Console and is designed to perform these tasks:
򐂰 Create and provision an environment:
– Create new partitions
– Assignment of processors and memory
– Configuration of I/O adapters (Network, FCP Storage, Crypto, and Accelerators)
򐂰 Manage the environment: Modify system resources without disrupting running workloads.
򐂰 Monitor and troubleshoot the environment: Source identification of system failures, conditions, states, or events that can lead to workload degradation.
򐂰 A CPC can be in either the DPM mode or the standard PR/SM mode.
򐂰 DPM mode is enabled before the CPC power-on reset.
򐂰 Operating the CPC in DPM mode requires two OSA-Express 1000BASE-T Ethernet ports (recommended that these be on separate features) for primary and backup SE connectivity.

1.8 Operating systems and software

IBM z13s servers are supported by a large set of software, including independent software vendor (ISV) applications. This section lists only the supported operating systems. Use of various features might require the latest releases. For more information, see Chapter 7, “Software support” on page 229.

1.8.1 Supported operating systems

The following operating systems are supported for z13s servers:
򐂰 z/OS Version 2 Release 2 with program temporary fixes (PTFs)
򐂰 z/OS Version 2 Release 1 with PTFs
򐂰 z/OS Version 1 Release 13 with PTFs
򐂰 z/OS V1R12 with required maintenance (compatibility support only) and extended support agreement
򐂰 z/VM Version 6 Release 4 with PTFs (preview)
򐂰 z/VM Version 6 Release 3 with PTFs
򐂰 z/VM Version 6 Release 2 with PTFs
򐂰 KVM for IBM z Systems Version 1.1.1
򐂰 z/VSE Version 6 Release 1 with PTFs
򐂰 z/VSE Version 5 Release 2 with PTFs
򐂰 z/VSE Version 5 Release 1 with PTFs
򐂰 z/TPF Version 1 Release 1 with PTFs
򐂰 Linux on z Systems distributions:
– SUSE Linux Enterprise Server (SLES): SLES 12 and SLES 11
– Red Hat Enterprise Linux (RHEL): RHEL 7 and RHEL 6
– Customers should monitor for newly supported distribution releases
For recommended and new Linux on z Systems distribution levels, see the following website:
http://www.ibm.com/systems/z/os/linux/resources/testedplatforms.html
The following operating systems will be supported on zBX Model 004:
򐂰 AIX (on POWER7 blades installed in IBM z BladeCenter Extension Model 004): AIX 5.3, AIX 6.1, AIX 7.1 and later releases, and PowerVM Enterprise Edition
򐂰 Linux (on IBM BladeCenter HX5 blades installed in zBX Model 004):
– Red Hat RHEL 5.5 and later, 6.0 and later, 7.0 and later
– SLES 10 (SP4) and later, 11 SP1 and later, SLES 12 and later (64-bit only)
򐂰 Microsoft Windows (on IBM BladeCenter HX5 blades installed in zBX Model 004):
– Microsoft Windows Server 2012, Microsoft Windows Server 2012 R2
– Microsoft Windows Server 2008 R2 and Microsoft Windows Server 2008 (SP2) (Datacenter Edition recommended), 64-bit only
Together with support for IBM WebSphere software, IBM z13s servers provide full support for SOA, web services, Java Platform, Enterprise Edition, Linux, and Open Standards. The z13s server is intended to be a platform of choice for the integration of the newest generations of applications with existing applications and data.
z Systems software is also designed to take advantage of the many enhancements on z13 and z13s servers. Several platform enhancements have been announced with the z13:
򐂰 KVM for IBM z Systems 1.1.1
򐂰 z/VM Support
򐂰 z/OS Support
KVM for IBM z Systems 1.1.1
KVM for IBM z Systems offers a number of enhancements that were announced with z13s servers. Because KVM support is evolving on z Systems, monitor for new enhancements to KVM and to Linux on z Systems distributions.
These enhancements are intended to use many of the innovations of the server. KVM on z provides the following enhancements, among others:
򐂰 Simultaneous multithreading (SMT) exploitation
򐂰 Guest use of the Vector Facility for z/Architecture (SIMD)
򐂰 Hypervisor enhancements that include support for iSCSI and NFS
򐂰 Hypervisor Crypto use
򐂰 Enhanced RAS capabilities, such as improved first-failure data capture (FFDC)
򐂰 Improved high availability configuration
򐂰 Unattended installation of the hypervisor
z/VM Support
z/VM Support for the z13 servers includes but is not limited to these enhancements:
򐂰 z/VM 6.2 and 6.3 provide these enhancements:
– Guest support for Crypto Express5S, and support for 85 Crypto Express domains.
– Absolute Capping of an LPAR group, enabling each LPAR to consume capacity up to its individual limit.
򐂰 In addition, z/VM 6.3 will provide support for the following enhancements with service:
– Guest exploitation support of SMC-D.
– Dynamic Memory Management, which provides improved efficiency for memory upgrades by using only a portion of the reserved main storage for the partition, initializing and clearing just the amount of storage requested.
– Guest exploitation of the Vector Facility (SIMD).
򐂰 z/VM SMT exploitation for IFL processors in a Linux only mode or for a z/VM mode LPAR. Note that z/VM SMT exploitation support does not virtualize threads for Linux guests, but z/VM itself is designed to achieve higher throughput through SMT.
򐂰 z/VM CPU pools for limiting the CPU resources consumed by a group of virtual machines to a specific capacity.
򐂰 z/VM Multi-VSwitch Link Aggregation, which allows a port group of OSA-Express features to span multiple virtual switches within a single z/VM system or between multiple z/VM systems to increase optimization and utilization of OSA-Express when handling larger traffic loads.
z/OS Support
z/OS uses many of the new functions and features of IBM z13 servers that include but are not limited to the following:
򐂰 z/OS V2.2 supports zIIP processors in SMT mode to help improve throughput for zIIP workloads. This support is also available for z/OS V2.1 with PTFs.
򐂰 z/OS V2.2 supports up to 141 processors per LPAR or up to 128 physical processors per LPAR in SMT mode.
򐂰 z/OS V2.2 also supports up to 4 TB of real memory per LPAR. This support is also available on z/OS V2.1 with PTFs.
򐂰 z/OS V2.2 supports the vector extension facility (SIMD) instructions. This support is also available for z/OS V2.1 with PTFs.
򐂰 z/OS V2.2 supports up to four subchannel sets on IBM z13 and z13s servers to help relieve subchannel constraints, and can allow you to define larger I/O configurations that support multi-target Metro Mirror (PPRC) sets. This support is also available for z/OS V1.13 and z/OS V2.1 with service.
򐂰 z/OS V1.13 and later releases support the z13 and z13s FICON function to allow cascading of up to 4 FICON switches. Dynamic channel path management (DCM) support is provided for cascaded switches on z/OS V2.1 and later.
򐂰 z/OS V2.1 and later with PTFs is designed to use the Read Diagnostic Parameters (RDP) Extended Link Service (ELS) on z13 and z13s processors to retrieve and display information about the status of FICON fiber optic connections, and to provide health checks for diagnosing FICON error conditions that might help with early detection of deterioration of connection quality.
򐂰 z/OS V2.2 running on IBM z13 and z13s servers with IBM System Storage® DS8000® devices and a minimum MCL supports a new health check for FICON dynamic routing that is designed to check all components of a dynamic routing fabric. This support, which is also available for z/OS V1.13 and z/OS V2.1 with PTFs, can help you identify configuration errors that can result in data integrity exposures.
򐂰 z/OS V2.2, and z/OS V2.1 with PTFs, support the new LPAR absolute group capping function.
򐂰 z/OS V2.2 Communications Server supports the virtualization capability of 10GbE RoCE Express on IBM z13 and z13s processors. z/OS V2.2 is designed to support the SMC-D protocol for low-latency, high-bandwidth, cross-LPAR connections for applications.
򐂰 Exploitation of Crypto Express5S features is provided for z/OS V2.2 and with the Enhanced Cryptographic Support for z/OS V1.13 - z/OS V2.1 web deliverable.
򐂰 z/OS V2.2 XL C/C++ supports z13 and z13s processors with ARCH(11) and TUNE(11) parameters that are designed to take advantage of the new instructions.
򐂰 XL C/C++ support for SIMD instructions with the vector programming language extensions, and the IBM MASS and ATLAS libraries. This function is also available for z/OS V2.1 XL C/C++ with a web deliverable available at:
http://www.ibm.com/systems/z/os/zos/tools/downloads/#webdees
򐂰 New functions are available for ICSF in a new Cryptographic Support for z/OS V1R13 - z/OS V2R2 web deliverable, which is available for download from:
http://www.ibm.com/systems/z/os/zos/tools/downloads/
򐂰 More support for the z13 processor family is planned with the ICSF Cryptographic Support for z/OS V1R13 - z/OS V2R2 web deliverable in PTFs in the first quarter of 2016.
򐂰 Support for the new TKE 8.1 workstation.

1.8.2 IBM compilers

The following IBM compilers for z Systems can use z13s servers:
򐂰 Enterprise COBOL for z/OS
򐂰 Enterprise PL/I for z/OS
򐂰 XL C/C++ for Linux on z Systems
򐂰 z/OS XL C/C++
The compilers increase the return on your investment in z Systems hardware by maximizing application performance by using the compilers’ advanced optimization technology for z/Architecture. Compilers like COBOL, C/C++, and Java all use SIMD. Through their support of web services, XML, and Java, they allow for the modernization of existing assets into web-based applications. They support the latest IBM middleware products (CICS, DB2, and IMS), allowing applications to use their latest capabilities.
To fully use the capabilities of z13s servers, you must compile by using the minimum level of each compiler as specified in Table 1-2.
Table 1-2 Supported IBM compiler levels

Compiler                          Minimum level
Enterprise COBOL for z/OS         V5.2
Enterprise PL/I for z/OS          V4.5
XL C/C++ for Linux on z Systems   V1.1
z/OS XL C/C++                     V2.1ᵃ

a. Web update required
To obtain the best performance, you must specify an architecture level of 11 by using the -qarch=arch11 option for the XL C/C++ for Linux on z Systems compiler or the ARCH(11) option for the other compilers. This option grants the compiler permission to use machine instructions that are available only on z13s servers.
Because specifying the architecture level of 11 results in a generated application that uses instructions that are available only on the z13s or z13 servers, the application will not run on earlier versions of the hardware. If the application must run on z13s servers and on older hardware, specify the architecture level corresponding to the oldest hardware on which the application needs to run. For more information, see the documentation for the ARCH or -qarch options in the guide for the corresponding compiler product.
Chapter 2. Central processor complex hardware components
This chapter introduces the IBM z13s hardware components, significant features and functions, and their characteristics and options. The main objective of this chapter is to explain the z13s hardware building blocks, and how these components interconnect from a physical point of view. This information can be useful for planning purposes, and can help to define configurations that fit your requirements.
This chapter provides information about the following topics:
򐂰 Frame and drawers
򐂰 Processor drawer concept
򐂰 Single chip modules
򐂰 Memory
򐂰 Reliability, availability, and serviceability
򐂰 Connectivity
򐂰 Model configurations
򐂰 Power considerations
򐂰 Summary of z13s structure

2.1 Frame and drawers

The z Systems frames are enclosures that are built to Electronic Industries Alliance (EIA) standards. The z13s central processor complex (CPC) has one 42U EIA frame, which is shown in Figure 2-1. The frame has positions for one or two CPC drawers, and a combination of up to two PCIe drawers and up to one I/O drawer.
Figure 2-1 z13s frame: Rear and front view

2.1.1 The z13s frame

The frame includes the following elements, which are shown in Figure 2-1 from top to bottom:
򐂰 Two Support Element (SE) servers that are installed at the top of the A frame. In previous z Systems, the SEs were notebooks installed on the swing tray. For z13s servers, the SEs are 1U servers that are mounted at the top of the 42U EIA frame. The external LAN interface connections are now directly connected to the SEs at the rear of the system.
򐂰 Optional Internal Battery Features (IBFs) that provide the function of a local uninterrupted power source. The IBF further enhances the robustness of the power design, increasing power line disturbance immunity. It provides battery power to preserve processor data in a loss of power on both power feeds from the utility company. The IBF provides battery power to preserve full system function despite the loss of power. It enables continuous operation through intermittent losses, brownouts, and power source switching, or can provide time for an orderly shutdown in a longer outage.
Table 10-2 on page 372 lists the IBF holdup times for various configurations.
򐂰 Two Bulk Power Assemblies (BPAs) that house the power components, with one in the front and the other in the rear of the system. Each BPA is equipped with up to three Bulk Power Regulators (BPRs), a controller, and two distributor cards. The number of BPRs varies depending on the configuration of the z13s servers. For more information, see the note “The Top Exit I/O Cabling feature adds 15 cm (6 in.) to the width of the frame and about 60 lbs (27.3 kg) to the weight.” on page 371.
򐂰 Two side-by-side System Control Hubs (SCHs) that are the replacement for the Bulk Power Hubs that were used in previous z Systems, and provide the internal communication among the various system elements.
򐂰 Up to two CPC drawers. At least one CPC drawer must be installed, referenced as drawer 1. The additional one can be installed above the first CPC drawer, and is referenced as drawer 0.
– A Model N10 CPC drawer contains one node, consisting of two processor unit Single Chip Modules (PU SCMs), one storage controller SCM (SC SCM), and the associated memory slots.
• The N10 model is always a single CPC drawer system.
– A Model N20 CPC drawer contains two nodes, with four PU SCMs and two SC SCMs.
• The N20 model can be a one or two CPC drawer system.
– Memory dual inline memory modules (DIMMs), point of load regulators, and fanout cards for internal and external connectivity.
򐂰 A newly designed swing tray that is in front of the I/O drawer slots, and contains the keyboards and displays that connect to the two SEs.
򐂰 Combinations of PCIe and I/O drawers, as shown in Table 2-1. The older I/O drawer can only be carried forward by way of upgrade from previous z114 or zBC12 systems. The number of CPC drawers and PCIe and InfiniBand fanouts are also shown. The general rule is that the number of CPC drawers plus the number of PCIe and I/O drawers cannot be greater than four due to power restrictions.
Table 2-1 PCIe drawer and I/O drawer on z13s servers

Configuration          N10 (1 CPC drawer)   N20 (1 CPC drawer)   N20 (2 CPC drawers)
PCIe fanouts           4                    8                    16
InfiniBand fanouts     2                    4                    8
I/O drawers (legacy)   0-1                  0-1                  0-1
PCIe I/O drawers       0-1                  0-2                  0-2
In Figure 2-2, the various I/O drawer configurations are displayed when one CPC drawer is installed, for both Model N10 and N20. The view is from the rear of the A frame.
򐂰 PCIe drawers are installed from the top down.
򐂰 An I/O drawer (legacy) is only present during a MES carry forward and is always at the bottom of the frame.
Figure 2-2 z13s (one CPC drawer) I/O drawer configurations
In Figure 2-3, the various I/O drawer configurations are displayed when two CPC drawers are installed for the Model N20 only. The view is from the rear of the A frame.
򐂰 PCIe drawers are installed from the top down.
򐂰 An I/O drawer (legacy) is only present during a MES carry forward, and is always at the bottom of the frame.
Figure 2-3 z13s (two CPC Drawers) I/O Drawer Configurations

2.1.2 PCIe I/O drawer and I/O drawer features

Each CPC drawer has PCIe Generation3 fanout slots and InfiniBand fanout slots to support two types of I/O infrastructure for data transfer:
򐂰 PCIe I/O infrastructure with a bandwidth of 16 GBps
򐂰 InfiniBand I/O infrastructure with a bandwidth of 6 GBps
PCIe I/O infrastructure
The PCIe I/O infrastructure uses the PCIe fanout to connect to the PCIe I/O drawer, which can contain the following features:
򐂰 FICON Express16S, a two-port card (long wavelength (LX) or short wavelength (SX)) with two physical channel IDs (PCHIDs)
򐂰 FICON Express8S (two-port card, LX or SX, and two PCHIDs)
򐂰 Open Systems Adapter (OSA)-Express5S features:
– OSA-Express5S 10 Gb Ethernet (one-port card, Long Reach (LR) or Short Reach (SR), and one PCHID)
– OSA-Express5S Gb Ethernet (two-port card, LX or SX, and one PCHID)
– OSA-Express5S 1000BASE-T Ethernet (two-port card, RJ-45, and one PCHID)
򐂰 OSA-Express4S features (only for a carry-forward MES):
– OSA-Express4S 10 Gb Ethernet (one-port card, LR or SR, and one PCHID)
– OSA-Express4S Gb Ethernet (two-port card, LX or SX, and one PCHID)
– OSA-Express4S 1000BASE-T Ethernet (two-port card, RJ-45, and one PCHID)
򐂰 Crypto Express5S feature. Each feature holds one PCI Express cryptographic adapter. Each adapter can be configured during installation as a Secure IBM Common Cryptographic Architecture (CCA) coprocessor, as a Secure IBM Enterprise Public Key Cryptography Standards (PKCS) #11 (EP11) coprocessor, or as an accelerator.
򐂰 Flash Express. Each Flash Express feature occupies two slots in the PCIe I/O drawer, but does not have a CHPID type. Logical partitions (LPARs) in all channel subsystems (CSSs) have access to the features.
򐂰 10 GbE Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) Express. It is a two-port card, and up to 31 LPARs can share a physical adapter.
򐂰 zEnterprise Data Compression (zEDC) Express. The zEDC Express feature occupies one slot in the PCIe I/O drawer and uses one PCHID. Up to 15 partitions can share the feature concurrently.
InfiniBand I/O infrastructure
InfiniBand I/O infrastructure uses the HCA2-C fanout to connect to I/O drawers. The I/O drawers can contain only these features:
򐂰 FICON Express8 cards, maximum quantity eight. Each card has four ports, LX or SX, and four PCHIDs.

2.2 Processor drawer concept

The z13s CPC drawer contains up to six SCMs, memory, symmetric multiprocessor (SMP) connectivity, and connectors to support:
򐂰 PCIe I/O drawers (through PCIe fanout hubs)
򐂰 I/O drawers (through InfiniBand fanout features)
򐂰 Coupling links fanouts
z13s servers can have up to two CPC drawers installed (the minimum is one CPC drawer).
The contents of the CPC drawer and its components are model dependent. The Model N10 CPC drawer that is shown in Figure 2-4 has a single node, half the structure of the Model N20 CPC drawer that has two nodes.
Figure 2-4 Model N10 components (top view)
򐂰 A single node drawer contains (Figure 2-4):
– Two PU chips with up to 13 active cores
– One SC chip (480 MB L4 cache)
– Four PCIe fanout slots
– Two InfiniBand fanout slots
– One memory controller per PU chip
– Five DDR3 DIMM slots per memory controller
– Two Flexible Support Processors (FSPs)
– Node 1 resources (SCMs, memory, and fanouts) are not installed in the Model N10
򐂰 A two node drawer contains (Figure 2-5 on page 41):
– Four PU chips, with up to 26 active PUs
– Two SC chips (960 MB L4 cache)
– Eight PCIe fanout slots
– Four InfiniBand fanout slots
– One memory controller per PU chip
– Five DDR3 DIMM slots per memory controller
– Two FSPs
Figure 2-5 Model N20 components (top view)
Figure 2-6 shows the front view of the CPC drawer with fanout slots and FSP slots for both a Model N20 and a Model N10.
Figure 2-6 CPC drawer (front view)
Figure 2-7 shows the memory DIMM to PU SCM relationship and the various buses on the nodes. Memory is connected to the SCMs through memory control units (MCUs). There is one MCU per PU SCM, which provides the interface to the controller on the memory DIMMs. A memory controller drives five DIMM slots.
Figure 2-7 CPC drawer logical structure
The buses are organized as follows:
򐂰 The GX++ I/O bus slots provide connectivity for host channel adapters (HCAs). They are fully buffered, and can sustain up to 6 GBps data transfer per bus direction. GX++ I/O slots provide support for InfiniBand and non-PCIe I/O features (FICON Express8).
򐂰 The PCIe I/O buses provide connectivity for PCIe fanouts and can sustain up to 18 GBps data traffic per bus direction.
򐂰 The X-bus provides interconnects between the SC chip and the PU chips, and between the PU chips, in the same node.
򐂰 The S-bus provides interconnects between the SC chips in the same drawer.
򐂰 The A-bus provides interconnects between SC chips in different drawers (through SMP cables).
򐂰 Processor support interfaces (PSIs) are used to communicate with the FSP cards for system control.
¹ The drawing shows eight cores per PU chip. This is by design: the PU chip is a FRU shared with the z13. When installed in a z13s server, each PU chip has either six or seven active cores.

2.2.1 CPC drawer interconnect topology

Figure 2-8 shows the point-to-point topology for CPC drawers and node communication through the SMP cables. Each CPC drawer communicates directly with all other CPC drawers in the CPC through two separate paths.
Figure 2-8 Drawer to drawer communication
The CPC drawers are populated from bottom to top. Table 2-2 indicates where the CPC drawers are installed.
Table 2-2 CPC drawers installation order and position in Frame A

CPC drawer     Position in Frame A
CPC drawer 1   A21A
CPC drawer 0   A25A

2.2.2 Oscillator

z13s servers have two oscillator cards (OSCs): One primary and one backup. If the primary OSC fails, the secondary detects the failure, takes over transparently, and continues to provide the clock signal to the CPC. The two oscillators have Bayonet Neill-Concelman (BNC) connectors that provide pulse per second signal (PPS) synchronization to an external time source with PPS output.
The SEs provide the Simple Network Time Protocol (SNTP) client function. When Server Time Protocol (STP) is used, the time of an STP-only Coordinated Timing Network (CTN) can be synchronized to the time that is provided by an NTP server. This configuration allows time-of-day (TOD) synchronization in a heterogeneous platform environment.
The accuracy of an STP-only CTN is improved by using an NTP server with the PPS output signal as the external time source (ETS). NTP server devices with PPS output are available from several vendors that offer network timing solutions. A cable connection from the PPS port on the OSC to the PPS output of the NTP server is required when the z13s server is using STP and is configured in an STP-only CTN using NTP with PPS as the external time source. z13s servers cannot participate in a mixed CTN. They can participate only in an STP only CTN.
STP tracks the highly stable and accurate PPS signal from the NTP server and maintains an accuracy of 10 µs to the ETS, as measured at the PPS input of IBM z13s servers.
If STP uses an NTP server without PPS, a time accuracy of 100 ms to the ETS is maintained.
Although not part of the CPC drawer design, the OSC cards are located beside the drawers, and are connected to the same backplane to which the drawers are connected. Both drawers connect to the OSCs.
Figure 2-9 shows the location of the two OSC cards with BNC connectors for PPS on the CPC, which is beside the drawer 1 and drawer 0 locations.
Figure 2-9 Oscillators cards
Tip: STP is available as FC 1021. It is implemented in the Licensed Internal Code (LIC), and allows multiple servers to maintain time synchronization with each other and synchronization to an ETS. For more information, see the following publications:
򐂰 Server Time Protocol Planning Guide, SG24-7280
򐂰 Server Time Protocol Implementation Guide, SG24-7281
򐂰 Server Time Protocol Recovery Guide, SG24-7380

2.2.3 System control

Various system elements are managed through the flexible service processors (FSPs). An FSP is based on the IBM PowerPC® microprocessor technology. Each FSP card has two ports that connect to two internal Ethernet LANs, through system control hubs (SCH1 and SCH2). The FSPs communicate with the SEs and provide a subsystem interface (SSI) for controlling components.
Figure 2-10 depicts a logical diagram of the system control design.
Figure 2-10 Conceptual overview of system control elements
Note: The maximum number of drawers (CEC and I/O) is four for z13s servers. The diagram in Figure 2-10 references the various supported FSP connections.
A typical FSP operation is to control a power supply. An SE sends a command to the FSP to start the power supply. The FSP (through SSI connections) cycles the various components of the power supply, monitors the success of each step and the resulting voltages, and reports this status to the SE.
Most system elements are duplexed (n+1), and each element has at least one FSP. Two internal Ethernet LANs are used by the two SEs, for redundancy. Crossover capability between the LANs is available so that both SEs can operate on both LANs.
The HMCs and SEs are connected directly to one or two Ethernet Customer LANs. One or more HMCs can be used.

2.2.4 CPC drawer power

Each CPC drawer gets its power from two distributed converter assemblies (DCAs) in the CPC drawer (see Figure 2-11). The DCAs provide the power for the CPC drawer. Loss of one DCA leaves enough power to satisfy CPC drawer power requirements. The DCAs can be concurrently maintained, and are accessed from the rear of the frame. During a blower failure, the adjacent blower increases speed to satisfy cooling requirements until the failed blower is replaced.
Figure 2-11 Redundant DCAs and blowers for CPC drawers

2.3 Single chip modules

The SCM is a multi-layer metal substrate module that holds either one PU chip or one SC chip. Its size is 678.76 mm² (28.4 mm x 23.9 mm). Each node of a CPC drawer has three SCMs: two PU SCMs and one SC SCM. The Model N20 CPC drawer has six SCMs (four PU SCMs and two SC SCMs), with more than 20 billion transistors in total.

2.3.1 Processor units and system control chips

The two types of SCMs (PU and SC) are shown in Figure 2-12.
Figure 2-12 Single chip modules (PU SCM and SC SCM) N20 CPC Drawer
Both PU and SC chips use CMOS 14S0 process state-of-the-art semiconductor technology, which is implemented with 17-layer (PU chip) or 15-layer (SC chip) copper interconnections and Silicon-On-Insulator (SOI) technologies. The chip lithography line width is 22 nm.
The SCMs are plugged into a card that is part of the CPC drawer packaging. The interconnectivity between the CPC drawers is accomplished through SMP connectors and cables. One inter-drawer connection per node of the CPC drawer is used for the two drawer Model N20 configuration. This configuration allows a multi drawer system to be displayed as a symmetric multiprocessor (SMP) system.
Each node has three SCMs: Two PU SCMs and one SC SCM.

2.3.2 Processor unit (PU) chip

The z13s PU chip (installed as a PU SCM) is an evolution of the zBC12 core design. It uses CMOS 14S0 technology, out-of-order instruction processing, pipeline enhancements, dynamic simultaneous multithreading (SMT), single-instruction multiple-data (SIMD), and redesigned, larger caches.
By design, each PU chip has eight cores. Core frequency is 4.3 GHz with a cycle time of 0.233 ns. When installed in a z13s server, the PU chips have either six or seven active cores. This limit means that a Model N10 has 13 active cores, and the Model N20 has 26 active cores. Model N10 has 10 customizable cores, whereas Model N20 has 20 customizable cores. A schematic representation of the PU chip is shown in Figure 2-13.
Figure 2-13 PU Chip Floorplan
Each PU chip has 3.99 billion transistors. Each one of the eight cores has its own L1 cache with 96 KB for instructions and 128 KB for data. Next to each core is its private L2 cache, with 2 MB for instructions and 2 MB for data.
Each PU chip has one L3 cache, with 64 MB. This 64 MB L3 cache is a store-in shared cache across all cores in the PU chip. It has 192 x 512 KB eDRAM macros, dual address-sliced and dual store pipe support, an integrated on-chip coherency manager, cache, and cross-bar switch. The L3 directory filters queries from the local L4. Both L3 slices can deliver up to 16 GBps bandwidth to each core simultaneously. The L3 cache interconnects the eight cores, GX++ I/O buses, PCIe I/O buses, and memory controllers (MCs) with SC chips.
The MC function controls access to memory. The GX++ I/O bus controls the interface to the InfiniBand fanouts, while the PCIe bus controls the interface to PCIe fanouts. The chip controls traffic between the cores, memory, I/O, and the L4 cache on the SC chips.
One coprocessor is dedicated for data compression and encryption functions for each core. The compression unit is integrated with the CP Assist for Cryptographic Function (CPACF), benefiting from combining (or sharing) the use of buffers and interfaces. The assist provides high-performance hardware encrypting and decrypting support for clear key operations.
For more information, see 3.4.5, “Compression and cryptography accelerators on a chip” on page 95.

2.3.3 Processor unit (core)

Each processor unit, or core, is a superscalar and out-of-order processor that has 10 execution units and two load/store units, which are divided into two symmetric pipelines as follows:
򐂰 Four fixed-point units (FXUs) (integer)
򐂰 Two load/store units (LSUs)
򐂰 Two binary floating-point units (BFUs)
򐂰 Two binary coded decimal floating-point units (DFUs)
򐂰 Two vector floating point units (vector execution units (VXUs))
Up to six instructions can be decoded per cycle, and up to 10 instructions/operations can be initiated to run per clock cycle. The running of the instructions can occur out of program order, and memory address generation and memory accesses can also occur out of program order. Each core has special circuitry to display execution and memory accesses in order to the software. Not all instructions are directly run by the hardware, which is the case for several complex instructions. Some are run by millicode, and some are broken into multiple operations that are then run by the hardware.
Each core has the following functional areas, which are also shown in Figure 2-14 on page 50:
򐂰 Instruction sequence unit (ISU): This unit enables the out-of-order (OOO) pipeline. It tracks register names, OOO instruction dependency, and handling of instruction resource dispatch. This unit is also central to performance measurement through a function called instrumentation.
򐂰 Instruction fetch and branch (IFB) (prediction) and instruction cache and merge (ICM): These two subunits (IFB and ICM) contain the instruction cache, branch prediction logic, instruction fetching controls, and buffers. The relative size of these subunits is the result of the elaborate branch prediction design, which is described in 3.4.4, “Superscalar processor” on page 95.
򐂰 Instruction decode unit (IDU): The IDU is fed from the IFB buffers, and is responsible for the parsing and decoding of all z/Architecture operation codes.
򐂰 Load/store unit (LSU): The LSU contains the data cache. It is responsible for handling all types of operand accesses of all lengths, modes, and formats that are defined in the z/Architecture.
򐂰 Translation unit (XU): The XU has a large translation lookaside buffer (TLB) and the dynamic address translation (DAT) function that handles the dynamic translation of logical to physical addresses.
򐂰 Core pervasive unit (PC): Used for instrumentation and error collection.
򐂰 Vector and floating point units:
– Fixed-point unit (FXU): The FXU handles fixed-point arithmetic.
– Binary floating-point unit (BFU): The BFU handles all binary and hexadecimal floating-point and fixed-point multiplication operations.
– Decimal floating-point unit (DFU): The DFU runs both floating-point and decimal fixed-point operations and fixed-point division operations.
– Vector execution unit (VXU)
򐂰 Recovery unit (RU): The RU keeps a copy of the complete state of the system that includes all registers, collects hardware fault signals, and manages the hardware recovery actions.
򐂰 Dedicated Coprocessor (COP): The dedicated coprocessor is responsible for data compression and encryption functions for each core.
Figure 2-14 PU Core floorplan

2.3.4 PU characterization

In each CPC drawer, PUs can be characterized for client use. The characterized PUs can be used in general to run supported operating systems, such as z/OS, z/VM, and Linux on z Systems. They can also run specific workloads, such as Java, XML services, IPSec, and some DB2 workloads, or functions such as Coupling Facility Control Code (CFCC). For more information about PU characterization, see 3.5, “Processor unit functions” on page 100.
The maximum number of characterized PUs depends on the z13s model. Some PUs are characterized by the system as standard system assist processors (SAPs) to run the I/O processing. By default, there are at least two spare PUs per Model N20 system that are used to assume the function of a failed PU. The remaining installed PUs can be characterized for client use. The Model N10 uses unassigned PUs as spares when available. A z13s model nomenclature includes a number that represents the maximum number of PUs that can be characterized for client use, as shown in Table 2-3.
Table 2-3 Number of PUs per z13s model

Model   CPC drawers / PUs   CPs   IFLs/uIFLs   zIIPs   ICFs   Standard SAPs   Optional SAPs   Standard spares   IFP
N10     1 / 13              0-6   0-10         0-6     0-10   2               0-2             0                 1
N20     1 / 26              0-6   0-20         0-12    0-20   3               0-3             2                 1
N20     2 / 26              0-6   0-20         0-12    0-20   3               0-3             2                 1

2.3.5 System control chip

The SC chip uses the CMOS 14S0 22 nm SOI technology, with 15 layers of metal. It measures 28.4 x 23.9 mm, has 7.1 billion transistors, and has 2.1 billion cells of eDRAM. Each node of the CPC drawer has one SC chip. The L4 cache on each SC chip has 480 MB of non-inclusive cache and a 224 MB non-data inclusive coherent (NIC) directory, which results in 960 MB of non-inclusive L4 cache and 448 MB of NIC directory that is shared per CPC drawer.
Figure 2-15 shows a schematic representation of the SC chip.
Figure 2-15 SC chip diagram
Most of the SC chip space is taken by the L4 controller and the 480 MB L4 cache. The cache consists of four 120 MB quadrants with 256 x 1.5 MB eDRAM macros per quadrant. The L4 cache is logically organized as 16 address-sliced banks, with 30-way set associative. The L4 cache controller is a single pipeline with multiple individual controllers, which is sufficient to handle 125 simultaneous cache transactions per chip.
The L3 caches on PU chips communicate with the L4 caches through the attached SC chip by using unidirectional buses. L3 is divided into two logical slices. Each slice is 32 MB, and consists of two 16 MB banks. L3 is 16-way set associative. Each bank has 4 K sets, and the cache line size is 256 bytes.
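These figures are internally consistent, as a short cross-check shows. The following Python sketch is illustrative only (it is not part of the manual); it derives the bank, slice, and per-chip L3 sizes from the stated set count, associativity, and line size.

# Cross-check of the stated L3 cache geometry (illustrative only).
sets_per_bank = 4 * 1024          # 4 K sets per bank
ways = 16                         # 16-way set associative
line_bytes = 256                  # cache line size in bytes

bank_bytes = sets_per_bank * ways * line_bytes   # 16 MB per bank
slice_bytes = 2 * bank_bytes                     # 32 MB per logical slice (two banks)
l3_bytes = 2 * slice_bytes                       # 64 MB of L3 per PU chip (two slices)

for name, size in [("bank", bank_bytes), ("slice", slice_bytes), ("L3 per PU chip", l3_bytes)]:
    print(f"{name}: {size // 2**20} MB")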
The bus/clock ratio (2:1) between the L4 cache and the PU is controlled by the storage controller on the SC chip.
The SC chip also acts as an L4 cache cross-point switch for L4-to-L4 traffic to up to three remote CPC drawers through three bidirectional data buses. The SMP cables and system coherency manager use the L4 directory to filter snoop traffic from remote CPC drawers. This process uses an enhanced synchronous fabric protocol for improved latency and cache management. There are six clock domains, and the clock function is distributed between both SC chips.

2.3.6 Cache level structure

z13s implements a four level cache structure, as shown in Figure 2-16.
Figure 2-16 Cache level structure
Each core has its own 224-KB Level 1 (L1) cache, split into 96 KB for instructions (I-cache) and 128 KB for data (D-cache). The L1 cache is designed as a store-through cache, meaning that altered data is also stored in the next level of memory.
The next level is the Level 2 (L2) private cache on each core. This cache has 4 MB, split into a 2 MB D-cache and 2 MB I-cache. It is designed as a store-through cache.
The Level 3 (L3) cache is also on the PU chip. It is shared by the active cores, has 64 MB, and is designed as a store-in cache.
Cache levels L2 and L3 are implemented on the PU chip to reduce the latency between the processor and the L4 large shared cache, which is on the two SC chips. Each SC chip has 480 MB, which is shared by the PU chips on the node. The S-bus provides the inter-node interface between the two L4 caches (SC chips) in each drawer. The L4 cache uses a store-in design.
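As a compact recap of the four cache levels described in this section, the following Python sketch records the per-level sizes and write policies. The values are taken from the text above; the representation itself is only illustrative.

# Summary of the z13s cache hierarchy as described in this section (illustrative only).
cache_levels = [
    {"level": "L1", "scope": "per core",    "size": "96 KB I + 128 KB D", "policy": "store-through"},
    {"level": "L2", "scope": "per core",    "size": "2 MB I + 2 MB D",    "policy": "store-through"},
    {"level": "L3", "scope": "per PU chip", "size": "64 MB",              "policy": "store-in"},
    {"level": "L4", "scope": "per SC chip", "size": "480 MB",             "policy": "store-in"},
]

for level in cache_levels:
    print(f'{level["level"]}: {level["size"]} ({level["scope"]}, {level["policy"]})')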

2.4 Memory

The maximum physical memory size is directly related to the number of CPC drawers in the system. The Model N10 CPC drawer can contain up to 1,280 GB of physical memory. The Model N20 single drawer system can contain up to 2,560 GB, and the Model N20 with two drawers up to 5,120 GB of installed (physical) memory per system.
A z13s has more memory installed than the amount that is ordered. Part of the physical installed memory is used to implement the redundant array of independent memory (RAIM) design. As a result, for customer use, Model N10 can have up to 984 GB, Model N20 single CPC drawer up to 2008 GB, and Model N20 with two CPC drawers up to 4056 GB.
Table 2-4 shows the maximum and minimum memory sizes that you can order for each z13s model.
Table 2-4 z13s memory sizes
Model | Increment (GB) | Standard (GB) | Plan ahead (GB) (a)
N10 and N20 | 8 | 64 - 88 | 88 - 88
N10 and N20 | 32 | 120 - 344 | 120 - 344
N10 and N20 | 64 | 408 - 600 | 408 - 600
N10 only | 128 | 728 - 984 | 728 - 984
N20 only | 128 | 1112 - 2008 | 1112 - 2008
N20 (2) only (b) | 256 | 2264 - 4056 | 2264 - 4056
a. The maximum amount of Preplanned Memory capacity is 2 TB.
b. N20 with two CPC drawers (FC 1045)
The minimum physical installed memory is 160 GB per CPC drawer. The minimum initial amount of customer memory that can be ordered is 64 GB for all z13s models. The maximum customer memory size is based on the physical installed memory minus the RAIM (20% of physical) and the hardware system area (HSA) memory, which has a fixed amount of 40 GB.
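This relationship can be expressed as a simple calculation. The following Python sketch is illustrative only (the function name is hypothetical); it applies the 20% RAIM overhead and the fixed 40 GB HSA to the maximum physical memory of each model.

# Maximum customer memory from installed physical memory (illustrative only).
HSA_GB = 40  # fixed hardware system area

def max_customer_memory_gb(physical_gb):
    # Remove the RAIM overhead (20% of physical) and the HSA
    return int(physical_gb * 0.8) - HSA_GB

for model, physical_gb in [("N10", 1280), ("N20, one drawer", 2560), ("N20, two drawers", 5120)]:
    print(f"{model}: {physical_gb} GB installed -> {max_customer_memory_gb(physical_gb)} GB for customer use")
# Prints 984 GB, 2008 GB, and 4056 GB, matching the values stated above.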
Table 2-5 shows the memory granularity that is based on the installed customer memory. For more information, see the z Systems Processor Resource/Systems Manager Planning Guide SB10-7162.
Table 2-5 Central storage granularity for z13s servers
Largest Central Storage Amount (LCSA) LPAR Storage Granularity
LCSA <= 256 GB 512 MB
256 GB < LCSA <= 512 GB 1 GB
512 GB < LCSA <= 1024 GB 2 GB
1024 GB < LCSA <= 2048 GB 4 GB
2048 GB < LCSA <= 4096 GB 8 GB
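The granularity rule in Table 2-5 can be expressed as a simple lookup. The following Python sketch is illustrative only (the function name is hypothetical).

# LPAR storage granularity from Table 2-5 (illustrative only); LCSA in GB, result in MB.
def lpar_storage_granularity_mb(lcsa_gb):
    if lcsa_gb <= 256:
        return 512
    if lcsa_gb <= 512:
        return 1024
    if lcsa_gb <= 1024:
        return 2048
    if lcsa_gb <= 2048:
        return 4096
    if lcsa_gb <= 4096:
        return 8192
    raise ValueError("beyond the ranges listed in Table 2-5")

print(lpar_storage_granularity_mb(300))   # 1024 MB (1 GB)
print(lpar_storage_granularity_mb(1500))  # 4096 MB (4 GB)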
Note: The required granularity for all main storage fields of an LPAR for which an origin has been specified (for example, initial main storage amount, reserved main storage amount, and main storage origin) is fixed at 2 GB. This configuration helps to simplify customer management of absolute storage regarding 2 GB large page support for these partitions. In support of 2 GB large pages, all logical partition origin MBs and limit MBs must be on a 2 GB boundary.

2.4.1 Memory subsystem topology

The z13s memory subsystem uses high-speed, differential-ended communication memory channels to link host memory to the main memory storage devices.
Figure 2-17 shows an overview of the CPC drawer memory topology of a z13s server.
Figure 2-17 CPC drawer memory topology
Each CPC drawer has up to 20 DIMMs. DIMMs are connected to each PU chip through the MCUs. Each PU chip has one MCU, which uses five channels, one for each DIMM, with the fifth channel used for the RAIM implementation in a 4+1 design.
Each DIMM can be 16 GB, 32 GB, 64 GB, or 128 GB. DIMM sizes cannot be mixed in the same CPC drawer, but a two CPC drawer Model N20 can have different (but not mixed) DIMM sizes in each drawer.

2.4.2 Redundant array of independent memory

z13s servers use the RAIM technology. The RAIM design detects and recovers from failures of DRAMs, sockets, memory channels, or DIMMs.
The RAIM design requires the addition of one memory channel that is dedicated for reliability, availability, and serviceability (RAS), as shown in Figure 2-18.
Figure 2-18 RAIM configuration per node
The parity of the four “data” DIMMs is stored in the DIMMs that are attached to the fifth memory channel. Any failure in a memory component can be detected and corrected dynamically. This system simplifies the memory subsystem design, while maintaining a fully fault-tolerant RAIM design.
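The actual recovery logic is implemented in hardware with Reed-Solomon ECC (see the list that follows); the Python sketch below is only a toy illustration of the general 4+1 idea, in which the content of any one failed channel can be reconstructed from the remaining four.

# Toy illustration of 4+1 channel recovery (not the actual RAIM/ECC implementation).
def xor_parity(chunks):
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

data_channels = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # four data channels
parity_channel = xor_parity(data_channels)             # fifth channel

# Simulate losing channel 2 and rebuilding it from the survivors plus the parity channel
rebuilt = xor_parity([data_channels[0], data_channels[1], data_channels[3], parity_channel])
assert rebuilt == data_channels[2]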
The RAIM design provides the following layers of memory recovery:
򐂰 ECC with 90B/64B Reed Solomon code.
򐂰 DRAM failure, with marking technology in which two DRAMs can be marked and no half sparing is needed. A call for replacement occurs on the third DRAM failure.
򐂰 Lane failure with CRC retry, data-lane sparing, and clock-RAIM with lane sparing.
򐂰 DIMM failure with CRC retry, data-lane sparing, and clock-RAIM with lane sparing.
򐂰 DIMM controller ASIC failure.
򐂰 Channel failure.

2.4.3 Memory configurations

Physically, memory is organized in the following manner:
򐂰 Within a drawer, all slots must be populated with the same size DIMM.
򐂰 Each CPC drawer in a two drawer Model N20 can contain different amounts of memory.
򐂰 A CPC drawer can have available unused memory, which can be ordered as a memory upgrade and enabled by LIC without DIMM changes. The amount of memory that can be enabled by the client is the total physical installed memory minus the RAIM amount and minus the 40 GB of HSA memory.
򐂰 The memory controller on the adjacent PU chip manages the five memory slots.
򐂰 DIMM changes require a disruptive IML on z13s.
Model N10 memory configurations
The memory in Model N10 can be configured in the following manner:
򐂰 Ten DIMM slots per N10 drawer, supported by two memory controllers on PU chips with five slots each.
򐂰 Any remaining unpopulated DIMM banks, MD06-MD10 and MD11-MD15, must be plugged with memory DIMM airflow inserts.
򐂰 Drawer memory configurations based on DIMM sizes of 16 GB, 32 GB, 64 GB, and 128 GB, supporting up to 984 GB of customer available memory in the drawer.
򐂰 HSA size (40 GB) is fixed and is not taken from customer purchased memory.
Figure 2-19 shows the physical view of the N10 CPC drawer memory DIMM plug locations.
Figure 2-19 Model N10 memory plug locations
Table 2-6 shows the standard memory plug summary by node for new build systems.
Table 2-6 Model N10 physically installed memory
Customer memory (GB) | Total physical (GB) | Increment (GB) | MD16-MD20 (GB) | MD21-MD25 (GB) | Dial max (GB)
64 | 160 | 8 | 16 | 16 | 88
72 | 160 | 8 | 16 | 16 | 88
80 | 160 | 8 | 16 | 16 | 88
88 | 160 | 8 | 16 | 16 | 88
120 | 320 | 32 | 32 | 32 | 216
152 | 320 | 32 | 32 | 32 | 216
184 | 320 | 32 | 32 | 32 | 216
216 | 320 | 32 | 32 | 32 | 216
248 | 640 | 32 | 64 | 64 | 472
280 | 640 | 32 | 64 | 64 | 472
312 | 640 | 32 | 64 | 64 | 472
344 | 640 | 32 | 64 | 64 | 472
408 | 640 | 64 | 64 | 64 | 472
472 | 640 | 64 | 64 | 64 | 472
536 | 1280 | 64 | 128 | 128 | 984
600 | 1280 | 64 | 128 | 128 | 984
728 | 1280 | 128 | 128 | 128 | 984
856 | 1280 | 128 | 128 | 128 | 984
984 | 1280 | 128 | 128 | 128 | 984
Model N20 single drawer memory configurations
The memory in Model N20 with a single drawer can be configured in the following manner:
򐂰 Ten or twenty DIMM slots per N20 drawer, supported by two or four memory controllers with five slots each.
򐂰 The remaining unpopulated DIMM banks, MD06-MD10 and MD21-MD25, must be plugged with memory DIMM airflow inserts.
򐂰 Drawer memory configurations based on DIMM sizes of 16 GB, 32 GB, 64 GB, and 128 GB, supporting up to 2008 GB of customer available memory in the drawer.
򐂰 HSA size (40 GB) is fixed and is not taken from customer purchased memory.
Figure 2-20 shows the physical view of the N20 one CPC drawer memory DIMM plug locations.
Figure 2-20 Model N20 one drawer memory plug locations
Table 2-7 shows the standard memory plug summary by node for new build systems.
Table 2-7 Model N20 single CPC drawer - physically installed memory
Customer memory (GB) | Total physical (GB) | Increment (GB) | Node 1 MD06-MD10 / MD11-MD15 (GB) | Node 0 MD16-MD20 / MD21-MD25 (GB) | Dial max (GB)
64 | 160 | 8 | - / 16 | 16 / - | 88
72 | 160 | 8 | - / 16 | 16 / - | 88
80 | 160 | 8 | - / 16 | 16 / - | 88
88 | 160 | 8 | - / 16 | 16 / - | 88
120 | 320 | 32 | 16 / 16 | 16 / 16 | 216
152 | 320 | 32 | 16 / 16 | 16 / 16 | 216
184 | 320 | 32 | 16 / 16 | 16 / 16 | 216
216 | 320 | 32 | 16 / 16 | 16 / 16 | 216
248 | 640 | 32 | 32 / 32 | 32 / 32 | 472
280 | 640 | 32 | 32 / 32 | 32 / 32 | 472
312 | 640 | 32 | 32 / 32 | 32 / 32 | 472
344 | 640 | 32 | 32 / 32 | 32 / 32 | 472
408 | 640 | 64 | 32 / 32 | 32 / 32 | 472
472 | 640 | 64 | 32 / 32 | 32 / 32 | 472
536 | 1280 | 64 | 64 / 64 | 64 / 64 | 984
600 | 1280 | 64 | 64 / 64 | 64 / 64 | 984
728 | 1280 | 128 | 64 / 64 | 64 / 64 | 984
856 | 1280 | 128 | 64 / 64 | 64 / 64 | 984
984 | 1280 | 128 | 64 / 64 | 64 / 64 | 984
1112 | 2560 | 128 | 128 / 128 | 128 / 128 | 2008
1240 | 2560 | 128 | 128 / 128 | 128 / 128 | 2008
1368 | 2560 | 128 | 128 / 128 | 128 / 128 | 2008
1496 | 2560 | 128 | 128 / 128 | 128 / 128 | 2008
1624 | 2560 | 128 | 128 / 128 | 128 / 128 | 2008
1752 | 2560 | 128 | 128 / 128 | 128 / 128 | 2008
1880 | 2560 | 128 | 128 / 128 | 128 / 128 | 2008
2008 | 2560 | 128 | 128 / 128 | 128 / 128 | 2008
Model N20 two drawer memory configurations
The memory in Model N20 with two drawers can be configured in the following manner:
򐂰 Ten or twenty DIMM slots per N20 drawer, supported by two or four memory controllers with five slots each.
򐂰 The remaining unpopulated DIMM banks, MD06-MD10 and MD21-MD25, must be plugged with memory DIMM airflow inserts.
򐂰 Drawer memory configurations based on DIMM sizes of 16 GB, 32 GB, 64 GB, and 128 GB, supporting up to 2008 GB in the drawer and up to 4056 GB of customer available memory per system.
򐂰 HSA size (40 GB) is fixed and is not taken from customer purchased memory.
Figure 2-21 shows the physical view of the N20 two CPC drawer memory DIMM plug locations.
Figure 2-21 Model N20 two drawer memory plug locations
Table 2-8 shows the standard memory plug summary by node for new build systems.
Table 2-8 Model N20 with two CPC drawers - physically installed memory
Customer memory (GB) | Total physical (GB) | Increment (GB) | Drawer 1 Node 1 MD06-MD10 / MD11-MD15 | Drawer 1 Node 0 MD16-MD20 / MD21-MD25 | Drawer 0 (second drawer) Node 1 MD06-MD10 / MD11-MD15 | Drawer 0 Node 0 MD16-MD20 / MD21-MD25 | Dial max (GB)
64 | 320 | 8 | - / 16 | 16 / - | - / 16 | 16 / - | 88
72 | | | - / 16 | 16 / - | - / 16 | 16 / - |
80 | | | - / 16 | 16 / - | - / 16 | 16 / - |
88 | | | - / 16 | 16 / - | - / 16 | 16 / - |
120 | 480 | 32 | 16 / 16 | 16 / 16 | - / 16 | 16 / - | 216
152 | | | 16 / 16 | 16 / 16 | - / 16 | 16 / - |
184 | | | 16 / 16 | 16 / 16 | - / 16 | 16 / - |
216 | | | 16 / 16 | 16 / 16 | - / 16 | 16 / - |
248 | 640 | | 32 / 32 | 32 / 32 | - / 16 | 16 / - | 472
280 | | | 32 / 32 | 32 / 32 | - / 16 | 16 / - |
312 | | | 32 / 32 | 32 / 32 | - / 16 | 16 / - |
344 | | | 32 / 32 | 32 / 32 | - / 16 | 16 / - |
408 | | 64 | 32 / 32 | 32 / 32 | - / 16 | 16 / - |
472 | | | 32 / 32 | 32 / 32 | - / 16 | 16 / - |
536 | 1280 | | 64 / 64 | 64 / 64 | - / 16 | 16 / - | 984
600 | | | 64 / 64 | 64 / 64 | - / 16 | 16 / - |
728 | | 128 | 64 / 64 | 64 / 64 | - / 16 | 16 / - |
856 | | | 64 / 64 | 64 / 64 | - / 16 | 16 / - |
984 | | | 64 / 64 | 64 / 64 | - / 16 | 16 / - |
1112 | 2720 | | 128 / 128 | 128 / 128 | - / 16 | 16 / - | 2008
1240 | | | 128 / 128 | 128 / 128 | - / 16 | 16 / - |
1368 | | | 128 / 128 | 128 / 128 | - / 16 | 16 / - |
1496 | | | 128 / 128 | 128 / 128 | - / 16 | 16 / - |
1624 | | | 128 / 128 | 128 / 128 | - / 16 | 16 / - |
1752 | | | 128 / 128 | 128 / 128 | - / 16 | 16 / - |
1880 | | | 128 / 128 | 128 / 128 | - / 16 | 16 / - |
2008 | | | 128 / 128 | 128 / 128 | - / 16 | 16 / - |
2264 | 2880 | 256 | 128 / 128 | 128 / 128 | 16 / 16 | 16 / 16 | 2264
2520 | 3200 | | 128 / 128 | 128 / 128 | 32 / 32 | 32 / 32 | 2520
2776 | 3840 | | 128 / 128 | 128 / 128 | 64 / 64 | 64 / 64 | 3032
3032 | | | 128 / 128 | 128 / 128 | 64 / 64 | 64 / 64 |
3288 | 5120 | | 128 / 128 | 128 / 128 | 128 / 128 | 128 / 128 | 4056
3544 | | | 128 / 128 | 128 / 128 | 128 / 128 | 128 / 128 |
3800 | | | 128 / 128 | 128 / 128 | 128 / 128 | 128 / 128 |
4056 | | | 128 / 128 | 128 / 128 | 128 / 128 | 128 / 128 |
Figure 2-22 illustrates how the physical installed memory is allocated on a z13s server, showing HSA memory, RAIM, customer memory, and the remaining available unused memory that can be enabled by using LIC when required.
Figure 2-22 Memory allocation diagram
As an example, a z13s Model N20 (one CPC drawer) that is ordered with 1496 GB of memory has the following memory sizes:
򐂰 Physical installed memory is 2560 GB: 1280 GB on Node 0 and 1280 GB on Node 1.
򐂰 CPC drawer 1 has 40 GB of HSA memory and up to 2008 GB for customer memory.
򐂰 Because the customer ordered 1496 GB, provided the granularity rules are met, 512 GB are available for future nondisruptive upgrades by activating LIC (see the sketch that follows).
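Applying the same arithmetic as in 2.4, "Memory", the remaining LICCC headroom for this example can be checked with the following illustrative Python sketch.

# Headroom calculation for the example above (N20 with one CPC drawer); illustrative only.
physical_gb = 2560                                # installed DIMM capacity
hsa_gb = 40                                       # fixed HSA
dial_max_gb = int(physical_gb * 0.8) - hsa_gb     # 2008 GB orderable maximum
ordered_gb = 1496

print(dial_max_gb - ordered_gb, "GB available for LIC-only upgrades")   # 512 GB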
Memory upgrades are satisfied from already installed unused memory capacity until it is exhausted. When no more unused memory is available from the installed memory cards (DIMMs), one of the following additions must occur:
򐂰 Memory cards must be upgraded to a higher capacity.
򐂰 A CPC drawer with more memory must be added.
򐂰 Memory cards (DIMMs) must be added.
Memory upgrades are concurrent when they require no change to the physical memory cards. A memory card change is always disruptive.
When activated, an LPAR can use memory resources that are in any CPC drawer. No matter where the memory is, an LPAR has access to that memory up to a maximum of 4 TB. This access is possible because, despite the CPC drawer structure, the z13s is still an SMP system. The existence of an I/O drawer in the CPC limits the LPAR memory to 1 TB. For more information, see 3.7, “Logical partitioning” on page 116.

2.4.4 Memory upgrades

Memory upgrades can be ordered and enabled using Licensed Internal Code Configuration Control (LICCC) by upgrading the DIMM cards, by adding new DIMM cards, or by adding a CPC drawer.
For a model upgrade that results in the addition of a CPC drawer, the minimum memory increment is added to the system. Each CPC drawer has a minimum physical memory size of 160 GB.
During a model upgrade, adding a CPC drawer is a disruptive operation. Adding physical memory to any drawer is also disruptive.

2.4.5 Preplanned memory

The idea of preplanned memory is to allow for nondisruptive memory upgrades. Any hardware that is required would be pre-plugged based on a target capacity specified by the customer. The maximum amount of Preplanned Memory capacity is 2 TB. Pre-plugged hardware can be enabled by using an LICCC order placed by the customer when additional memory capacity is needed.
You can order this LICCC through these channels:
򐂰 The IBM Resource Link® (a login is required): http://www.ibm.com/servers/resourcelink/
򐂰 Your IBM representative
The installation and activation of any preplanned memory requires the purchase of the required feature codes (FC), which are described in Table 2-9. FCs 1996 and 1993 are used to track the quantity of 16 GB and 8 GB physical increments of plan ahead memory capacity.
The payment for plan-ahead memory is a two-phase process. One charge takes place when the plan-ahead memory is ordered, and another charge takes place when the prepaid memory is activated for actual use. For the exact terms and conditions, contact your IBM representative.
Table 2-9 Feature codes for plan-ahead memory
Memory | z13s feature code / increment
Preplanned memory (charged when physical memory is installed; used for tracking the quantity of physical increments of plan-ahead memory capacity) | FC 1993 / 8 GB, FC 1996 / 16 GB
Preplanned memory activation (charged when plan-ahead memory is enabled; used for tracking the quantity of increments of plan-ahead memory being activated) | FC 1903
You install preplanned memory by ordering FC 1993 / 1996. The ordered amount of plan-ahead memory is charged with a reduced price compared to the normal price for memory. One FC 1993 is needed for each 8 GB physical increment, and one FC 1996 for each 16 GB physical increment.
The activation of installed pre-planned memory is achieved by ordering FC 1903, which causes the other portion of the previously contracted charge price to be invoiced. FC 1903 indicates 8 GB, and FC 1996 for 16 GB of LICCC increments of memory capacity.
Memory upgrades: Normal memory upgrades use up the plan-ahead memory first.

2.5 Reliability, availability, and serviceability

IBM z Systems continue to deliver enterprise class RAS with IBM z13s servers. The main intent behind RAS is to prevent or tolerate (mask) outages and to provide the necessary instrumentation (in hardware, LIC/microcode, and software) to capture (collect) the relevant failure information to help identify an issue without requiring a reproduction of the event. These outages can be planned or unplanned. Planned and unplanned outages can include the following situations (examples are not related to the RAS features of z Systems servers):
򐂰 A planned outage because of the addition of extra processor capacity
򐂰 A planned outage because of the addition of extra I/O cards
򐂰 An unplanned outage because of a failure of a power supply
򐂰 An unplanned outage because of a memory failure
The z Systems hardware has decades of intense engineering behind it, which has resulted in a robust and reliable platform. The hardware has many RAS features built into it.

2.5.1 RAS in the CPC memory subsystem

Patented error correction technology in the memory subsystem continues to provide the most robust error correction from IBM to date. Two full DRAM failures per rank can be spared and a third full DRAM failure can be corrected. DIMM level failures, including components such as the memory controller application-specific integrated circuit (ASIC), the power regulators, the clocks, and the system board can be corrected. Memory channel failures, such as signal lines, control lines, and drivers/receivers on the SCM, can be corrected. Upstream and
downstream data signals can be spared by using two spare wires on both the upstream and downstream paths. One of these signals can be used to spare a clock signal line (one upstream and one downstream). The following improvements were also added in the z13s servers:
򐂰 No cascading of memory DIMMs
򐂰 Independent channel recovery
򐂰 Double tabs for clock lanes
򐂰 Separate replay buffer per channel
򐂰 Hardware driven lane soft error rate (SER) and sparing

2.5.2 General z13s RAS features

z13s servers have the following RAS features:
򐂰 z13s servers provide a true N+1 (fully redundant) cooling function for the CPC drawers
and the PCIe drawers by using the blowers. If a blower fails, the second blower increases the speed of the fans to provide necessary cooling until the failed blower is replaced. The power supplies for the z13s servers are also based on the N+1 design. The second power supply can maintain operations and avoid an unplanned outage of the system.
򐂰 The z Systems processors have improved chip packaging (encapsulated chip connectors)
and use SER hardened latches throughout the design.
򐂰 z13s servers have N+2 point of load (POL) power conversion. This redundancy protects
the processor from the loss of voltage because of POL failures.
򐂰 z13s servers have N+2 redundancy on the environmental sensors (ambient temperature, relative humidity, air density, and corrosion). The air density sensor measures air pressure and is used to control blower speed.
򐂰 Enhanced bus structure using integrated time-domain reflectometry (TDR) technology.
򐂰 z13s servers have these Peripheral Component Interconnect Express (PCIe) service
enhancements:
– Mandatory end-to-end cyclic redundancy check (ECRC)
– Customer operation code separate from maintenance code
– Native PCIe firmware stack running on the integrated firmware processor (IFP) to manage isolation and recovery
IBM z13s servers continue to deliver robust server designs through exciting new technologies, hardening both new and classic redundancy.
For more information, see Chapter 9, “Reliability, availability, and serviceability” on page 355.

2.6 Connectivity

Connections to PCIe I/O drawers, I/O drawers, Parallel Sysplex InfiniBand (PSIFB) coupling, and Integrated Coupling Adapters (ICA) are driven from the CPC drawer fanout cards. These fanouts are on the front of the CPC drawer.
Figure 2-23 shows the location of the fanouts for a Model N10. Model N10 has four PCIe fanout slots and two IFB fanout slots. The CPC drawer has two FSPs for system control. LGXX is the location code.
Up to 4 PCIe fanouts (LG11 - LG14) and two IFB fanouts (LG09 - LG10) can be installed in the CPC drawer.
Figure 2-23 Model N10 drawer: Location of the PCIe and IFB fanouts
Figure 2-24 shows the location of the fanouts for a Model N20 with two CPC drawers. Each CPC drawer has two FSPs for system control. LGXX is the location code. If the model is an N20 single CPC drawer system, only CPC drawer 1 is installed.
Up to 8 PCIe fanouts (LG03 - LG06 and LG11 - LG14) and four IFB fanouts (LG07 - LG10) can be installed in each CPC drawer.
Figure 2-24 Model N20 two CPC drawer: Locations of the PCIe and IFB fanouts
A PCIe Generation 3 fanout connected to a PCIe I/O drawer can be repaired concurrently with the use of a redundant I/O interconnect. For more information, see 2.6.1, “Redundant I/O interconnect” on page 67.
Five types of fanouts are available:
򐂰 PCIe Generation 3 fanout card: This copper fanout provides connectivity to the PCIe
switch cards in the PCIe I/O drawer.
򐂰 Host Channel Adapter (HCA2-C): This copper fanout provides connectivity to the IFB-MP
cards in the I/O drawer.
򐂰 Integrated Coupling Adapter (ICA SR): This adapter provides coupling connectivity
between z13s and z13/z13s servers.
򐂰 Host Channel Adapter (HCA3-O (12xIFB)): This optical fanout provides 12x InfiniBand
coupling link connectivity up to 150 meters (492 ft.) distance to z13s, z13, zEC12, zBC12, z196, and z114 servers.
򐂰 Host Channel Adapter (HCA3-O LR (1xIFB)): This optical long reach fanout provides
1x InfiniBand coupling link connectivity up to a 10 km (6.2 miles) unrepeated distance, or 100 km (62 miles) when extended by using z Systems qualified DWDM equipment, to z13s, z13, zEC12, zBC12, z196, and z114 servers.
When you are configuring for availability, balance the channels, coupling links, and OSAs across drawers. In a system that is configured for maximum availability, alternative paths maintain access to critical I/O devices, such as disks and networks.
Note: Fanout plugging rules for z13s servers are different from those for the previous zEC12 and z114 servers.
򐂰 Preferred plugging for PCIe Generation 3 fanouts is always in CPC Drawer 1 (bottom drawer), alternating between the two nodes. Previous systems (zEC12 and z114) were plugged across two drawers. This configuration is for performance reasons and to maintain RAS characteristics.
򐂰 If the configuration contains two CPC drawers, two PCIe I/O drawers, and PCIe ICA coupling adapters, all PCIe Generation 3 fanouts are plugged in CPC Drawer 1, and all PCIe ICA coupling adapters are plugged in CPC Drawer 0 (top), alternating between the internal nodes to maintain node affinity.
A fanout can be repaired concurrently with the use of redundant I/O interconnect.

2.6.1 Redundant I/O interconnect

Redundancy is provided for both InfiniBand I/O and for PCIe I/O interconnects.
InfiniBand I/O connection
Redundant I/O interconnect is accomplished by the facilities of the InfiniBand I/O connections to the InfiniBand Multiplexer (IFB-MP) card. Each IFB-MP card is connected to a jack in the InfiniBand fanout of a CPC drawer. IFB-MP cards are half-high cards and are interconnected through the I/O drawer backplane. This configuration allows redundant I/O interconnect if the connection coming from a CPC drawer ceases to function. This situation can happen when, for example, a CPC drawer connection to the I/O Drawer is broken for maintenance.
A conceptual view of how redundant I/O interconnect is accomplished is shown in Figure 2-25.
Figure 2-25 Redundant I/O interconnect for I/O drawer
Normally, the HCA2-C fanout in a CPC drawer connects to the IFB-MP (A) card and services domain A in an I/O drawer. In the same fashion, a second HCA2-C fanout of a CPC drawer connects to the IFB-MP (B) card and services domain B in an I/O drawer. If the connection from the CPC drawer IFB-MP (A) to the I/O drawer is removed, connectivity to domain B is maintained. The I/O is guided to domain B through the interconnect between IFB-MP (A) and IFB-MP (B).
Note: Both IFB-MP cards must be installed in the I/O Drawer to maintain the interconnect across I/O domains. If one of the IFB-MP cards is removed, then the I/O cards in that domain (up to four) become unavailable.
PCIe I/O connection
The PCIe I/O drawer supports up to 32 PCIe features. They are organized in four hardware domains per drawer, as shown in Figure 2-26.
Figure 2-26 Redundant I/O interconnect for PCIe I/O drawer
Each domain is driven through a PCIe switch card. The two PCIe switch cards provide a backup path for each other through the passive connection in the PCIe I/O drawer backplane. During a PCIe fanout or cable failure, all 16 PCIe features in the two domains can be driven through a single PCIe switch card.
To support redundant I/O interconnect (RII) between front to back domain pairs 0,1 and 2,3, the two interconnects to each pair must be driven from two different PCIe fanouts. Normally, each PCIe interconnect in a pair supports the eight features in its domain. In backup operation mode, one PCIe interconnect supports all 16 features in the domain pair.
Note: The PCIe Gen3 Interconnect adapter must be installed in the PCIe Drawer to maintain the interconnect across I/O domains. If the PCIe Gen3 Interconnect adapter is removed, then the I/O cards in that domain (up to eight) become unavailable.
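The failover behavior of a domain pair can be modeled with a few lines of code. The following Python sketch is a hypothetical illustration (the names and structure are not from the manual, and it is not the actual firmware logic): each interconnect normally drives the eight features of its own domain, and a single surviving interconnect drives all 16 features of the pair.

# Hypothetical model of RII failover for one PCIe I/O drawer domain pair (illustrative only).
def features_per_interconnect(domain_pair, failed):
    healthy = [i for i in domain_pair if i not in failed]
    if not healthy:
        return {}                          # both paths lost: the domains become unavailable
    total_features = 8 * len(domain_pair)  # eight features per domain
    return {i: total_features // len(healthy) for i in healthy}

print(features_per_interconnect(("fanout_A", "fanout_B"), set()))          # {'fanout_A': 8, 'fanout_B': 8}
print(features_per_interconnect(("fanout_A", "fanout_B"), {"fanout_A"}))   # {'fanout_B': 16}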

2.7 Model configurations

When a z13s order is configured, PUs are characterized according to their intended usage. They can be ordered as any of the following items:
򐂰 CP: The processor is purchased and activated. CP supports the z/OS, z/VSE, z/VM,
z/TPF, and Linux on z Systems operating systems. It can also run Coupling Facility Control Code and IBM zAware code.
򐂰 Capacity marked CP: A processor that is purchased for future use as a CP is marked as
available capacity. It is offline and not available for use until an upgrade for the CP is installed. It does not affect software licenses or maintenance charges.
򐂰 IFL: The Integrated Facility for Linux is a processor that is purchased and activated for use
by z/VM for Linux guests and Linux on z Systems operating systems. It can also run the IBM zAware code.
򐂰 Unassigned IFL: A processor that is purchased for future use as an IFL. It is offline and
cannot be used until an upgrade for the IFL is installed. It does not affect software licenses or maintenance charges.
򐂰 ICF: An internal coupling facility (ICF) processor that is purchased and activated for use by
the Coupling Facility Control Code.
򐂰 zIIP: An IBM System z Integrated Information Processor (zIIP) that is purchased and
activated to run eligible workloads, such as Java code under the control of a z/OS Java virtual machine (JVM) or z/OS XML System Services, DB2 Distributed Relational Database Architecture (DRDA), or z/OS Communication Server IPSec.
򐂰 Additional SAP: An optional processor that is purchased and activated for use as an SAP.
A minimum of one PU that is characterized as a CP, IFL, or ICF is required per system. The maximum number of CPs is six, the maximum number of IFLs is 20, and the maximum number of ICFs is 20. The maximum number of zIIPs is always up to twice the number of PUs that are characterized as CPs.
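These rules can be summarized as a simple validity check. The following Python sketch is illustrative only: the function name is hypothetical, the limits are taken from Table 2-10 for a Model N20, and it assumes that all characterized engines draw from the 20 client-usable PUs.

# Illustrative check of N20 characterization rules (not an IBM configuration tool).
def valid_n20_characterization(cps, ifls, icfs, ziips):
    if cps + ifls + icfs == 0:
        return False                          # at least one CP, IFL, or ICF is required
    if cps > 6 or ifls > 20 or icfs > 20 or ziips > 12:
        return False                          # per-type maximums for the N20 (Table 2-10)
    if ziips > 2 * cps:
        return False                          # zIIPs are limited to twice the number of CPs
    return cps + ifls + icfs + ziips <= 20    # assumed shared pool of 20 client PUs

print(valid_n20_characterization(cps=4, ifls=6, icfs=1, ziips=8))   # True
print(valid_n20_characterization(cps=2, ifls=0, icfs=0, ziips=6))   # False (zIIP ratio exceeded)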
Not all PUs on a model must be characterized.
The following items are present in z13s servers, but they are not part of the PUs that clients purchase and require no characterization:
򐂰 An SAP to be used by the channel subsystem. The number of predefined SAPs depends
on the z13s model.
򐂰 One IFP that is used in the support of designated features, such as zEDC and 10GbE
RoCE.
򐂰 Two spare PUs, which can transparently assume any characterization during a permanent
failure of another PU.
The z13s model is based on the number of PUs that are available for client use in each configuration. The models are summarized in Table 2-10.
Table 2-10 z13s configurations
Model | Drawers / PUs | CPs | IFLs / uIFL | ICFs | zIIPs | Optional SAPs | Std. SAPs | Spares | IFP
N10 | 1/13 | 0 - 6 | 0 - 10 | 0 - 10 | 0 - 6 | 0 - 2 | 2 | 0 | 1
N20 | 1/26 | 0 - 6 | 0 - 20 | 0 - 20 | 0 - 12 | 0 - 3 | 3 | 2 | 1
N20 | 2/26 | 0 - 6 | 0 - 20 | 0 - 20 | 0 - 12 | 0 - 3 | 3 | 2 | 1
A capacity marker identifies the number of CPs that have been purchased. This number of purchased CPs is higher than or equal to the number of CPs that is actively used. The capacity marker marks the availability of purchased but unused capacity that is intended to be used as CPs in the future. They usually have this status for software-charging reasons. Unused CPs are not a factor when establishing the millions of service units (MSU) value that is used for charging monthly license charge (MLC) software, or when charged on a per-processor basis.

2.7.1 Upgrades

Concurrent CP, IFL, ICF, zIIP, or SAP upgrades are done within a z13s server. Concurrent upgrades require available PUs, and that extra PUs be installed previously, but not activated.
Spare PUs are used to replace defective PUs. On Model N10, unassigned PUs are used as spares. A fully configured Model N10 does not have any spares. Model N20 always has two dedicated spares.
If an upgrade request cannot be accomplished within the N10 configuration, a hardware upgrade to Model N20 is required. The upgrade involves extra hardware to add the second node in the first drawer, and might force the addition of a second drawer. The upgrade from N10 to N20 is disruptive.
Supported upgrade paths (see Figure 2-27 on page 71):
򐂰 Upgrade to z13s processors from earlier processors:
– From both the z114 and zBC12 servers
򐂰 Upgrade from z13s servers:
– Model N10 to N20
– Model N20 to z13 N30 Radiator-based (air-cooled) only
Note: Memory downgrades are not offered. Model downgrade (removal of a CPC drawer) is not offered, nor supported.
Figure 2-27 z13s upgrade paths
You can upgrade an IBM z114 or a zBC12 (frame roll) preserving the CPC serial number. Features that are common between systems are candidates to move.
Important: Upgrades from z114 and zBC12 are disruptive.

2.7.2 Concurrent PU conversions

Assigned CPs, assigned IFLs, and unassigned IFLs, ICFs, zIIPs, and SAPs can be converted to other assigned or unassigned feature codes.
Most PU conversions are nondisruptive. In exceptional cases, the conversion can be disruptive, such as when a Model N20 with six CPs is converted to an all IFL system. In addition, an LPAR might be disrupted when PUs must be freed before they can be converted. Conversion information is summarized in Table 2-11.
Table 2-11 Concurrent PU conversions
From \ To | CP | IFL | Unassigned IFL | ICF | zAAP | zIIP | SAP
CP | - | Yes | Yes | Yes | Yes | Yes | Yes
IFL | Yes | - | Yes | Yes | Yes | Yes | Yes
Unassigned IFL | Yes | Yes | - | Yes | Yes | Yes | Yes
ICF | Yes | Yes | Yes | - | Yes | Yes | Yes
zAAP | Yes | Yes | Yes | Yes | - | Yes | Yes
zIIP | Yes | Yes | Yes | Yes | Yes | - | Yes
SAP | Yes | Yes | Yes | Yes | Yes | Yes | -

2.7.3 Model capacity identifier

To recognize how many PUs are characterized as CPs, the store system information (STSI) instruction returns a model capacity identifier (MCI). The MCI determines the number and speed of characterized CPs. Characterization of a PU as an IFL, an ICF, or a zIIP is not reflected in the output of the STSI instruction because characterization has no effect on software charging. For more information about STSI output, see “Processor identification” on page 351.
Capacity identifiers: Within a z13s server, all CPs have the same capacity identifier. Specialty engines (IFLs, zAAPs, zIIPs, and ICFs) operate at full speed.