IBM System z9 Planning Manual


System z9
Processor Resource/Systems Manager Planning Guide
SB10-7041-03

Note
Before using this information and the product it supports, be sure to read the general information under “Safety and Environmental Notices” on page xi and Appendix C, “Notices,” on page C-1.
Fourth Edition (May 2008)
This edition, SB10-7041-03, applies to the IBM® System z9® Servers.
This edition replaces SB10-7041-02. A technical change to the text or illustration is indicated by a vertical line to the left of the change. There may be a newer version of this document in PDF format available on Resource Link™. Go to http://www.ibm.com/servers/resourcelink and click on Library on the navigation bar. A newer version is indicated by a lower-case alphabetic letter following the form number suffix (for example: 00a, 00b, 01a, 01b).
© Copyright International Business Machines Corporation 2005, 2008. All rights reserved.
US Government Users Restricted Rights - Use, duplication, or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . .ix
Safety and Environmental Notices . . . . . . . . . . . . . . . . .xi
Safety Notices . . . . . . . . . . . . . . . . . . . . . . . . .xi
World Trade Safety Information . . . . . . . . . . . . . . . . . .xi
Laser Safety Information . . . . . . . . . . . . . . . . . . . . . .xi
Laser Compliance . . . . . . . . . . . . . . . . . . . . . . .xi
Environmental Notices . . . . . . . . . . . . . . . . . . . . . .xi
Product Recycling and Disposal . . . . . . . . . . . . . . . . . .xi
Refrigeration . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Battery Return Program . . . . . . . . . . . . . . . . . . . . xiii
Flat Panel Display . . . . . . . . . . . . . . . . . . . . . . xiv
Monitors and Workstations . . . . . . . . . . . . . . . . . . . xiv
About This Publication . . . . . . . . . . . . . . . . . . . . . xvii
What is Included in this Publication . . . . . . . . . . . . . . . . . xix
Related Publications . . . . . . . . . . . . . . . . . . . . . . .xx
z/Architecture . . . . . . . . . . . . . . . . . . . . . . . .xx
Enterprise Systems Architecture/390 (ESA/390) . . . . . . . . . . . .xx
Hardware . . . . . . . . . . . . . . . . . . . . . . . . . .xx
Software . . . . . . . . . . . . . . . . . . . . . . . . . .xx
How to Send Your Comments . . . . . . . . . . . . . . . . . . . xxii
Summary of changes . . . . . . . . . . . . . . . . . . . . . xxiii
Chapter 1. Introduction to Logical Partitions . . . . . . . . . . . . . 1-1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Prerequisites for Operation . . . . . . . . . . . . . . . . . . . . 1-2
PR/SM . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
zSeries Parallel Sysplex Support . . . . . . . . . . . . . . . . . 1-4
Guest Coupling Simulation . . . . . . . . . . . . . . . . . . . 1-5
Control Program Support in a Logical Partition . . . . . . . . . . . . 1-5
Input/Output Configuration Program (IOCP) Support . . . . . . . . . 1-14
Hardware Support . . . . . . . . . . . . . . . . . . . . . . 1-14
Operator Training . . . . . . . . . . . . . . . . . . . . . . 1-14
Logical Partitions . . . . . . . . . . . . . . . . . . . . . . . 1-15
Characteristics . . . . . . . . . . . . . . . . . . . . . . . 1-15
Potential Applications . . . . . . . . . . . . . . . . . . . . . 1-18
Compatibility and Migration Considerations . . . . . . . . . . . . . . 1-20
Device Numbers . . . . . . . . . . . . . . . . . . . . . . 1-20
Multiple Subchannel Sets (MSS) . . . . . . . . . . . . . . . . . 1-20
Control Programs . . . . . . . . . . . . . . . . . . . . . . 1-20
CPU IDs and CPU Addresses . . . . . . . . . . . . . . . . . 1-22
HSA Allocation . . . . . . . . . . . . . . . . . . . . . . . . 1-24
HSA Estimation Tool . . . . . . . . . . . . . . . . . . . . . 1-24
TOD Clock Processing . . . . . . . . . . . . . . . . . . . . . 1-25
No Sysplex Timer Attached and Server Time Protocol Not Enabled . . . . 1-25
Sysplex Timer Attached . . . . . . . . . . . . . . . . . . . . 1-25
Server Time Protocol Enabled . . . . . . . . . . . . . . . . . 1-25
Sysplex Testing Without a Sysplex Timer and Server Time Protocol Not
Enabled . . . . . . . . . . . . . . . . . . . . . . . . . 1-26
Synchronized Time Source and the Coupling Facility . . . . . . . . . 1-26
Extended TOD-Clock Facility . . . . . . . . . . . . . . . . . . 1-26
Chapter 2. Planning Considerations . . . . . . . . . . . . . . . . 2-1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Planning the I/O Configuration . . . . . . . . . . . . . . . . . . . 2-3
Planning Considerations . . . . . . . . . . . . . . . . . . . . 2-3
Maximum Number of Logical Partitions . . . . . . . . . . . . . . . 2-5
Managing Logical Paths for ESCON and FICON Channels . . . . . . . 2-7
Managing the Establishment of Logical Paths . . . . . . . . . . . . 2-11
Shared Channel Overview . . . . . . . . . . . . . . . . . . . 2-21
Unshared ESCON or FICON Channel Recommendations . . . . . . . 2-27
Dynamically Managed CHPIDs . . . . . . . . . . . . . . . . . 2-27
IOCP Coding Specifications . . . . . . . . . . . . . . . . . . 2-28
Coupling Facility Planning Considerations . . . . . . . . . . . . . . 2-39
Test or Migration Coupling Configuration . . . . . . . . . . . . . . 2-39
Production Coupling Facility Configuration . . . . . . . . . . . . . 2-40
Internal Coupling Facility (ICF) . . . . . . . . . . . . . . . . . 2-41
Dynamic Internal Coupling Facility (ICF) Expansion . . . . . . . . . . 2-42
Enhanced Dynamic ICF Expansion Across ICFs . . . . . . . . . . . 2-43
System-Managed Coupling Facility Structure Duplexing . . . . . . . . 2-44
Single CPC Software Availability Sysplex . . . . . . . . . . . . . 2-44
Coupling Facility Nonvolatility . . . . . . . . . . . . . . . . . . 2-45
Coupling Facility Mode Setting . . . . . . . . . . . . . . . . . 2-45
Coupling Facility LP Definition Considerations . . . . . . . . . . . . 2-46
Coupling Facility LP Storage Planning Considerations . . . . . . . . . 2-47
Dump Space Allocation in a Coupling Facility . . . . . . . . . . . . 2-48
Coupling Facility LP Activation Considerations . . . . . . . . . . . . 2-49
Coupling Facility Shutdown Considerations . . . . . . . . . . . . . 2-49
Coupling Facility LP Operation Considerations . . . . . . . . . . . 2-49
Coupling Facility Control Code Commands . . . . . . . . . . . . . 2-50
Coupling Facility Level (CFLEVEL) Considerations . . . . . . . . . . 2-50
Coupling Facility Resource Management (CFRM) Policy Considerations 2-54
Coupling Facility Channels . . . . . . . . . . . . . . . . . . . 2-54
Considerations when Migrating from ICMF to ICs . . . . . . . . . . . 2-59
Linux Operating System Planning Considerations . . . . . . . . . . . 2-59
Integrated Facility for Linux (IFL) . . . . . . . . . . . . . . . . 2-59
z/VM Version 5 Utilizing IFL Features . . . . . . . . . . . . . . . 2-60
System z9 Application Assist Processor (zAAP) . . . . . . . . . . . . 2-60
IBM System z9 Integrated Information Processor (zIIP) . . . . . . . . . 2-61
Concurrent Patch . . . . . . . . . . . . . . . . . . . . . . . 2-61
CFCC Enhanced Patch Apply . . . . . . . . . . . . . . . . . . . 2-62
Dynamic Capacity Upgrade on Demand . . . . . . . . . . . . . . . 2-62
PR/SM Shared Partitions . . . . . . . . . . . . . . . . . . . 2-63
Mixed Shared and Dedicated PR/SM Partitions . . . . . . . . . . . 2-63
Multiple Dedicated PR/SM Partitions . . . . . . . . . . . . . . . 2-64
Shared Internal Coupling Facility . . . . . . . . . . . . . . . . 2-64
Dynamic Capacity Upgrade on Demand Limitations . . . . . . . . . . . 2-65
Concurrent Memory Upgrade . . . . . . . . . . . . . . . . . . . 2-66
Capacity Backup Upgrade (CBU) Capability . . . . . . . . . . . . . 2-66
Enhanced Book Availability . . . . . . . . . . . . . . . . . . . . 2-67
Preparing for Enhanced Book Availability . . . . . . . . . . . . . 2-67
Customer Initiated Upgrade (CIU) . . . . . . . . . . . . . . . . . 2-70
Concurrent Processor Unit Conversion . . . . . . . . . . . . . . . 2-70
Planning for Hot Plugging Crypto Features . . . . . . . . . . . . . . 2-71
Chapter 3. Determining the Characteristics of Logical Partitions . . . . . 3-1
Planning Overview . . . . . . . . . . . . . . . . . . . . . . . 3-4
Performance Considerations . . . . . . . . . . . . . . . . . . . 3-4
Recovery Considerations . . . . . . . . . . . . . . . . . . . . 3-5
Determining the Characteristics . . . . . . . . . . . . . . . . . . 3-5
Control Program Support . . . . . . . . . . . . . . . . . . . . 3-5
IOCDS Requirements . . . . . . . . . . . . . . . . . . . . . 3-6
Logical Partition Identifier . . . . . . . . . . . . . . . . . . . . 3-6
Mode of Operation . . . . . . . . . . . . . . . . . . . . . . 3-7
Storage Configurations . . . . . . . . . . . . . . . . . . . . . 3-7
Central Storage . . . . . . . . . . . . . . . . . . . . . . . 3-8
Expanded Storage . . . . . . . . . . . . . . . . . . . . . . 3-10
Dynamic Storage Reconfiguration . . . . . . . . . . . . . . . . 3-12
Number of Central Processors . . . . . . . . . . . . . . . . . 3-28
Central Processor Recommendations for Intelligent Resource Director (IRD) 3-29
Processor Considerations for Linux-Only LPs . . . . . . . . . . . . 3-30
Processor Considerations for Coupling Facility LPs . . . . . . . . . . 3-30
Processor Considerations for LPs with Multiple CP Types . . . . . . . 3-35
Dedicated Central Processors . . . . . . . . . . . . . . . . . 3-35
Shared Central Processors . . . . . . . . . . . . . . . . . . . 3-36
Enforcement of Processing Weights . . . . . . . . . . . . . . . 3-39
Defining Shared Channel Paths . . . . . . . . . . . . . . . . . 3-48
Dynamic CHPID Management (DCM) Considerations . . . . . . . . . 3-50
I/O Priority Recommendations . . . . . . . . . . . . . . . . . 3-50
Security-Related Controls . . . . . . . . . . . . . . . . . . . 3-50
Dynamic I/O Configuration . . . . . . . . . . . . . . . . . . . 3-52
Assigning Channel Paths to a Logical Partition . . . . . . . . . . . 3-52
Automatic Load for a Logical Partition . . . . . . . . . . . . . . . 3-55
Defining Logical Partitions . . . . . . . . . . . . . . . . . . . . 3-55
Global Reset Profile Definitions . . . . . . . . . . . . . . . . . 3-57
General . . . . . . . . . . . . . . . . . . . . . . . . . . 3-60
Processor Characteristics . . . . . . . . . . . . . . . . . . . 3-63
Security Characteristics . . . . . . . . . . . . . . . . . . . . 3-68
Establishing Optional Characteristics . . . . . . . . . . . . . . . 3-71
Storage Characteristics . . . . . . . . . . . . . . . . . . . . 3-73
Load Information . . . . . . . . . . . . . . . . . . . . . . 3-75
Cryptographic Characteristics . . . . . . . . . . . . . . . . . . 3-77
Enabling Input/Output Priority Queuing . . . . . . . . . . . . . . 3-81
Changing Logical Partition Input/Output Priority Queuing Values . . . . . 3-81
Moving Unshared Channel Paths . . . . . . . . . . . . . . . . . 3-83
Moving Unshared Channel Paths from a z/OS System . . . . . . . . 3-83
Moving a Channel Path from the Hardware Console . . . . . . . . . 3-83
Releasing Reconfigurable Channel Paths . . . . . . . . . . . . . 3-83
Configuring Shared Channel Paths . . . . . . . . . . . . . . . . . 3-84
Deconfiguring Shared Channel Paths . . . . . . . . . . . . . . . . 3-84
Removing Shared Channel Paths for Service . . . . . . . . . . . . 3-84
Changing Logical Partition Definitions . . . . . . . . . . . . . . . . 3-84
Changes Available Dynamically to a Running LP . . . . . . . . . . . 3-84
Changes Available at the Next LP Activation . . . . . . . . . . . . 3-85
Changes Available at the Next Power-On Reset (POR) . . . . . . . . 3-86
Chapter 4. Operating Logical Partitions . . . . . . . . . . . . . . . 4-1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Available Operator Controls . . . . . . . . . . . . . . . . . . . . 4-2
Operator Controls Not Available . . . . . . . . . . . . . . . . . . 4-4
Operator Tasks . . . . . . . . . . . . . . . . . . . . . . . . 4-5
Editing Activation Profiles . . . . . . . . . . . . . . . . . . . . 4-5
Activating a CPC . . . . . . . . . . . . . . . . . . . . . . . 4-5
Activating an LP . . . . . . . . . . . . . . . . . . . . . . . 4-5
Performing a Load on an LP or Activating a Load Profile . . . . . . . . 4-5
Deactivating an LP . . . . . . . . . . . . . . . . . . . . . . 4-6
Locking and Unlocking an LP . . . . . . . . . . . . . . . . . . 4-6
Deactivating a CPC . . . . . . . . . . . . . . . . . . . . . . 4-7
Chapter 5. Monitoring the Activities of Logical Partitions . . . . . . . . 5-1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Monitoring Logical Partition Activity . . . . . . . . . . . . . . . . . 5-2
Reviewing Current Storage Information . . . . . . . . . . . . . . . 5-2
Reviewing and Changing Current Channel Status . . . . . . . . . . . 5-3
Reviewing and Changing Current Logical Partition Controls . . . . . . . 5-3
Reviewing and Changing Current Logical Partition Group Controls . . . . 5-3
Reviewing and Changing Current Logical Partition Security . . . . . . . 5-5
Reviewing Current Logical Partition Cryptographic Controls . . . . . . . 5-6
Reviewing Current System Activity Profile Information . . . . . . . . . 5-6
Reviewing and Changing Logical Partition I/O Priority Values . . . . . . 5-7
Logical Partition Performance . . . . . . . . . . . . . . . . . . . 5-8
RMF LPAR Management Time Reporting . . . . . . . . . . . . . . 5-8
Dedicated and Shared Central Processors . . . . . . . . . . . . . 5-9
CPENABLE . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
Start Interpretive Execution (SIE) Performance . . . . . . . . . . . . 5-9
Recovery Strategy . . . . . . . . . . . . . . . . . . . . . . . 5-10
Operation Considerations . . . . . . . . . . . . . . . . . . . 5-10
Application Preservation . . . . . . . . . . . . . . . . . . . . 5-11
Transparent Sparing . . . . . . . . . . . . . . . . . . . . . 5-11
Appendix A. Coupling Facility Control Code Support . . . . . . . . . A-1
Legend . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Appendix B. Developing, Building, and Delivering a Certified System B-1
Creating Common Criteria-Based Evaluations . . . . . . . . . . . . . B-1
Functional Characteristics . . . . . . . . . . . . . . . . . . . . B-2
Trusted Configuration . . . . . . . . . . . . . . . . . . . . . . B-2
System z9 PR/SM Characteristics . . . . . . . . . . . . . . . . . B-4
Central and Expanded Storage . . . . . . . . . . . . . . . . . . B-4
I/O Security Considerations . . . . . . . . . . . . . . . . . . . . B-5
IOCDS Considerations . . . . . . . . . . . . . . . . . . . . . B-5
Operational Considerations . . . . . . . . . . . . . . . . . . . B-6
Input/Output Configuration Data Set (IOCDS) . . . . . . . . . . . . B-8
LPAR Input/Output Configurations . . . . . . . . . . . . . . . . B-8
Activation . . . . . . . . . . . . . . . . . . . . . . . . . B-9
Control Authority . . . . . . . . . . . . . . . . . . . . . . . B-9
Reconfiguring the System . . . . . . . . . . . . . . . . . . . B-10
Trusted Facility Library . . . . . . . . . . . . . . . . . . . . . B-14
Appendix C. Notices . . . . . . . . . . . . . . . . . . . . . . C-1
Trademarks and Service Marks . . . . . . . . . . . . . . . . . . C-2
Electronic Emission Notices . . . . . . . . . . . . . . . . . . . . C-3
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . D-1
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1

Figures

1-1. Characteristics of Logical Partitions . . . . . . . . . . . . . . . . . . . . . . 1-17
1-2. Migration of Four Production Systems to LPs . . . . . . . . . . . . . . . . . . 1-19
1-3. Support for Three XRF Systems . . . . . . . . . . . . . . . . . . . . . . . 1-19
1-4. CPU ID Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-22
1-5. CPU Identification Number Format . . . . . . . . . . . . . . . . . . . . . . 1-22
2-1. An ESCON Configuration that Can Benefit from Better Logical Path Management . . . . . 2-9
2-2. A Shared ESCON Configuration that Can Benefit from Better Logical Path Management 2-10
2-3. Deactivating Unneeded Logical Partitions . . . . . . . . . . . . . . . . . . . . 2-14
2-4. Configuring Offline Unneeded Channels or Shared Channels on an LP Basis . . . . . . . 2-15
2-5. Defining Devices to a Subset of Logical Partitions . . . . . . . . . . . . . . . . . 2-17
2-6. Defining Devices to a Subset of Logical Partitions . . . . . . . . . . . . . . . . . 2-18
2-7. Using the ESCD to Manage Logical Paths by Prohibiting Dynamic Connections . . . . . . 2-20
2-8. Progression of Busy Condition Management Improvements . . . . . . . . . . . . . 2-22
2-9. Consolidating ESCON Channels and ESCON Control Unit Ports . . . . . . . . . . . 2-24
2-10. Consolidating ESCON Channels and ESCD Ports . . . . . . . . . . . . . . . . . 2-25
2-11. Consolidating ESCON Channels Used for ESCON CTC Communications . . . . . . . . 2-26
2-12. Shared Devices Using Shared ESCON Channels . . . . . . . . . . . . . . . . . 2-31
2-13. Physical Connectivity of Shared Device 190 . . . . . . . . . . . . . . . . . . . 2-32
2-14. Logical View of Shared Device 190 . . . . . . . . . . . . . . . . . . . . . . 2-33
2-15. LPAR Configuration with Duplicate Device Numbers . . . . . . . . . . . . . . . . 2-34
2-16. Duplicate Device Numbers for Console . . . . . . . . . . . . . . . . . . . . . 2-35
2-17. Two Examples of Duplicate Device Number Conflicts . . . . . . . . . . . . . . . 2-36
2-18. Example of a Prepare for Enhanced Book Availability Results Panel . . . . . . . . . . 2-68
2-19. Reassign Non-Dedicated Processors Panel . . . . . . . . . . . . . . . . . . . 2-70
3-1. Central Storage Layout . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
3-2. Reconfigured Central Storage Layout . . . . . . . . . . . . . . . . . . . . . 3-15
3-3. Initial Central Storage Layout . . . . . . . . . . . . . . . . . . . . . . . . 3-16
3-4. Central Storage Layout Following Reconfiguration . . . . . . . . . . . . . . . . . 3-17
3-5. Initial Central Storage Layout . . . . . . . . . . . . . . . . . . . . . . . . 3-18
3-6. Central Storage Layout Following Reconfiguration . . . . . . . . . . . . . . . . . 3-19
3-7. Expanded Storage Layout . . . . . . . . . . . . . . . . . . . . . . . . . 3-20
3-8. Reconfigured Expanded Storage Layout . . . . . . . . . . . . . . . . . . . . 3-21
3-9. Expanded Storage Layout . . . . . . . . . . . . . . . . . . . . . . . . . 3-22
3-10. Initial Central Storage Layout . . . . . . . . . . . . . . . . . . . . . . . . 3-23
3-11. Initial Expanded Storage layout . . . . . . . . . . . . . . . . . . . . . . . 3-23
3-12. Central Storage Layout Following Reconfiguration . . . . . . . . . . . . . . . . . 3-24
3-13. Expanded Storage Layout Following Reconfiguration . . . . . . . . . . . . . . . . 3-25
3-14. Backup Partition Layout Before Nonspecific Deactivation . . . . . . . . . . . . . . 3-27
3-15. Backup Partition Layout After Nonspecific Deactivation . . . . . . . . . . . . . . . 3-27
3-16. Options Page, Reset Profile . . . . . . . . . . . . . . . . . . . . . . . . . 3-57
3-17. Partitions Page, Reset Profile . . . . . . . . . . . . . . . . . . . . . . . . 3-59
3-18. General Page, Image Profile . . . . . . . . . . . . . . . . . . . . . . . . 3-60
3-19. Time Offset, Image Profile . . . . . . . . . . . . . . . . . . . . . . . . . 3-61
3-20. ESA Mode Logical Partition with shared Central Processors (CPs), Integrated Facilities for
Applications (IFAs) and System z9 Integrated Information Processors (zIIPs) . . . . . . . 3-63
3-21. Customization for a Linux-Only Mode Logical Partition with shared Integrated Facilities for
Linux (IFLs). There can be both an initial and reserved specification for the IFLs. . . . . . 3-64
3-22. Customization for a Coupling Facility Mode Logical Partition with Dedicated Internal Coupling
Facilities (ICFs) and shared Central Processors. There can be both an initial and reserved
specification for the ICFs and the Central Processors. . . . . . . . . . . . . . . . 3-65
3-23. Security Page, Image Profile . . . . . . . . . . . . . . . . . . . . . . . . 3-69
3-24. Options Page, Image Profile . . . . . . . . . . . . . . . . . . . . . . . . . 3-71
3-25. Storage Page, Image Profile . . . . . . . . . . . . . . . . . . . . . . . . . 3-73
3-26. Load Page, Image Profile . . . . . . . . . . . . . . . . . . . . . . . . . . 3-75
3-27. Crypto Page, Image Profile . . . . . . . . . . . . . . . . . . . . . . . . . 3-77
3-28. Enabling I/O Priority Queuing . . . . . . . . . . . . . . . . . . . . . . . . 3-81
3-29. Change LPAR I/O Priority Queuing . . . . . . . . . . . . . . . . . . . . . . 3-81
5-1. Storage Information Task . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
5-2. Change Logical Partition Controls Page . . . . . . . . . . . . . . . . . . . . . 5-3
5-3. Change LPAR Group Controls . . . . . . . . . . . . . . . . . . . . . . . . 5-4
5-4. Change Logical Partition Security Page . . . . . . . . . . . . . . . . . . . . . 5-5
5-5. View LPAR Cryptographic Controls Page . . . . . . . . . . . . . . . . . . . . 5-6
5-6. Change LPAR I/O Priority Queuing Page . . . . . . . . . . . . . . . . . . . . 5-7
5-7. ETR Increasing with CPU Utilization . . . . . . . . . . . . . . . . . . . . . . 5-8

Tables

1. Terminology Used in This Publication . . . . . . . . . . . . . . . . . . . . . . xvii
1-1. CPU IDs for a z9 EC . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-23
2-1. HCD Function Support . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
2-2. z/VM Dynamic I/O Support for MIF and the Coupling Facility . . . . . . . . . . . . . 2-4
2-3. Logical Path Summary by Control Unit . . . . . . . . . . . . . . . . . . . . . 2-8
2-4. MIF Maximum Channel Requirements . . . . . . . . . . . . . . . . . . . . . 2-23
2-5. Nonvolatility Choices for Coupling Facility LPs . . . . . . . . . . . . . . . . . . 2-45
2-6. Coupling Facility Mode Setting . . . . . . . . . . . . . . . . . . . . . . . . 2-46
2-7. Coupling Facility LP Storage Definition Capabilities . . . . . . . . . . . . . . . . 2-48
2-8. CPC Support for Coupling Facility Code Levels . . . . . . . . . . . . . . . . . . 2-51
3-1. Control Program Support on z9 . . . . . . . . . . . . . . . . . . . . . . . . 3-6
3-2. Central Storage Granularity . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
3-3. PR/SM LPAR Processor Weight Management with Processor Resource Capping . . . . . . . 3-40
3-4. PR/SM LPAR Processor Weight Management without Processor Resource Capping . . . . . 3-41
3-5. Example of Maintaining Relative Weight of a Capped Logical Partition . . . . . . . . . 3-42
3-6. Example Selection of Crypto Numbers . . . . . . . . . . . . . . . . . . . . . 3-79
3-7. LPAR & Crypto Assignments . . . . . . . . . . . . . . . . . . . . . . . . 3-79
A-1. Coupling Facility Limits at Different Coupling Facility Code Levels . . . . . . . . . . . A-1
B-1. Trusted Facility Library for PR/SM . . . . . . . . . . . . . . . . . . . . . . B-14

Safety and Environmental Notices

Safety Notices

Safety notices may be printed throughout this guide. DANGER notices warn you of conditions or procedures that can result in death or severe personal injury. CAUTION notices warn you of conditions or procedures that can cause personal injury that is neither lethal nor extremely hazardous. Attention notices warn you of conditions or procedures that can cause damage to machines, equipment, or programs.
There are no DANGER notices in this guide.

World Trade Safety Information

Several countries require the safety information contained in product publications to be presented in their national languages. If this requirement applies to your country, a safety information booklet is included in the publications package shipped with the product. The booklet contains the safety information in your national language with references to the US English source. Before using a US English publication to install, operate, or service this IBM product, you must first become familiar with the related safety information in the booklet. You should also refer to the booklet any time you do not clearly understand any safety information in the US English publications.

Laser Safety Information

All System z models can use I/O cards such as PCI adapters, ESCON, FICON, Open Systems Adapter (OSA), InterSystem Coupling-3 (ISC-3), or other I/O features which are fiber optic based and utilize lasers or LEDs.

Laser Compliance

All lasers are certified in the U.S. to conform to the requirements of DHHS 21 CFR Subchapter J for class 1 laser products. Outside the U.S., they are certified to be in compliance with IEC 60825 as a class 1 laser product. Consult the label on each part for laser certification numbers and approval information.
CAUTION: Data processing environments can contain equipment transmitting on system links with laser modules that operate at greater than Class 1 power levels. For this reason, never look into the end of an optical fiber cable or open receptacle. (C027)
CAUTION:
This product contains a Class 1M laser. Do not view directly with optical instruments. (C028)

Environmental Notices

Product Recycling and Disposal

This unit must be recycled or discarded according to applicable local and national regulations. IBM encourages owners of information technology (IT) equipment to responsibly recycle their equipment when it is no longer needed. IBM offers a variety of product return programs and services in several countries to assist
equipment owners in recycling their IT products. Information on IBM product recycling offerings can be found on IBM’s Internet site at http://www.ibm.com/ibm/environment/products/index.shtml.
Esta unidad debe reciclarse o desecharse de acuerdo con lo establecido en la normativa nacional o local aplicable. IBM recomienda a los propietarios de equipos de tecnología de la información (TI) que reciclen responsablemente sus equipos cuando éstos ya no les sean útiles. IBM dispone de una serie de programas y servicios de devolución de productos en varios países, a fin de ayudar a los propietarios de equipos a reciclar sus productos de TI. Se puede encontrar información sobre las ofertas de reciclado de productos de IBM en el sitio web de IBM http://www.ibm.com/ibm/environment/products/index.shtml.
Notice: This mark applies only to countries within the European Union (EU) and Norway.
Appliances are labeled in accordance with European Directive 2002/96/EC concerning waste electrical and electronic equipment (WEEE). The Directive determines the framework for the return and recycling of used appliances as applicable throughout the European Union. This label is applied to various products to indicate that the product is not to be thrown away, but rather reclaimed upon end of life per this Directive.
In accordance with the European WEEE Directive, electrical and electronic equipment (EEE) is to be collected separately and to be reused, recycled, or recovered at end of life. Users of EEE with the WEEE marking per Annex IV of the WEEE Directive, as shown above, must not dispose of end of life EEE as unsorted municipal waste, but use the collection framework available to customers for the return, recycling, and recovery of WEEE. Customer participation is important to minimize any potential effects of EEE on the environment and human health due to the potential presence of hazardous substances in EEE. For proper collection and treatment, contact your local IBM representative.

Refrigeration

This system contains one or more modular refrigeration units with R-134A refrigerant and a polyol ester oil. This refrigerant must not be released or vented to the atmosphere. Skin contact with refrigerant may cause frostbite. Wear appropriate eye and skin protection. Modular refrigeration units are hermetically sealed and must not be opened or maintained.
This notice is provided in accordance with the European Union (EU) Regulation 842/2006 on fluorinated greenhouse gases. This product contains fluorinated greenhouse gases covered by the Kyoto Protocol. Per Annex I, Part 1, of EU Regulation 842/2006, the global warming potential of R-134A is 1300. Each unit contains 1.22 kg of R-134A.

Battery Return Program

This product may contain sealed lead acid, nickel cadmium, nickel metal hydride, lithium, or lithium ion battery(s). Consult your user manual or service manual for specific battery information. The battery must be recycled or disposed of properly. Recycling facilities may not be available in your area. For information on disposal of batteries outside the United States, go to http://www.ibm.com/ibm/environment/products/index.shtml or contact your local waste disposal facility.
In the United States, IBM has established a return process for reuse, recycling, or proper disposal of used IBM sealed lead acid, nickel cadmium, nickel metal hydride, and other battery packs from IBM Equipment. For information on proper disposal of these batteries, contact IBM at 1-800-426-4333. Please have the IBM part number listed on the battery available prior to your call.
For Taiwan:
Please recycle batteries
For the European Union:
Notice: This mark applies only to countries within the European Union (EU)
Batteries or packaging for batteries are labeled in accordance with European Directive 2006/66/EC concerning batteries and accumulators and waste batteries and accumulators. The Directive determines the framework for the return and recycling of used batteries and accumulators as applicable throughout the European Union. This label is applied to various batteries to indicate that the battery is not to be thrown away, but rather reclaimed upon end of life per this Directive.
Les batteries ou emballages pour batteries sont étiquetés conformément aux directives européennes 2006/66/EC, norme relative aux batteries et accumulateurs en usage et aux batteries et accumulateurs usés. Les directives déterminent la marche à suivre en vigueur dans l'Union Européenne pour le retour et le recyclage des batteries et accumulateurs usés. Cette étiquette est appliquée sur diverses batteries pour indiquer que la batterie ne doit pas être mise au rebut mais plutôt récupérée en fin de cycle de vie selon cette norme.
In accordance with the European Directive 2006/66/EC, batteries and accumulators are labeled to indicate that they are to be collected separately and recycled at end of life. The label on the battery may also include a chemical symbol for the metal concerned in the battery (Pb for lead, Hg for mercury, and Cd for cadmium). Users of batteries and accumulators must not dispose of batteries and accumulators as unsorted municipal waste, but use the collection framework available to customers for the return, recycling, and treatment of batteries and accumulators. Customer participation is important to minimize any potential effects of batteries and accumulators on the environment and human health due to the potential presence of hazardous substances. For proper collection and treatment, contact your local IBM representative.
For Spain:
This notice is provided in accordance with Royal Decree 106/2008. The retail price of batteries, accumulators, and power cells includes the cost of the environmental management of their waste.
For California:
Perchlorate Material - special handling may apply. See http://www.dtsc.ca.gov/hazardouswaste/perchlorate.
The foregoing notice is provided in accordance with California Code of Regulations Title 22, Division 4.5, Chapter 33. Best Management Practices for Perchlorate Materials. This product, part, or both may include a lithium manganese dioxide battery which contains a perchlorate substance.

Flat Panel Display

The fluorescent lamp or lamps in the liquid crystal display contain mercury. Dispose of it as required by local ordinances and regulations.

Monitors and Workstations

New Jersey: For information about recycling covered electronic devices in the State of New Jersey, go to the New Jersey Department of Environmental Protection Web site at http://www.state.nj.us/dep/dshw/recycle/Electronic_Waste/index.html.
Oregon: For information regarding recycling covered electronic devices in the State of Oregon, go to the Oregon Department of Environmental Quality Web site at http://www.deq.state.or.us/lq/electronics.htm.
Washington: For information about recycling covered electronic devices in the State of Washington, go to the Department of Ecology Web site at http://www.ecy.wa.gov/programs/swfa/eproductrecycle, or telephone the Washington Department of Ecology at 1-800-RECYCLE.

About This Publication

This publication is intended for system planners, installation managers, and other technical support personnel who need to plan for operating in logically partitioned (LPAR) mode on the IBM System z9 Enterprise Class (z9 EC) and IBM System z9 Business Class (z9 BC).
This publication assumes previous knowledge of the characteristics and functions of the installed central processor complex (CPC).
To improve readability, we refer to the different CPCs using the following terminology whenever possible:
Table 1. Terminology Used in This Publication
Terminology   Central Processor Complex (CPC)
z9 BC         IBM System z9 Business Class: Model R07, Model S07
z9 EC         IBM System z9 Enterprise Class: Model S08, Model S18, Model S28, Model S38, Model S54
Some features, panels, and functions are model-dependent, engineering change (EC) level-dependent, machine change level-dependent (MCL-dependent), or control program-dependent. For this reason, not all of the functions discussed in this publication are necessarily available on every CPC.
Sample tasks and panels explained in this publication reference tasks and panels available from the support element console. However, detailed procedures for operator tasks and accurate task and panel references are explained in the Support Element Operations Guide.
Some illustrations and examples in this publication describe operation with as few as 2 logical partitions (LPs), although up to 60 LPs can be defined.
Hardware Management Console operators or support element console operators should use the appropriate operations guide for instructions on how to perform tasks. Control program operators should refer to the appropriate control program publication for information on control program commands.
For information about PR/SM™ LPAR mode on prior models, see the following publications:
v System z9 109 Processor Resource/Systems Manager Planning Guide, SB10-7041-00
v zSeries 890 and 990 Processor Resource/Systems Manager Planning Guide, SB10-7036
v zSeries 800 and 900 Processor Resource/Systems Manager Planning Guide, SB10-7033
v S/390® Processor Resource/Systems Manager Planning Guide, GA22-7236
v ES/9000® Processor Resource/Systems Manager Planning Guide, GA22-7123
However, for the most current coupling facility control code information for all models, use this publication.

What is Included in this Publication

The information presented in this publication is organized as follows:
v Chapter 1, “Introduction to Logical Partitions” describes the prerequisites for
establishing and using LPAR, the general characteristics and some potential applications for LPs.
v Chapter 2, “Planning Considerations” presents considerations and guidelines for
I/O configuration planning and coupling facility planning.
v Chapter 3, “Determining the Characteristics of Logical Partitions” includes a list of
the panels, provides guidelines for determining the CPC resources, and describes the operator tasks used to define the characteristics of LPs.
v Chapter 4, “Operating Logical Partitions” describes how to operate the Hardware
Management Console and the support element console, and describes the procedure for initializing the system.
v Chapter 5, “Monitoring the Activities of Logical Partitions” describes the panels
and operator tasks used to monitor LP activity.
v Appendix A, “Coupling Facility Control Code Support,” on page A-1 lists and
explains the support provided at different levels of coupling facility control code Licensed Internal Code (LIC).
v Appendix B, “Developing, Building, and Delivering a Certified System,” on page
B-1 provides guidance in setting up, operating, and managing a secure consolidated environment using System z9 PR/SM.
v Appendix C, “Notices,” on page C-1 contains electronic emission notices, legal
notices, and trademarks.

Related Publications

The following publications provide information about the functions and characteristics of the different CPCs and the related operating systems that run on them.
z/Architecture®
v z/Architecture Principles of Operation, SA22-7832

Enterprise Systems Architecture/390® (ESA/390)

v Enterprise Systems Architecture/390 Principles of Operation, SA22-7201

Hardware

System z9 BC
v System z9 Business Class System Overview, SA22-1083
v Input/Output Configuration Program User’s Guide, SB10-7037
v Stand-alone IOCP User’s Guide, SB10-7152
v Hardware Management Console Operations Guide, SC28-6859
v Support Element Operations Guide, SC28-6860
System z9 EC
v System z9 Enterprise Class System Overview, SA22-6833
v Input/Output Configuration Program User’s Guide, SB10-7037
v Stand-alone IOCP User’s Guide, SB10-7152
v Hardware Management Console Operations Guide, SC28-6859
v Support Element Operations Guide, SC28-6860
ESCON® Concepts
v Introducing Enterprise Systems Connection, GA23-0383
v ESCON and FICON Channel-to-Channel Reference, SB10-7034

FICON®
v ESCON and FICON Channel-to-Channel Reference, SB10-7034
Crypto Features
The following publications provide additional information on the Crypto features:
v Support Element Operations Guide, SC28-6860
v zSeries User Defined Extensions Reference and Guide, website: http://www.ibm.com/security/cryptocards (Select a crypto card, and then click Library)
v IBM System z9 109 Technical Guide, SG24-7124

Software

z/OS®
zSeries Parallel Sysplex: The following publications provide additional information about the zSeries in the z/OS environment:
v z/OS Parallel Sysplex Overview, SA22-7661
v z/OS Parallel Sysplex Application Migration, SA22-7662
v z/OS MVS Setting Up a Sysplex, SA22-7625
v z/OS MVS Programming: Sysplex Services Guide, SA22-7617
v z/OS MVS Programming: Sysplex Services Reference, SA22-7618
Multiple Image Facility: The following publication provides additional information about the multiple image facility in the z/OS environment:
v z/OS Hardware Configuration Definition: User’s Guide, SC33-7988
Dynamic I/O Configuration: The following publication provides information about dynamic I/O configuration in the z/OS environment:
v z/OS Hardware Configuration Definition Planning, GA22-7525
Dynamic Storage Reconfiguration: The following publications provide additional information on the commands, functions, and capabilities of dynamic storage reconfiguration in the z/OS environment:
v z/OS MVS Initialization and Tuning Reference, SA22-7592
v z/OS MVS Recovery and Reconfiguration Guide, SA22-7623
v z/OS MVS System Commands, SA22-7627
Crypto Features: The following publications provide additional information on the Crypto features:
v z/OS ICSF Administrator’s Guide, SA22-7521
v z/OS ICSF System Programmer’s Guide, SA22-7520
v z/OS ICSF Trusted Key Entry PCIX Workstation User’s Guide, SA22-7524
Sysplex Failure Manager: The following publication provides an overview of SFM and practical information for implementing and using SFM in the z/OS environment:
v z/OS MVS Setting Up a Sysplex, SA22-7625
LPAR Management Time: The following publication provides information about the RMF™ Partition Data Report that includes LPAR Management Time reporting in a z/OS environment:
v z/OS Resource Measurement Facility User’s Guide, SC33-7990
Intelligent Resource Director (IRD): The following publication provides information about Intelligent Resource Director in a z/OS environment:
v z/OS Intelligent Resource Director, SG24-5952
z/VM®
Hardware Configuration Definition (HCD): The following publication provides information about the Hardware Configuration Definition (HCD):
v z/VM I/O Configuration, SC24-6100
Hardware Configuration Manager: The following publication provides information about the Hardware Configuration Manager:
v z/OS and z/VM Hardware Configuration Manager User’s Guide, SC33-7989
Dynamic I/O Configuration: The following publications provide information about dynamic I/O configuration:
v CP Planning and Administration, SC24-6083
v z/VM I/O Configuration, SC24-6100
Guest Operating Systems: The following publication provides information about running guest operating systems:
v z/VM Running Guest Operating Systems, SC24-6115

How to Send Your Comments

Your feedback is important in helping to provide the most accurate and high-quality information. Send your comments by using Resource Link at http://www.ibm.com/servers/resourcelink. Select Feedback on the navigation bar on the left. Be sure to include the name of the book, the form number of the book, the version of the book, if applicable, and the specific location of the text you are commenting on (for example, a page number or table number).

Summary of changes

Summary of Changes for SB10-7041-03:
This revision contains editorial changes and the following new technical changes:
New Information
v Information has been added about InfiniBand® channel paths (TYPE=CIB), the host channel adapter (HCA), and InfiniBand coupling links for Parallel Sysplex.
v Control Program support information has been updated.
Summary of Changes for SB10-7041-02:
This revision contains editorial changes and the following new technical changes:
New Information
v Some Image/Activation Profile Panels have been re-designed, resulting in
performance improvements.
v Change LPAR Group Controls - new task added to the Hardware Management
Console and the Support Element Console. This task allows groups of LPARs to have a capacity value that applies to the entire group, and not just the individual image.
Summary of Changes for SB10-7041-01a:
This revision contains editorial changes and the following new technical changes:
New Information
v Information regarding Server Time Protocol (STP) was added.
Summary of Changes for SB10-7041-01:
This revision contains editorial changes and the following new technical changes.
New Information
v This book was updated to include information for both the IBM System z9
Enterprise Class (z9 EC) and the IBM System z9 Business Class (z9 BC).
v Support was added for the IBM System z9 Integrated Information Processor, or
zIIP. This is the latest customer-inspired specialty engine for System z9.
Summary of Changes for SB10-7041-00a:
This revision contains editorial changes and the following new technical changes.
Changed Information
v Updates were made to Appendix B, “Developing, Building, and Delivering a
Certified System.”

Chapter 1. Introduction to Logical Partitions

Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Prerequisites for Operation . . . . . . . . . . . . . . . . . . . . 1-2
PR/SM . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Logical Partitioning . . . . . . . . . . . . . . . . . . . . . 1-2
Central Storage . . . . . . . . . . . . . . . . . . . . . . 1-2
Expanded Storage . . . . . . . . . . . . . . . . . . . . . 1-2
Central Processors . . . . . . . . . . . . . . . . . . . . . 1-3
Multiple Image Facility . . . . . . . . . . . . . . . . . . . . 1-3
Crypto Features . . . . . . . . . . . . . . . . . . . . . . 1-3
Configurable Single Processor Crypto Express2 . . . . . . . . . . 1-4
CP Crypto Assist Functions . . . . . . . . . . . . . . . . . . 1-4
zSeries Parallel Sysplex Support . . . . . . . . . . . . . . . . . 1-4
Guest Coupling Simulation . . . . . . . . . . . . . . . . . . . 1-5
Control Program Support in a Logical Partition . . . . . . . . . . . . 1-5
z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
z/VM . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
z/VSE . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
TPF (Transaction Processing Facility) . . . . . . . . . . . . . . 1-11
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . 1-13
Hardware Configuration Definition (HCD) . . . . . . . . . . . . 1-13
z/VM Dynamic I/O Configuration . . . . . . . . . . . . . . . . 1-14
Input/Output Configuration Program (IOCP) Support . . . . . . . . . 1-14
Hardware Support . . . . . . . . . . . . . . . . . . . . . . 1-14
Operator Training . . . . . . . . . . . . . . . . . . . . . . 1-14
Logical Partitions . . . . . . . . . . . . . . . . . . . . . . . 1-15
Characteristics . . . . . . . . . . . . . . . . . . . . . . . 1-15
Potential Applications . . . . . . . . . . . . . . . . . . . . . 1-18
Examples of Logical Partition Applications . . . . . . . . . . . . 1-19
Compatibility and Migration Considerations . . . . . . . . . . . . . . 1-20
Device Numbers . . . . . . . . . . . . . . . . . . . . . . 1-20
Multiple Subchannel Sets (MSS) . . . . . . . . . . . . . . . . . 1-20
Control Programs . . . . . . . . . . . . . . . . . . . . . . 1-20
z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . 1-20
EREP . . . . . . . . . . . . . . . . . . . . . . . . . 1-21
CPU IDs and CPU Addresses . . . . . . . . . . . . . . . . . 1-22
CPU ID Fields . . . . . . . . . . . . . . . . . . . . . . 1-22
Examples of CPU ID Information . . . . . . . . . . . . . . . 1-23
HSA Allocation . . . . . . . . . . . . . . . . . . . . . . . . 1-24
HSA Estimation Tool . . . . . . . . . . . . . . . . . . . . . 1-24
TOD Clock Processing . . . . . . . . . . . . . . . . . . . . . 1-25
No Sysplex Timer Attached and Server Time Protocol Not Enabled . . . . 1-25
Sysplex Timer Attached . . . . . . . . . . . . . . . . . . . . 1-25
Server Time Protocol Enabled . . . . . . . . . . . . . . . . . 1-25
Sysplex Testing Without a Sysplex Timer and Server Time Protocol Not
Enabled . . . . . . . . . . . . . . . . . . . . . . . . . 1-26
Synchronized Time Source and the Coupling Facility . . . . . . . . . 1-26
Extended TOD-Clock Facility . . . . . . . . . . . . . . . . . . 1-26

Overview

This chapter introduces the characteristics of logical partitioning and discusses migration and compatibility considerations. Processor Resource/Systems Manager™ (PR/SM) is standard on all z9 EC and z9 BC models.

Prerequisites for Operation

The prerequisites for operation are:
v Programming compatibility
v Hardware support
v Operator training
v Programming support
  - Control program support
  - Input/Output Configuration Program (IOCP) support

PR/SM

PR/SM enables logical partitioning of the central processor complex (CPC).
Logical Partitioning
PR/SM enables the logical partitioning function of the CPC. The operator defines the resources that are to be allocated to each logical partition (LP). Most resources can be reconfigured without requiring a power-on reset. After an ESA/390, ESA/390 TPF, or Linux®-Only LP is defined and activated, you can load a supported control program into that LP. If a coupling facility logical partition is defined and activated, the coupling facility control code is automatically loaded into the LP.
Central Storage
Central storage is defined to LPs before LP activation. When an LP is activated, the storage resources are allocated in contiguous blocks. These allocations can be dynamically reconfigured. Sharing of allocated central storage among multiple LPs is not allowed.
On System z9, all storage is defined as central storage. Allocation of storage to logical partitions can be made as either central storage or expanded storage. Any allocation of expanded storage to an LP reduces the amount of storage available for allocation as central storage. See “Single Storage Pool” on page 3-7. If no storage is allocated as expanded storage and no other LP is currently active, an individual LP can have a central storage amount equal to the installed storage minus the size of the hardware system area (HSA). The sum total of all LP central and expanded storage cannot exceed the amount of installed storage minus the HSA. Storage allocated to LPs never includes the HSA. Central storage in excess of 2 GB should only be allocated to an LP running an operating system capable of utilizing 64-bit z/Architecture.
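The storage rule above can be illustrated with a short calculation. The following Python sketch is illustrative only; the installed storage, HSA size, and LP definitions are arbitrary example values rather than actual z9 configuration figures. It simply checks that the sum of all LP central and expanded storage definitions fits within installed storage minus the HSA:

```python
# Illustrative sketch only: the installed storage, HSA size, and LP
# definitions below are arbitrary example values, not z9 configuration data.

GB = 1024  # work in megabytes; 1 GB = 1024 MB

installed_storage = 64 * GB   # total storage installed on the CPC
hsa_size = 2 * GB             # hardware system area (never allocated to LPs)

# Each LP definition: (central storage, expanded storage), in MB
lp_definitions = {
    "LPAR1": (24 * GB, 8 * GB),
    "LPAR2": (16 * GB, 4 * GB),
    "LPAR3": (8 * GB, 0),
}

available = installed_storage - hsa_size
total_defined = sum(central + expanded
                    for central, expanded in lp_definitions.values())

print(f"Available for LP storage: {available // GB} GB")
print(f"Defined across all LPs:   {total_defined // GB} GB")
if total_defined > available:
    print("These definitions exceed installed storage minus the HSA.")
else:
    print("These definitions fit within installed storage minus the HSA.")
```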
Expanded Storage
Optional expanded storage is defined to LPs before LP activation. When an LP is activated, the storage resources are allocated in contiguous blocks. These allocations can be dynamically reconfigured. Sharing of allocated expanded storage among multiple LPs is not allowed.
See “Expanded Storage” on page 3-10.
Central Processors
Central processors (CPs) can be dedicated to a single LP or shared among multiple LPs. CPs are allocated to an LP when the LP is activated. You can use operator tasks to limit and modify the use of CP resources shared between LPs while the LPs are active.
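As a simple illustration of how shared CP resources are apportioned, the short Python sketch below applies only the basic proportional rule (an LP's processing weight divided by the sum of the weights of the active sharing LPs); the LP names, weights, and CP count are hypothetical, and real PR/SM behavior also involves capping and the other controls described later in this guide:

```python
# Illustrative sketch: hypothetical LP names and processing weights.
# Shows only the basic proportional-weight rule; capping, dedicated CPs,
# and other PR/SM controls are ignored.

shared_physical_cps = 6

# Processing weights of the active LPs sharing the physical CPs
lp_weights = {"LPAR1": 300, "LPAR2": 200, "LPAR3": 100}

total_weight = sum(lp_weights.values())
for lp, weight in lp_weights.items():
    share = weight / total_weight
    print(f"{lp}: weight {weight:>3} -> {share:6.1%} of the shared CPs "
          f"(about {share * shared_physical_cps:.2f} physical CPs)")
```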
Multiple Image Facility
The multiple image facility (MIF) is available on all CPCs discussed in this publication. MIF allows channel sharing among LPs. For information about accessing devices on shared channel paths and defining shared channel paths, see “Defining Shared Channel Paths” on page 3-48.
MCSS: Multiple Logical Channel Subsystems (CSS) are available on all CPCs
discussed in this publication. Each CSS supports a definition of up to 256 channels.
Channel Paths: Active LPs can share channels. Shared channels require that the channel subsystem establish a logical path for each channel image corresponding to an active LP that has the channel configured online. CNC, CTC, OSC, OSD, OSE, OSN, CBP, CFP, CIB, ICP, FC, FCV, FCP, and IQD channel path types can be shared. CVC and CBY channel paths cannot be shared.
For information about accessing devices on shared channel paths and defining shared channel paths, see “Defining Shared Channel Paths” on page 3-48.
Crypto Features
An optional feature, Crypto Express2, is available, which is designed for Federal Information Processing Standard (FIPS) 140-2 Level 4 Certification. Crypto Express2 contains two PCI-X adapters, each of which can be configured independently as either a coprocessor or an accelerator. The default is coprocessor. Crypto Express2 can be configured using the Cryptographic Configuration panel found under the Configuration tasks list.
Crypto Express2 Coprocessor (CEX2C) replaces the functions of the PCI X Crypto Coprocessor (PCIXCC), which was available on some previous zSeries processors, such as the z890 and z990.
Crypto Express2 Accelerator (CEX2A) replaces the functions of the PCI Crypto Accelerator (PCICA), which was available on some previous zSeries processors such as the z890 and z990.
Crypto Express2 Coprocessor is used for secure key encrypted transactions, and is the default configuration. CEX2C:
v Supports highly secure cryptographic functions, use of secure encrypted key
values, and User Defined Extensions (UDX).
v Is designed for Federal Information Processing Standard (FIPS) 140-2 Level 4
certification.
Crypto Express2 Accelerator is used for SSL acceleration. CEX2A:
v Supports clear key RSA acceleration.
v Offloads compute-intensive RSA public-key and private-key cryptographic operations employed in the SSL protocol.
Configurable Single Processor Crypto Express2
Available exclusively for the z9 BC, the Crypto Express2-1P feature has one PCI-X adapter. The PCI-X adapter can be defined as either a Coprocessor or an Accelerator. A minimum of two features must be ordered:
v Crypto Express2-1P Coprocessor - for secure-key encrypted transactions (default).
  - Supports highly secure cryptographic functions, use of secure encrypted key values, and User Defined Extensions (UDX).
  - Is designed for Federal Information Processing Standard (FIPS) 140-2 Level 4 certification.
v Crypto Express2-1P Accelerator - for Secure Sockets Layer (SSL) acceleration.
  - Supports clear key RSA acceleration.
  - Offloads compute-intensive RSA public-key and private-key cryptographic operations employed in the SSL protocol.
CP Crypto Assist Functions
CP Crypto Assist Functions (CPACF), supporting clear key encryption, offers the following on every Processor Unit (PU) identified as a Central Processor (CP), Integrated Facility for Linux (IFL), System z9 Integrated Information Processor (zIIP), or System z9 Application Assist Processor (zAAP):
v Data Encryption Standard (DES)
v Triple Data Encryption Standard (TDES)
v Advanced Encryption Standard (AES)
v SHA-1
v SHA-256
v Pseudo Random Number Generation (PRNG)

zSeries Parallel Sysplex® Support

Parallel sysplex makes use of a broad range of hardware and software products to process in parallel a transaction processing workload across multiple z/OS images running in a sysplex and sharing data in a coupling facility.
Parallel sysplex allows you to manage a transaction processing workload, balanced across multiple z/OS images running on multiple CPCs, as a single data management system. It also offers workload availability and workload growth advantages.
The parallel sysplex enhances the capability to continue workload processing across scheduled and unscheduled outages of individual CPCs participating in a sysplex using a coupling facility by making it possible to dynamically reapportion the workload across the remaining active sysplex participants. Additionally, you can dynamically add processing capacity (CPCs or LPs) during peak processing without disrupting ongoing workload processing.
CPC support consists of having the capability to do any or all of the following:
v Install coupling facility channels
v Define, as an LP, a portion or all of the CPC hardware resources (central processors, storage, and coupling facility channels) for use as a coupling facility that connects to z/OS images for data sharing purposes
v Connect to a coupling facility to share data
v Install a z9 BC or z9 EC with only ICFs (1 or more) to operate as a stand-alone coupling facility; it cannot run z/OS or any other operating system
For more information on the coupling facility including z/OS and CPC support for coupling facility levels, see “Coupling Facility Planning Considerations” on page 2-39.

Guest Coupling Simulation

Guest coupling simulation is available with z/VM.
z/VM guest coupling simulation allows you to simulate one or more complete parallel sysplexes within a single system image, providing a test environment for parallel sysplex installation. The simulated environment is not intended for production use since its single points of failure diminish the availability advantages of the parallel sysplex environment. There are no special hardware requirements: external coupling facility channels, external coupling facilities, and Sysplex Timers are neither necessary nor supported. Guest operating systems within a simulated sysplex can only be coupled (through simulated coupling facility channels) to coupling facilities also running as guests of the same z/VM system. You can have up to 32 virtual machines running z/OS within a simulated sysplex, with each z/OS virtual machine coupled to up to 8 virtual machines running as coupling facilities.
There is no system-imposed limit to the number of guest parallel sysplex environments that z/VM can simulate. However, practical limits on the number of guests that can be supported by a particular hardware configuration will also constrain the number of simulated parallel sysplex environments.

Control Program Support in a Logical Partition

Control programs require certain characteristics. Before planning or defining LP characteristics, call your installation management to determine which control programs are in use or planned for operation.
Notes:
1. Use IBM Service Link to view the appropriate PSP bucket subset ID for hardware and software maintenance information.
2. For more detailed information about support for coupling facility levels (including hardware EC, driver, and MCL numbers and software APAR numbers), see “Coupling Facility Level (CFLEVEL) Considerations” on page 2-50.
z/OS
z/OS Release 9 Support
v z/OS Management Console
v CHPID type OSA performance enhancements
v Enhancements to On/Off Capacity on Demand
v Enhancements to z9 BC Capacity Backup
v Group Capacity Limit for LPAR
v LDAP Support for HMC user authentication
v FICON Express4-2C SX on the z9 BC
v CFLEVEL 15 support (with applicable PTF)
v System-initiated CHPID reconfiguration
v Multipath IPL
v OSA-Express Network Traffic Analyzer
v QDIO Diagnostic Synchronization
v Power monitoring
v Hardware Decimal Floating Point facilities
v Support for Server Time Protocol (STP)
v Support for z9 BC and z9 EC
v Multiple Subchannel Sets (MSS) for ESCON (CHPID type CNC) and FICON (CHPID type FC)
v Support for FICON Express2 (CHPID type FC), including Channel-To-Channel (CTC)
v OSA Express2 large send (also called TCP segmentation offload)
v Support for zAAP
v Support for zIIP
v Support for up to 54 CPs on z9 models
v System-Managed CF Structure Duplexing support
v Support for multiple LCSS's
v XES Coupling Facility cache structure architecture extensions for Batch Write, Castout, and Cross-Invalidate functions
v z/Architecture 64-bit addressing
v 1 TB of central storage on z9 models
v 256 channel paths
v Coupling facility level 14
v Support for Message Time Ordering providing enhanced scalability of parallel sysplex
v Support for z ELC software pricing structure
v Support for zSeries PCI Cryptographic Accelerator feature by the Integrated Cryptographic Services Facility and System SSL functions
v Support for zSeries PCIX Cryptographic Coprocessor feature by the Integrated Cryptographic Services Facility and System SSL functions
v Support for Crypto Express2 and Crypto Express2-1P (only available on the z9 BC), providing access to the secure key and System SSL functions via the Integrated Cryptographic Services Facility
v Intelligent Resource Director (IRD)
v WorkLoad Pricing
v Peer mode channels (ICP, CFP, CBP)
v WorkLoad Manager (WLM) Multisystem Enclaves support
v XES Coupling Facility List Structure Architecture Extensions [also known as Message and Queuing (MQ) Series]
v Logical partition time offset
v Internal coupling facility channels
v System Managed Rebuild
z/OS Release 8 Support
v CHPID type OSA performance enhancements
v Enhancements to On/Off Capacity on Demand
v Enhancements to z9 BC Capacity Backup
v Group Capacity Limit for LPAR
v LDAP Support for HMC user authentication
v FICON Express4-2C SX on the z9 BC
v CFLEVEL 15 support (with applicable PTF)
v System-initiated CHPID reconfiguration
v Multipath IPL
v OSA-Express Network Traffic Analyzer
v QDIO Diagnostic Synchronization
v Power monitoring
v Hardware Decimal Floating Point facilities
v Support for Server Time Protocol (STP)
v Support for z9 BC and z9 EC
v Multiple Subchannel Sets (MSS) for ESCON (CHPID type CNC) and FICON (CHPID type FC)
v Support for FICON Express2 (CHPID type FC), including Channel-To-Channel (CTC)
v OSA Express2 large send (also called TCP segmentation offload)
v Support for zAAP
v Support for zIIP
v Support for up to 32 CPs
v System-Managed CF Structure Duplexing support
v Support for multiple LCSS's
v XES Coupling Facility cache structure architecture extensions for Batch Write, Castout, and Cross-Invalidate functions
v z/Architecture 64-bit addressing
v 1 TB of central storage on z9 models
v 256 channel paths
v Coupling facility level 14
v Support for Message Time Ordering providing enhanced scalability of parallel sysplex
v Support for z ELC software pricing structure
v Support for zSeries PCI Cryptographic Accelerator feature by the Integrated Cryptographic Services Facility and System SSL functions
v Support for zSeries PCIX Cryptographic Coprocessor feature by the Integrated Cryptographic Services Facility and System SSL functions
v Support for Crypto Express2 and Crypto Express2-1P (only available on the z9 BC), providing access to the secure key and System SSL functions via the Integrated Cryptographic Services Facility
v Intelligent Resource Director (IRD)
v WorkLoad Pricing
v Peer mode channels (ICP, CFP, CBP)
v WorkLoad Manager (WLM) Multisystem Enclaves support
v XES Coupling Facility List Structure Architecture Extensions [also known as Message and Queuing (MQ) Series]
v Logical partition time offset
v Internal coupling facility channels
v System Managed Rebuild
z/OS Release 7 Support
v Multipath IPL
v System-initiated CHPID reconfiguration
v CFLEVEL 15 support (with applicable PTF)
v FICON Express4-2C SX on the z9 BC
v Hardware Decimal Floating Point facilities
v Support for Server Time Protocol (STP)
v Support for z9 BC and z9 EC
v Multiple Subchannel Sets (MSS) for ESCON (CHPID type CNC) and FICON (CHPID type FC)
v Support for FICON Express2 (CHPID type FC), including Channel-To-Channel (CTC)
v OSA Express2 large send (also called TCP segmentation offload)
v Support for zAAP
v Support for zIIP
v Support for up to 32 CPs
v System-Managed CF Structure Duplexing support
v Support for multiple LCSS's
v XES Coupling Facility cache structure architecture extensions for Batch Write, Castout, and Cross-Invalidate functions
v z/Architecture 64-bit addressing
v 128 GB of central storage
v 256 channel paths
v Coupling facility level 14
v Support for Message Time Ordering providing enhanced scalability of parallel sysplex
v Support for z ELC software pricing structure
v Support for zSeries PCI Cryptographic Accelerator feature by the Integrated Cryptographic Services Facility and System SSL functions
v Support for zSeries PCIX Cryptographic Coprocessor feature by the Integrated Cryptographic Services Facility and System SSL functions
v Support for Crypto Express2 and Crypto Express2-1P (only available on the z9 BC), providing access to the secure key and System SSL functions via the Integrated Cryptographic Services Facility
v Intelligent Resource Director (IRD)
v WorkLoad Pricing
v Peer mode channels (ICP, CFP, CBP)
v WorkLoad Manager (WLM) Multisystem Enclaves support
v XES Coupling Facility List Structure Architecture Extensions [also known as Message and Queuing (MQ) Series]
v Logical partition time offset
v Internal coupling facility channels
v System Managed Rebuild
z/OS Release 6 Support
v System-initiated CHPID reconfiguration (with applicable PTF)
v Multipath IPL (with applicable PTF)
v CFLEVEL 15 support (with applicable PTF)
v FICON Express4-2C SX on the z9 BC (with applicable PTF)
v Hardware Decimal Floating Point facilities (with applicable PTF)
v Support for z9 BC and z9 EC (with applicable PTFs)
v Support for FICON Express2 (CHPID type FC), including Channel-To-Channel (CTC)
v OSA Express2 large send (also called TCP segmentation offload)
v Support for zAAP
v Support for zIIP
v Support for > 16 CPs (up to 32)
v System-Managed CF Structure Duplexing support
v Support for multiple LCSS's
v XES Coupling Facility cache structure architecture extensions for Batch Write, Castout, and Cross-Invalidate functions
v z/Architecture 64-bit addressing
v 128 GB of central storage
v 256 channel paths
v Coupling facility level 14
v Support for Message Time Ordering providing enhanced scalability of parallel sysplex
v Support for z ELC software pricing structure
v Support for zSeries PCI Cryptographic Accelerator feature by the Integrated Cryptographic Services Facility and System SSL functions
v Support for zSeries PCIX Cryptographic Coprocessor feature by the Integrated Cryptographic Services Facility and System SSL functions
v Support for Crypto Express2 and Crypto Express2-1P (only available on the z9 BC) with ICSF 64-bit Virtual Support for z/OS V1.6 and z/OS.e V1.6, providing access to the secure key and System SSL functions via the Integrated Cryptographic Services Facility
v Intelligent Resource Director (IRD)
v WorkLoad Pricing
v Peer mode channels (ICP, CFP, CBP)
v WorkLoad Manager (WLM) Multisystem Enclaves support
v XES Coupling Facility List Structure Architecture Extensions [also known as Message and Queuing (MQ) Series]
v Logical partition time offset
v Internal coupling facility channels
v System Managed Rebuild
z/VM
z/VM Version 5 Release 3 Support
v System-managed Coupling Facility structure duplexing, for z/OS guests
v Dedicated OSA port to an operating system
v CHPID type OSA performance enhancements, for z/OS guests
v CHPID type FCP performance enhancements
v FICON Express4-2C SX on the z9 BC
v Hardware Decimal Floating Point facilities
v z/VM integrated systems management
v Support for z9 BC and z9 EC
v Support for FICON Express2 (CHPID type FC), including Channel-To-Channel (CTC)
v Support for FICON Express2 (CHPID type FCP) for native support of SCSI disks
v 32 CPs
v z/Architecture 64-bit addressing
v 256 GB of central storage
v 128 GB of expanded storage
v 256 channel paths
v Coupling facility level 15
v Crypto Express2 and Crypto Express2-1P on the BC models (guest use)
v Peer mode channels (ICP, CFP, CBP)
v Guest Coupling simulation
v Dynamic I/O configuration support through the CP configurability function
v Subspace group facility (guest use)
v Able to use IFL engines (for Linux workloads)
v Performance assist via pass-through of adapter I/O operations and interruptions for FCP, IQD, and OSD CHPID types
v zIIP and zAAP Simulation on CPs
v Support for zIIP
v Support for zAAP
z/VM Version 5 Release 2 Support
v CHPID type OSA performance enhancements, for z/OS guests
v CHPID type FCP performance enhancements
v FICON Express4-2C SX on the z9 BC
v Hardware Decimal Floating Point facilities
v Support for z9 BC and z9 EC
v System-managed Coupling Facility structure duplexing, for z/OS guests
v Support for FICON Express2 (CHPID type FC), including Channel-To-Channel (CTC)
v Support for FICON Express2 (CHPID type FCP) for native support of SCSI disks
v 24 CPs
v z/Architecture 64-bit addressing
v 128 GB of central storage
v 128 GB of expanded storage
v 256 channel paths
v Coupling facility level 15
v Crypto Express2 and Crypto Express2-1P on the BC models (guest use)
v Peer mode channels (ICP, CFP, CBP)
v Guest Coupling simulation
v Dynamic I/O configuration support through the CP configurability function
v Subspace group facility (supported for guest use)
v Able to use IFL engines (for Linux workloads)
v Performance assist via pass-through of adapter I/O operations and interruptions for FCP, IQD, and OSD CHPID types
z/VM Version 5 Release 1 Support
v System-managed Coupling Facility structure duplexing, for z/OS guests
v CHPID type OSA performance enhancements, for z/OS guests
v CHPID type FCP performance enhancements
v FICON Express4-2C SX on the z9 BC
v Support for z9 BC and z9 EC (with applicable PTFs)
v Support for FICON Express2 (CHPID type FC), including Channel-To-Channel (CTC)
v Support for FICON Express2 (CHPID type FCP) for native support of SCSI disks
v ESA/390 mode
v z/Architecture 64-bit addressing
v 24 CPs
v 128 GB of central storage
v 128 GB of expanded storage
v 256 channel paths
v Crypto Express2 and Crypto Express2-1P on the BC models (guest use)
v Dynamic I/O configuration support through the CP configurability function
v Subspace group facility (supported for guest use only)
v Guest coupling simulation
v Able to use IFL engines (for Linux workloads)
v Performance assist via passthrough of adapter interruptions for FCP, IQD, and OSD CHPID types
z/VSE
z/VSE Version 4 Release 1 Support
v z/Architecture mode and 64-bit real addressing
v 8 GB of processor storage
v MWLC pricing option
z/VSE Version 3 Release 1 Support
v ESA/390 mode
v 2 GB of processor storage
v Up to 10 CPs
v 256 channel paths
v Support for z9 EC and z9 BC
v OSA Express2 family
v HiperSockets
v Support for FICON Express2 / FICON Express4 (CHPID type FC), including Channel-To-Channel (CTC)
v Support for FICON Express2 / FICON Express4 (CHPID type FCP) for native support of SCSI disks
v CHPID type FCP performance enhancements
v CP Assist for Cryptographic Functions (CPACF)
v Configurable Crypto Express2 (z/VSE offers support for clear-key SSL transactions only)
v Crypto Express2-1P on the z9 BC
TPF (Transaction Processing Facility)
z/TPF Version 1 Release 1 Support
v Support for FICON Express2 (CHPID type FC), including Channel-To-Channel (CTC)
v ESA/390 or ESA/390 TPF mode
v Up to 38 CPs (either shared or dedicated LP)
v 512 GB of central storage
v Coupling facility level 9 with APAR support
v 256 channels
TPF Version 4 Release 1 Support
v Support for FICON Express2 (CHPID type FC), including Channel-To-Channel (CTC)
v ESA/390 or ESA/390 TPF mode
v 16 CPs (either shared or dedicated LP)
v 2048 MB of central storage
v Coupling facility level 9 with APAR support
v Shared multiple processor LPs with PUT 03
v 256 channels
Linux
Linux for S/390
v Support for FICON Express2 (CHPID type FC), including Channel-To-Channel (CTC)
v Support for FICON Express2 (CHPID type FCP) for support of SCSI disks
v OSA Express2 large send (also called TCP segmentation offload)
v Linux-Only mode
v 16 CPs
v 2048 MB of central storage
v 262144 MB of expanded storage
v 256 channels
v PKCS #11 API support for zSeries PCI Cryptographic Accelerator Feature
v WLM Management of shared logical processors
v Performance assist via passthrough of adapter interruptions for FCP, IQD, and OSD CHPID types
v PCIXCC support
v Support for zSeries Crypto Express2 feature only for SSL functions (available on release 7 only). Support for clear key RSA operations only.
Hardware Configuration Definition (HCD)
You can use HCD’s interactive panels to define configuration information both to the CPC and to the operating system.
Note: HCD and Hardware Configuration Manager (HCM) are also available on z/VM.
HCD, running on z/OS, allows you to dynamically change the current I/O configuration of the CPC. HCD also allows you to dynamically change the current I/O configuration of the operating system, to create a new IOCDS, and to make it the active IOCDS.
HCD is required to define the I/O configuration to the operating system. HCD is also the recommended way to define hardware configurations.
The HCD component of z/OS allows you to define the hardware and software I/O configuration information necessary for a parallel sysplex solution environment, including the capability to define:
v peer-mode channel paths (CFP, CBP, CIB, and ICP) to connect z/OS systems to coupling facility images, and
v peer-mode channel paths (CFP, CBP, CIB, and ICP) to connect coupling facility images to one another, in support of System-Managed CF Structure Duplexing (see the example following this list).
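For illustration only, the following is a minimal, hedged sketch of how a connected pair of internal coupling channel paths (TYPE=ICP) between a z/OS image and a coupling facility image might look when expressed as IOCP statements; HCD generates equivalent definitions through its panels. The CHPID numbers and the exact CPATH coding shown here are assumptions for the example, not definitions taken from this guide:

   *  Internal coupling channels are defined as a connected pair
   CHPID PATH=(CSS(0),F0),SHARED,TYPE=ICP,CPATH=(CSS(0),F1)
   CHPID PATH=(CSS(0),F1),SHARED,TYPE=ICP,CPATH=(CSS(0),F0)

One end of the pair is used by the z/OS LP and the other by the coupling facility LP; no physical coupling facility channel hardware is involved.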
Additionally, HCD allows you to remotely write IOCDSs from the support element of one CPC to the support element of another CPC, as long as both support elements are powered on, LAN-attached, enabled for remote licensed internal code (LIC) update, and defined to the same Hardware Management Console.
Dynamic I/O configuration does not support the following:
v Adding/deleting MIF image IDs from the initial ones in the IOCDS. (However, you can change which LPs go to which MIF image IDs.)
When using HCD in z/OS, you can define and control the configuration of the CPC affecting all LPs. Those LPs that run with HCD or z/VM can dynamically change their software configuration definitions. Other LPs may require an IPL in order to use the new configuration.
When you use HCD, you must install the appropriate version of IOCP in the LP. Throughout the remainder of this publication, all the capabilities or restrictions documented for the IOCP program also apply to definitions entered and controlled through HCD.
For more information about dynamic I/O configuration on z/OS see:
v z/OS Hardware Configuration Definition Planning, GA22-7525
v z/OS Hardware Configuration Definition: User's Guide, SC33-7988
z/VM Dynamic I/O Configuration
You can dynamically change the current I/O configuration of the CPC. You can also change the current I/O configuration of the operating system and create a new IOCDS and make it the active IOCDS.
Dynamic I/O configuration does not support the following:
v Adding/deleting MIF image IDs from the initial ones in the IOCDS. (However, you can change which LPs go to which MIF image IDs.)
You can define and control the configuration of the CPC affecting all LPs. Those LPs that run z/VM can dynamically change their software configuration definitions.

Input/Output Configuration Program (IOCP) Support

To perform a power-on reset you must use an LPAR IOCDS. To generate an LPAR IOCDS you need to use the ICP IOCP program.
PTFs for supported IOCP versions must be applied and can be obtained from the IBM Software Support Center. For more information on ICP IOCP, see Input/Output Configuration Program User’s Guide, SB10-7037.

Hardware Support

LPs operate independently but can share access to I/O devices and CPC resources. Each active LP must have sufficient channel paths and storage to meet the particular requirements of that LP. Additional central storage, expanded storage, channel paths, consoles, and other I/O devices might be necessary for the planned configuration of LPs.

Operator Training

A general knowledge of z/Architecture and ESA/390 is useful and, in some cases, required of all technical support personnel, LPAR planners, and operators.
Generally, the operator performs the following tasks:
v Edit activation profiles
  Reset profiles
  - Select an IOCDS
  - Optionally specify LP activation sequence
  Image profiles
  - Define LP characteristics
  - Optional automatic load specification
  Load profiles
v Performing a CPC activation
v Activating an LP
v Performing a load on an LP or activating a load profile
v Deactivating a CPC
v Deactivating an LP

Logical Partitions

This section provides an overview of LP characteristics. Some of the characteristics described in this section are model-dependent, EC-level dependent, MCL-dependent, LP mode dependent, or control-program dependent. For this reason, all of the characteristics described here are not necessarily available on all CPCs.
The resources of a CPC can be distributed among multiple control programs that can run on the same CPC simultaneously. Each control program has the use of resources defined to the logical partition in which it runs.
You can define an LP to include:
v One or more CPs
v Central storage
v Channel paths
v Optional expanded storage
v Two or more optional Crypto Express2 Coprocessors or Crypto Express2 Accelerators

Characteristics

An LP can be defined to include CPs, zAAPs, and zIIPs (in combination, so long as one CP is defined) or IFLs.
You can also define an LP to be a coupling facility running the coupling facility control code.
LPs can have the following characteristics. For more information or details about exceptions to any of the characteristics described below, see "Determining the Characteristics" on page 3-5.
v The maximum number of LPs you can define on a z9 EC is 60. The z9 BC S07 supports 30 LPs. The BC R07 only supports the activation of 15 LPs, but you can power-on reset the R07 with an IOCDS containing up to 30 LPs. You cannot define an IOCDS with more than 30 LPs for a z9 BC.
v LPs can operate in ESA/390, ESA/390 TPF, Linux-Only, or coupling facility mode.
v The storage for each LP is isolated. Central storage and expanded storage cannot be shared by LPs.
v Using dynamic storage reconfiguration, an LP can release storage or attach storage to its configuration that is released by another LP.
v All channel paths can be defined as reconfigurable. Channel paths are assigned to LPs. You can move reconfigurable channel paths between LPs using tasks available from either the Hardware Management Console or the support element console. If the control program running in the LP supports physical channel path reconfiguration, channel paths can be moved among LPs by control program commands without disruption to the activity of the control program.
v MIF allows channel paths to be shared by two or more LPs at the same time. Only CNC, CTC, CBP, CIB, CFP, ICP, OSC, OSD, OSE, OSN, FC, FCV, FCP, and IQD channel paths can be shared.
v LPs can be defined to have as many as 54 logical CPs. CPs can be dedicated to LPs or shared by them. CPs that you define as dedicated to an LP are not available to perform work for other active LPs. The resources of shared CPs are allocated to active LPs as needed. You can cap (limit) CP resources, if required.
You cannot define a mix of shared and dedicated CPs for a single LP (except for a coupling facility LP using the Internal Coupling Facility feature; see "Internal Coupling Facility (ICF)" on page 2-46). CPs for an LP are either all dedicated or all shared. However, you can define a mix of LPs with shared CPs and LPs with dedicated CPs and activate them concurrently.
For security purposes, you can:
- Reserve reconfigurable channel paths for the exclusive use of an LP (unless overridden by the operator)
- Limit the authority of an LP to read or write any IOCDS in the configuration and limit the authority of an LP to change the I/O configuration dynamically
- Limit the authority of an LP to retrieve global performance data for all LPs in the configuration
- Limit the authority of an LP to issue certain control program instructions that affect other LPs
Figure 1-1 shows some of the characteristics that can be defined for an LP. You can view each LP as a CPC operating within the physical CPC.
Figure 1-1. Characteristics of Logical Partitions

Potential Applications

The use of LPs allows multiple systems, including the I/O for the systems, to be migrated to a single CPC while maintaining the I/O performance, recovery, and multipathing capability of each system, and with minimum impact to the system generation procedures.
LPs are suitable for consideration in the following environments:
Consolidation
  Multiple production system images can be consolidated onto 1 CPC without having to merge them into one image.
Migration
  Control programs or applications can be migrated by running the old and new systems or applications in independent LPs that are active on the same CPC at the same time.
Production and Test
  Multiple production and test systems can run on the same CPC at the same time.
Coupling Facility
  A coupling facility enables high performance, high integrity data sharing for those CPCs attached to it and configured in a sysplex.
Coupled Systems
  Multiple instances of the same workload can be run in multiple LPs on one or more CPCs as part of a sysplex configuration that takes advantage of the centrally accessible, high performance data sharing function provided by the coupling facility.
Extended Recovery Facility (XRF)
  Primary and alternate XRF systems can run on 1 CPC. Multiple and alternate XRF systems can run on 1 CPC.
Communications Management Configuration (CMC)
  The communications management configuration (CMC) machine, usually run on a separate CPC, can be run as an LP on the same CPC.
Departmental Systems
  Multiple applications can be isolated from one another by running each in a separate LP.
Constrained Systems
  Those systems that cannot fully use a large system because of storage constraints can alleviate the problem by using LPs to define multiple system images on the same CPC.
Diverse Workloads
  Interactive workloads such as the Customer Information Control System (CICS®) and time-sharing option (TSO) can be isolated by running each in a separate LP.
Examples of Logical Partition Applications
Figure 1-2 and Figure 1-3 show examples of how LPs on a CPC can be used to support multiple systems.
Figure 1-2 represents the migration of two z/OS systems, one z/VM system, and one stand-alone coupling facility onto a single CPC. These systems, which were running on four separate IBM 9672 CPCs, now operate as four LPs on a single z9 EC. Two LPs are each running a z/OS system in ESA/390 mode, one is running a z/VM system in ESA/390 mode, and one is running in coupling facility mode.
Figure 1-2. Migration of Four Production Systems to LPs (the ZOSPROD, ZOSTEST, and z/VM systems and a 9672-R06 stand-alone coupling facility migrate to LP1 through LP4 on a System z9)
Figure 1-3 represents a CPC with three LPs: an active XRF system, an alternate XRF system, and an alternate for another primary XRF system.
Figure 1-3. Support for Three XRF Systems

Compatibility and Migration Considerations

This section provides migration and compatibility information for the System z9.

Device Numbers

When multiple systems are migrated to a System z9 CPC, the combination of systems could include different devices or shared devices with identical device numbers. Each system can operate in an LP without changing the device numbers as long as identical device numbers do not occur in the same LP. However, duplicate device numbers can exist in the same LP if these device numbers are in different subchannel sets.
Duplicate device number conflicts can occur when the I/O configuration is reconfigured. For example, if a reconfigurable channel path is reassigned to another LP and devices attached to the channel path have device numbers that are already assigned in the receiving LP to other online channel paths, a conflict results. When IOCP generates an LPAR IOCDS, the initial configuration contains no duplicate device number conflicts in an LP.
Device number conflicts are also detected when operator tasks change the I/O configuration (channel path tasks from the Hardware Management Console or support element console; or control program configuration command) or during LP activation.
Duplicate device number conflicts are also detected when a dynamic I/O configuration change is made.

Multiple Subchannel Sets (MSS)

A new Multiple Subchannel Sets (MSS) structure for z9 EC allows increased device connectivity for Parallel Access Volumes (PAVs). Two subchannel sets per Logical Channel Subsystem (LCSS) are designed to enable a total of 63.75K subchannels in set-0 and the addition of 64K-1 subchannels in set-1. MSS, exclusive to the System z9, is supported by ESCON (CHPID type CNC), FICON when configured as CHPID type FC, and z/OS.

Control Programs

PTFs for supported control programs must be applied and can be obtained from the IBM Software Support Center. A supported control program operates in an LP as it does in one of the basic modes, with the following exceptions:
z/OS
v Physical reconfiguration, either offline or online, of CPs is not supported on the System z9. Logical CP reconfiguration, either offline or online, is supported in an LP. This does not affect the online/offline status of the physical CPs. To reconfigure a logical CP offline or online, use the following z/OS operator command:
CF CPU(x),<OFFLINE/ONLINE>
v Physical reconfiguration, either offline or online, of central and expanded storage is supported. To reconfigure a central storage element offline or online, use the following z/OS operator command:
CF STOR(E=1),<OFFLINE/ONLINE>
Additionally you can use the following command to reconfigure smaller amounts of central storage online or offline:
CF STOR(nnM),<OFFLINE/ONLINE>
To reconfigure an expanded storage element offline or online, use the following z/OS operator command:
CF ESTOR(E=x),<OFFLINE/ONLINE>
Reconfigurable Storage Unit (RSU) Considerations: The RSU parameter should be set to the same value that you specified in the central storage Reserved field. See z/OS MVS Initialization and Tuning Reference for appropriate RSU parameter syntax.
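As an illustration only, the RSU value is specified in the IEASYSxx parmlib member. Assuming an LP whose central storage Reserved field corresponds to eight storage increments, the entry might look like the following (the value shown is hypothetical):

   RSU=8

The RSU parameter also accepts other forms documented in z/OS MVS Initialization and Tuning Reference, including forms that specify the amount of reconfigurable storage directly.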
v Reconfiguration, either offline or online, of channel paths by z/OS operator commands is supported on a System z9. This capability also allows channel paths to be moved among LPs using z/OS operator commands.
v Preferred paths to a device are supported on a System z9. If the preferred path parameter is specified in an LPAR IOCDS, it is accepted.
v Specifying SHAREDUP for devices is not recommended. If used, z/OS treats the device as a SHARED device.
v Each z/OS LP can run the Resource Measurement Facility (RMF). RMF enhancements for PR/SM allow a single LP to record system activities and report them in the Partition Data Report. To enable this, use the Change LPAR Security task available from the CPC Operational Customization task list and select Performance Data Control for the LP.
For z/OS, RMF reporting includes LPAR Management Time.
v RMF provides enhanced reporting for coupling facility configurations.
v RMF with APAR support identifies on its partition data report which logical and physical CPs are of each type when any combination of general purpose, IFA, IFL, zIIP, and ICF processors are present in the configuration.
EREP
Each control program operating in an LP has its own environmental recording, editing, and printing (EREP) processing. EREP records for ICP channel paths go to the z/OS logs of the z/OS systems attached to a coupling facility LP.

CPU IDs and CPU Addresses

Application packages and software products that are licensed to run under specific CPU identification (CPU ID) information should be checked because they may need to be updated.
CPU ID information is system-generated for each logical CP in the LP during the LP activation. It consists of a version code for the CPC machine type, a CPU identification number that is unique for each logical partition, a model number for the CPC machine type, and a value of X'8000'.
The Store CPU ID (STIDP) instruction stores the CPU ID for each logical CP in storage in the following format (Figure 1-4):
  Bits 0-7     Version Code
  Bits 8-31    CPU Identification Number
  Bits 32-47   Machine Type
  Bits 48-63   8000

Figure 1-4. CPU ID Format
Figure 1-5 shows the format of the CPU identification number (bits 8 through 31 of the CPU ID format).
  PPnnnn

  Legend:
  P   Logical partition identifier
  n   Digit derived from the serial number

Figure 1-5. CPU Identification Number Format
CPU ID Fields
The CPU identification number, with the version code and the machine type, permits a unique CPU ID for each logical partition.
v The version code for the System z9 is always zero and is not affected by the operating mode.
v The CPU identification number for each logical CP (see Figure 1-5) consists of a two-digit LP identifier and digits derived from the serial number of the CPC. The logical partition identifier is specified using the Partition identifier field on the General page in either the reset or image profile used by the LP and must be unique for each active LP in the configuration.
v The following machine types (CPC model numbers) are returned as indicated below:
  Machine Type   Models
  2094           System z9 EC (S08, S18, S28, S38, S54)
  2096           System z9 BC (R07, S07)
Note: STIDP is provided for purposes of backward compatibility. IBM highly recommends that the Store System Information instruction (STSI) be used rather than STIDP. STSI is the preferred means to obtain all CPU information including machine serial number. When a unique logical CPU address is all that is required, use the Store CPU Address (STAP) instruction.
Examples of CPU ID Information
The following examples show the format and contents of the CPU ID information stored by the STIDP instruction for logical CPs in active LPs.
Table 1-1 shows the CPU ID information for a z9 EC with 3 active LPs.
Table 1-1. CPU IDs for a z9 EC

  LP Name   LP Identifier   Number of CPs Defined   CPU ID Returned by STIDP
  ZVSE      1               1                       00 019999 2094 8000
  ZOSTEST   2               1                       00 029999 2094 8000
  ZOSPROD   3               8                       00 039999 2094 8000

Each of the eight logical CPs in ZOSPROD returns the same CPU ID value.

HSA Allocation

This section addresses hardware system area (HSA) allocation.

HSA Estimation Tool

The HSA Estimation Tool, available on Resource Link at http://www.ibm.com/servers/resourcelink, estimates the HSA size for the specified configuration and product.
The HSA allocation can vary according to the I/O configuration. The following system I/O configurations require the given HSA size on a z9 EC.
v Small system I/O configuration (S08): 1344 MB
  - 1 CSS
  - 2 LPs
  - 96 physical control units
  - 4096 devices
  - 64 channels defined in the IOCDS
  - Dynamic I/O configuration not enabled
  - Concurrent code change not authorized
v Large system I/O configuration (S38): 4288 MB
  - 4 CSSs
  - 60 LPs
  - 65280 devices per LCSS for subchannel set 0
  - 65535 devices per LCSS for subchannel set 1
  - Dynamic I/O configuration enabled
  - Concurrent code change authorized
Intermediate HSA sizes exist for configuration sizes in between the given small and large I/O configuration examples.
The HSA range for the z9 BC is approximately 1-2 GB.

TOD Clock Processing

The CPC TOD clocks of all the CPs are automatically set during CPC activation. The time reference used depends on whether a Sysplex Timer® is attached to the CPC or Server Time Protocol (STP) is enabled. When STP is enabled, a CPC can participate in an ETR network, a Mixed Coordinated Timing Network (CTN), or an STP-only CTN. In an ETR network or Mixed CTN, the Sysplex Timer provides the timekeeping information. In an STP-only CTN, the Sysplex Timer is no longer required. In this case the Current Time Server for the STP-only CTN provides the time information.

No Sysplex Timer Attached and Server Time Protocol Not Enabled

During PR/SM initialization, the CPC TOD clocks for each CP are set to the TOD value of the support element. Each LP starts out with this CPC TOD value at the completion of LP activation. The operating system running in an LP can set a TOD value for itself and this will be the only TOD reference it will see. Setting the TOD clock for one logical CP in the LP sets the TOD clock for all logical CPs in that LP, but does not affect the logical CPs in any other LP. The TOD clock value is used for the duration of the LP activation, or until a subsequent Set Clock instruction is issued in the LP.

Sysplex Timer Attached

The attachment and use of an IBM Sysplex Timer is supported. Also, during PR/SM initialization, when a Sysplex Timer is attached, the CPC TOD clocks for each CP are set to the TOD value from the Sysplex Timer.
The operating system in each LP can independently choose whether or not to synchronize to the Sysplex timer if one is present. Operating systems in LPs that do synchronize to the Sysplex Timer will all be running with identical TOD values. Operating systems in LPs that do not synchronize to the Sysplex Timer do not need to be aware of the presence of a Sysplex Timer and can set their TOD values independently of all other LPs.
Note that z/OS does not allow you to change the value of the TOD setting when it is using the Sysplex Timer (ETRMODE=YES in the CLOCKxx parmlib member).
The System z9 models support the specification of a logical partition time offset. When all members of a sysplex are in logical partitions on these supported models, the Logical partition time offset can be used for:
v Different local time zone support in multiple sysplexes using the same sysplex timer. Many sysplexes have the requirement to run with a LOCAL=GMT setting in a sysplex (ETRMODE=YES) where the time returned from a Store Clock instruction yields local time. To do this, the time returned by the sysplex timer must be local time. Prior to Logical partition time offset support, this could only be accomplished for one sysplex for a given sysplex timer. With Logical partition time offset support, multiple sysplexes can each have their own local time reported to them from the sysplex timer if desired. For instance, the sysplex timer can be set to GMT, one set of sysplex partitions could specify a Logical partition time offset of minus 5 hours, and a second set of sysplex partitions could specify a Logical partition time offset of minus 6 hours.

Server Time Protocol Enabled

The enablement of STP is supported. Also, during PR/SM initialization, when STP is enabled, the CPC TOD clocks for each CP are set to the TOD value from STP.
The operating system in each LP can independently choose whether or not to synchronize to the current time source for STP, if present. Operating systems in LPs that do synchronize to STP will all be running with identical TOD values. Operating systems in LPs that do not synchronize to STP do not need to be aware of the presence of STP and can set their TOD values independently of all other LPs.
Note that z/OS does not allow you to change the value of the TOD setting when synchronized to STP (STPMODE=YES in the CLOCKxx parmlib member).
The System z9 models support the specification of a logical partition time offset. When all members of a sysplex are in logical partitions on these supported models, the Logical partition time offset can be used for:
v Different local time zone support in multiple sysplexes using the same CTN. Many sysplexes have the requirement to run with a LOCAL=GMT setting in a sysplex (STPMODE=YES) where the time returned from a Store Clock instruction yields local time. To do this, the time returned by STP must be local time. With Logical partition time offset support, multiple sysplexes can each have their own local time reported to them from STP if desired. For instance, STP can be set to GMT, one set of sysplex partitions could specify a Logical partition time offset of minus 5 hours, and a second set of sysplex partitions could specify a Logical partition time offset of minus 6 hours.

Sysplex Testing Without a Sysplex Timer and Server Time Protocol Not Enabled

You can do sysplex testing without a Sysplex Timer or Server Time Protocol enabled by setting up a test sysplex of several z/OS systems running in multiple LPs in the same LPAR configuration. Use the SIMETRID keyword in the CLOCKxx parmlib member for z/OS to synchronize the members of the sysplex in the LPs.
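For illustration, a minimal sketch of a CLOCKxx parmlib member for one member of such a test sysplex; the SIMETRID value and time zone shown are assumptions chosen for the example, and every z/OS image in the test sysplex would specify the same SIMETRID value:

   OPERATOR NOPROMPT
   TIMEZONE W.05.00.00
   ETRMODE  NO
   SIMETRID 00

Because SIMETRID provides only a simulated synchronization source, this setup is suitable for the single-CPC test configurations described here, not for production sysplexes.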

Synchronized Time Source and the Coupling Facility

Improved processor and coupling facility link technologies inherent on System z9 models necessitate more rigorous time synchronization tolerance for members of a parallel sysplex hosted by those models. To help ensure that any exchanges of time-stamped information between members of a sysplex observe the correct time ordering, time stamps are now included in the message-transfer protocol between the systems and the coupling facility.
Consequently, a coupling facility hosted by any System z9 model requires connectivity to the same synchronized time source as the other z/OS systems in its parallel sysplex. If a member of its parallel sysplex is on the same server as the coupling facility, the required connectivity to the synchronized time source is already provided. However, when a coupling facility resides on a System z9 model that does not include a member of the coupling facility's parallel sysplex, connectivity to the synchronized time source must be implemented.

Extended TOD-Clock Facility

The extended TOD-clock facility provides an extended form TOD clock and a TOD programmable register. The extended form TOD clock is a 128-bit value that extends the current basic form by appending 8 bits on the left and 56 bits on the right. The extended form TOD clock is returned by a new problem-program instruction, STORE CLOCK EXTENDED (STCKE). The contents of the TOD programmable register are stored into the rightmost portion of the extended form TOD value when the TOD clock is inspected by STCKE. A TOD programmable register exists for each CPU and contains the TOD programmable field in bits 16-31. The TOD programmable register is set by a new privileged instruction, SET TOD PROGRAMMABLE FIELD (SCKPF). The leftmost byte of the extended form TOD clock is the TOD Epoch Index (TEX), and is stored as zeros in machines running ESA/390.
The extended TOD clock facility satisfies three main objectives:
v Relieve constraints that exist in the current 64-bit TOD clock
v Extend the TOD-clock architecture to multi-system configurations
v Help ensure sysplex-wide uniqueness of the STCKE TOD values
The TOD Programmable Field (TODPF) is a 16-bit quantity contained in bit positions 16-31 of the TOD programmable register. The contents of the register can be set by the privileged instruction SET TOD PROGRAMMABLE FIELD. The contents of the register can be stored by the instruction STORE CLOCK EXTENDED, which stores the TOD programmable field in the last 16 bits of the extended form TOD clock. The contents of the register are reset to a value of all zeros by an initial CPU reset.

Chapter 2. Planning Considerations

Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Planning the I/O Configuration . . . . . . . . . . . . . . . . . . . 2-3
Planning Considerations . . . . . . . . . . . . . . . . . . . . 2-3
Control Program Support . . . . . . . . . . . . . . . . . . . 2-3
Hardware Configuration Definition (HCD) Support . . . . . . . . . . 2-3
z/VM Dynamic I/O Configuration Support . . . . . . . . . . . . . 2-4
Input/Output Configuration Program (IOCP) Support . . . . . . . . . 2-4
Characteristics of an IOCDS . . . . . . . . . . . . . . . . . . 2-4
Maximum Number of Logical Partitions . . . . . . . . . . . . . . . 2-5
Determining the Size of the I/O Configuration . . . . . . . . . . . 2-5
Maximum Size of the I/O Configuration . . . . . . . . . . . . . . 2-5
Guidelines for Setting Up the I/O Configuration . . . . . . . . . . . 2-5
Recovery Considerations . . . . . . . . . . . . . . . . . . . 2-6
Managing Logical Paths for ESCON and FICON Channels . . . . . . . 2-7
Definition . . . . . . . . . . . . . . . . . . . . . . . . . 2-7
Control Unit Allocation of Logical Paths . . . . . . . . . . . . . . 2-7
Why Manage Logical Paths? . . . . . . . . . . . . . . . . . 2-7
Managing the Establishment of Logical Paths . . . . . . . . . . . . 2-11
Logical Path Considerations . . . . . . . . . . . . . . . . . 2-11
Recommendations . . . . . . . . . . . . . . . . . . . . . 2-12
Shared Channel Overview . . . . . . . . . . . . . . . . . . . 2-21
MIF Requirements . . . . . . . . . . . . . . . . . . . . . 2-21
Understanding ESCON and MIF Topologies . . . . . . . . . . . 2-21
MIF Performance Planning Considerations . . . . . . . . . . . . 2-22
Unshared ESCON or FICON Channel Recommendations . . . . . . . 2-27
Dynamically Managed CHPIDs . . . . . . . . . . . . . . . . . 2-27
IOCP Coding Specifications . . . . . . . . . . . . . . . . . . 2-28
IOCP Statements for ICP . . . . . . . . . . . . . . . . . . 2-28
Shared Devices Using Shared Channels . . . . . . . . . . . . . 2-31
Shared Devices Using Unshared Channels . . . . . . . . . . . . 2-31
Duplicate Device Numbers for Different Physical Devices . . . . . . 2-33
Coupling Facility Planning Considerations . . . . . . . . . . . . . . 2-39
Test or Migration Coupling Configuration . . . . . . . . . . . . . . 2-39
CFCC Enhanced Patch Apply . . . . . . . . . . . . . . . . . 2-39
Production Coupling Facility Configuration . . . . . . . . . . . . . 2-40
Production Coupling Facility Configuration for Full Data Sharing . . . . 2-40
Production Coupling Facility Configuration for Resource Sharing . . . . 2-40
Internal Coupling Facility (ICF) . . . . . . . . . . . . . . . . . 2-41
Dynamic Coupling Facility Dispatching (DCFD) . . . . . . . . . . 2-42
Dynamic Internal Coupling Facility (ICF) Expansion . . . . . . . . . . 2-42
Enhanced Dynamic ICF Expansion Across ICFs . . . . . . . . . . . 2-43
System-Managed Coupling Facility Structure Duplexing . . . . . . . . 2-44
Single CPC Software Availability Sysplex . . . . . . . . . . . . . 2-44
Coupling Facility Nonvolatility . . . . . . . . . . . . . . . . . . 2-45
Nonvolatility Choices . . . . . . . . . . . . . . . . . . . . 2-45
Setting the Conditions for Monitoring Coupling Facility Nonvolatility
Status . . . . . . . . . . . . . . . . . . . . . . . . 2-45
Coupling Facility Mode Setting . . . . . . . . . . . . . . . . . 2-45
Coupling Facility LP Definition Considerations . . . . . . . . . . . . 2-46
Internal Coupling Facility (ICF) . . . . . . . . . . . . . . . . 2-46
Coupling Facility LP Storage Planning Considerations . . . . . . . . . 2-47
Structures, Dump Space, and Coupling Facility LP Storage . . . . . . 2-47
Estimating Coupling Facility Structure Sizes . . . . . . . . . . . 2-48
Dump Space Allocation in a Coupling Facility . . . . . . . . . . . . 2-48
Coupling Facility LP Activation Considerations . . . . . . . . . . . . 2-49
Coupling Facility Shutdown Considerations . . . . . . . . . . . . . 2-49
Coupling Facility LP Operation Considerations . . . . . . . . . . . 2-49
Coupling Facility Control Code Commands . . . . . . . . . . . . . 2-50
Coupling Facility Level (CFLEVEL) Considerations . . . . . . . . . . 2-50
CPC Support for Coupling Facility Code Levels . . . . . . . . . . 2-50
Level 15 Coupling Facility . . . . . . . . . . . . . . . . . . 2-52
Level 14 Coupling Facility . . . . . . . . . . . . . . . . . . 2-52
Level 13 Coupling Facility . . . . . . . . . . . . . . . . . . 2-52
Level 12 Coupling Facility . . . . . . . . . . . . . . . . . . 2-52
Level 11 Coupling Facility . . . . . . . . . . . . . . . . . . 2-53
Level 10 Coupling Facility . . . . . . . . . . . . . . . . . . 2-53
Level 9 Coupling Facility . . . . . . . . . . . . . . . . . . 2-53
Level 8 Coupling Facility . . . . . . . . . . . . . . . . . . 2-53
Coupling Facility Resource Management (CFRM) Policy Considerations 2-54
Coupling Facility Channels . . . . . . . . . . . . . . . . . . . 2-54
Internal Coupling Channel . . . . . . . . . . . . . . . . . . 2-55
Integrated Cluster Bus Channels . . . . . . . . . . . . . . . 2-56
High-Performance Coupling Facility Channels (HiPerLinks) . . . . . . 2-56
InfiniBand host channel adapter (HCA) . . . . . . . . . . . . . 2-56
InfiniBand coupling links for Parallel Sysplex . . . . . . . . . . . 2-56
Coupling Facility Channels (TYPE=CFP, TYPE=CBP, TYPE=CIB, or
TYPE=ICP) . . . . . . . . . . . . . . . . . . . . . . 2-57
Defining Internal Coupling Channels (TYPE=ICP) . . . . . . . . . 2-58
I/O Configuration Considerations . . . . . . . . . . . . . . . 2-58
Considerations when Migrating from ICMF to ICs . . . . . . . . . . . 2-59
Linux Operating System Planning Considerations . . . . . . . . . . . 2-59
Integrated Facility for Linux (IFL) . . . . . . . . . . . . . . . . 2-59
z/VM Version 5 Utilizing IFL Features . . . . . . . . . . . . . . . 2-60
System z9 Application Assist Processor (zAAP) . . . . . . . . . . . . 2-60
IBM System z9 Integrated Information Processor (zIIP) . . . . . . . . . 2-61
Concurrent Patch . . . . . . . . . . . . . . . . . . . . . . . 2-61
CFCC Enhanced Patch Apply . . . . . . . . . . . . . . . . . . . 2-62
Dynamic Capacity Upgrade on Demand . . . . . . . . . . . . . . . 2-62
PR/SM Shared Partitions . . . . . . . . . . . . . . . . . . . 2-63
Mixed Shared and Dedicated PR/SM Partitions . . . . . . . . . . . 2-63
Multiple Dedicated PR/SM Partitions . . . . . . . . . . . . . . . 2-64
Shared Internal Coupling Facility . . . . . . . . . . . . . . . . 2-64
Dynamic Capacity Upgrade on Demand Limitations . . . . . . . . . . . 2-65
Concurrent Memory Upgrade . . . . . . . . . . . . . . . . . . . 2-66
Capacity Backup Upgrade (CBU) Capability . . . . . . . . . . . . . 2-66
Enhanced Book Availability . . . . . . . . . . . . . . . . . . . . 2-67
Preparing for Enhanced Book Availability . . . . . . . . . . . . . 2-67
Getting the System Ready to Perform Enhanced Book Availability 2-67
Reassigning Non-Dedicated Processors . . . . . . . . . . . . . 2-69
Customer Initiated Upgrade (CIU) . . . . . . . . . . . . . . . . . 2-70
Concurrent Processor Unit Conversion . . . . . . . . . . . . . . . 2-70
Planning for Hot Plugging Crypto Features . . . . . . . . . . . . . . 2-71

Overview

This chapter describes planning considerations for I/O configuration and for coupling facility LPs.

Planning the I/O Configuration

This section describes the planning considerations and guidelines for creating an IOCDS. It assumes you understand the IOCP configuration and coding requirements described in the Input/Output Configuration Program User’s Guide, SB10-7037.

Planning Considerations

PR/SM is standard on all System z9 models.
Control Program Support
The maximum number of supported devices is limited by the control program. In planning an I/O configuration, installation management should be aware of the maximum number of devices supported by the control program run in each LP. Please see the user’s guide for the respective operating systems.
Hardware Configuration Definition (HCD) Support
HCD supports definition of the I/O configuration for an entire installation. It is required for parallel sysplex and LPAR clusters. A single I/O data file is created for the installation and used for multiple machines and I/O configuration data sets.
HCD supports:
v Up to 60 logical partitions (LPs) per central processing complex (CPC)
v Coupling facility configurations
v Multiple Image Facility (MIF)
v Dynamic CHPID Management (DCM) channel paths
v MCSS
v Assigning reserved logical partitions a meaningful name
Table 2-1. HCD Function Support
  HCD Function                                   z/OS           z/VM 5.3
  Define 60 logical partitions?                  Yes            Yes
  Define shared channel paths?                   Yes            Yes
  Define coupling facility channel paths?        Yes            Yes (Note 2)
  Define dynamically managed channel paths?      Yes            Yes (Note 2)
  Write IOCDSs remotely?                         Yes (Note 1)   No
  Access I/O devices on shared channel paths?    Yes            Yes
  Use software-only dynamic I/O?                 Yes            Yes
Notes:
1. HCD, running on z/OS allows you to remotely write IOCDSs from one CPC to another CPC that is powered on, LAN-attached, enabled for remote LIC update, and defined to the same Hardware Management Console.
2. HCD, running on z/VM, allows you to define coupling facility channel paths or managed channel paths for a z/OS LP but z/VM does not support coupling facility channel paths or dynamically managed channel paths for use by z/VM or guest operating systems.
For more information on using HCD with Multiple Image Facility, see
v z/OS Hardware Configuration Definition: User's Guide, SC33-7988
v z/OS Hardware Configuration Definition Planning, GA22-7525
v z/VM I/O Configuration, SC24-6100
z/VM Dynamic I/O Configuration Support
z/VM Support for the Coupling Facility: z/VM allows you to define configurations that use the coupling facility. However, z/VM does not support the coupling facility itself. Instead, z/VM's dynamic I/O configuration capability allows you to define resources that can be used by a z/OS system in another LP. For a summary of z/VM's support of dynamic I/O configuration, see Table 2-2.
z/VM Support for the Multiple Image Facility (MIF): You can use z/VM to define shared channel paths. For a summary of z/VM support of dynamic I/O configuration, see Table 2-2.
Table 2-2. z/VM Dynamic I/O Support for MIF and the Coupling Facility
  z/VM Function                                  Release 5.3, 5.2, 5.1
  Define shared channel paths?                   Yes
  Define coupling facility channel paths?        Yes (Note)
  Write IOCDSs remotely?                         No
  Access I/O devices on shared channel paths?    Yes
  Use software-only dynamic I/O?                 Yes
  Use hardware and software dynamic I/O?         Yes
  Use shared ESCON CTCs?                         Yes

Note: z/VM can define coupling facility channel paths for a z/OS LP but does not support real coupling facility channel paths for use by z/VM or guest operating systems.
Input/Output Configuration Program (IOCP) Support
You can create up to four IOCDSs. ICP IOCP is the required supported version. You can define as many as 60 LPs. For more information on ICP IOCP, see Input/Output Configuration Program User’s Guide, SB10-7037.
Characteristics of an IOCDS
The definitions for channel paths, control units, and I/O devices are processed by the IOCP and stored in an IOCDS. During initialization of the CPC, the definitions of a selected IOCDS are transferred to the hardware system area (HSA). The IOCDS is used to define the I/O configuration data required by the CPC to control I/O requests.
Channel paths in an IOCDS are assigned to one or more LPs. The characteristics of an IOCDS are:
v Using the IOCP RESOURCE statement, you define logical channel subsystems (CSSs) and the logical partitions that have access to the channel paths in a CSS.
v Using the IOCP RESOURCE statement, you can name logical partitions and assign MIF image ID numbers to them. MIF image ID numbers are necessary for ESCON CTC and FICON CTC definitions. You can also reserve unnamed logical partitions associated with specific MIF image ID numbers that can later be dynamically named.
v Using the IOCP CHPID statement, you can assign a channel path as reconfigurable or shared (see the sketch following this list).
v Using the IOCP CHPID statement, you can specify a Dynamic CHPID Management (DCM) channel path and the cluster to which the CHPID belongs. The CHPID is shareable among active LPs that have become members of the specified cluster.
v You can duplicate device numbers within a single IOCP input file, but the device numbers cannot be duplicated within an LP. See "IOCP Coding Specifications" on page 2-28.
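The following is a hedged sketch of how these statements fit together in an IOCP input file; the CSS numbers, partition names, MIF image IDs, CHPID numbers, and device numbers are illustrative assumptions only, and continuation and column conventions are omitted for readability:

   *  Two CSSs and the LPs (with MIF image IDs) that belong to each
   RESOURCE PARTITION=((CSS(0),(LP1,1),(LP2,2)),(CSS(1),(LP3,1)))
   *  A channel path shared by the LPs in CSS 0
   CHPID PATH=(CSS(0),20),SHARED,TYPE=CNC
   *  A reconfigurable channel path, initially assigned to LP1
   CHPID PATH=(CSS(0),30),PARTITION=((LP1),(LP2),REC),TYPE=CNC
   *  A control unit and devices reached through the shared channel path
   CNTLUNIT CUNUMBR=0100,PATH=((CSS(0),20)),UNITADD=((00,8)),UNIT=2105
   IODEVICE ADDRESS=(100,8),CUNUMBR=(0100),UNIT=3390

In this sketch, CHPID 20 is shared by the LPs in CSS 0, while CHPID 30 is reconfigurable: it is initially assigned to LP1, with LP2 in the candidate list so that the channel path can later be moved to it.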

Maximum Number of Logical Partitions

The maximum number of LPs you can define on a z9 EC is 60. The z9 BC S07 supports 30 LPs. The BC R07 only supports the activation of 15 LPs but you can power-on reset the R07 with an IOCDS containing up to 30 LPs. You cannot define an IOCDS with more than 30 LPs for a z9 BC. See “Determining the Size of the I/O Configuration” before making decisions about the number of LPs to define.
Determining the Size of the I/O Configuration
To determine the size of the current I/O configuration (number of control unit headers and devices), review the IOCDS Totals Report for the current IOCDS.
Maximum Size of the I/O Configuration
Limits within an I/O configuration exist for the following:
v Devices
v Control unit headers
v Physical control units

System z9 Models:
v The maximum number of control unit headers (CUHs) is 4096 per logical channel subsystem (CSS).
v The maximum number of physical control units is 8192.
v The maximum number of devices is 65280 per CSS for subchannel set 0.
v The maximum number of devices is 65535 per CSS for subchannel set 1.
I/O Configuration Size and Central Storage Resources: Central storage reserved for the HSA cannot be used by LPs. Increases in the size of the I/O configuration can decrease the amount of central storage available for use by LPs.
After creating an IOCDS that increases the size of the I/O configuration, make sure that all of the planned combinations of LPs can be activated.
To determine the amount of central storage available for assignment to LPs, refer to the Available Storage field under Central Storage Allocation in the Storage Information task (see Figure 5-1 on page 5-2). If the amount of central storage needed to activate the LPs is more than the amount of storage available, change the necessary LP definitions in the image profiles.
Guidelines for Setting Up the I/O Configuration
Follow these guidelines when setting up an I/O configuration.
1. Determine the number of LPs and in which logical channel subsystem (CSS) they should exist.
2. For dynamic I/O configurations, include any logical partitions for which you do not yet have a meaningful name. These logical partitions will be reserved until a subsequent dynamic I/O configuration change is made to assign them a name.
3. Determine if you want to move any channel paths among LPs. If you do, then these channel paths must be defined as reconfigurable in the IOCP CHPID statement. You cannot move a channel path from an LP in one CSS to an LP in another CSS.
4. Determine if you want to share any channel paths among LPs in the same CSS. If you do, then specify these channel paths as SHARED in the IOCP CHPID statement. Doing this helps reduce the number of channel paths configured to a physical control unit and device. Make sure that the channel path type supports being shared (see the example following this list).
5. Determine if you want to share any channel paths among LPs in different CSSs. If you do, then define these channel paths as spanned by specifying multiple CSS IDs in the PATH keyword of the IOCP CHPID statement. Doing this further helps reduce the number of channel paths configured to a physical control unit and device. Make sure that the channel path type supports being spanned.
6. Within each LP, configure primary and backup paths from separate channel adapter cards.
7. Within each LP, configure primary and backup paths from separate self-timed interfaces (STIs).
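The following fragment is only a sketch of CHPID statements corresponding to guidelines 3 through 5; the CHPID numbers, LP names, and channel path types are assumptions for illustration, and other keywords that a complete definition requires (for example, PCHID for physical channel paths) are omitted, as in the other fragments in this chapter:

CHPID PATH=(CSS(0),40),TYPE=CNC,PARTITION=(CSS(0),(LP1),(LP2),REC), . . .
CHPID PATH=(CSS(0),41),TYPE=FC,SHARED,PARTITION=(CSS(0),(LP1,LP2,LP3)), . . .
CHPID PATH=(CSS(0,1),42),TYPE=FC, . . .

The first statement defines a reconfigurable channel path with LP1 in its access list and LP2 in its candidate list; the second defines a channel path shared by LP1, LP2, and LP3 in CSS 0; the third specifies multiple CSS IDs in the PATH keyword, defining a spanned (and therefore shared) channel path.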
Recovery Considerations
When planning for recovery, consider the following I/O configuration guidelines.
v Assign channel paths to LPs as described in “Guidelines for Setting Up the I/O
Configuration” on page 2-5.
v Review the recoverability characteristics of the I/O configuration described in the
section “Shared Devices” on page 2-31.

Managing Logical Paths for ESCON and FICON Channels

This section describes logical paths, explains overall system considerations, and makes specific configuration recommendations.
Definition
A logical path is a logical connection between a control unit and an ESCON channel (TYPE=CNC or TYPE=CTC), a FICON channel (TYPE=FCV) attached to an ESCON Director, or a FICON channel (TYPE=FC). (FICON Express2 and FICON Express4 features do not support TYPE=FCV channels.) Logical paths are important because each sharing LP on a CPC requires that a logical path be established between an ESCON or FICON channel and a control unit for I/O operations to occur.
Logical paths do not exist for ESCON channels attached to a 9034 ESCON Converter Model 1 (TYPE=CBY or TYPE=CVC).
Logical paths do not exist for coupling facility channel paths (TYPE=CFP), integrated cluster bus channel paths (TYPE=CBP), InfiniBand internal coupling channel paths (TYPE=ICP), Open Systems Adapter channel paths (TYPE=OSC, TYPE=OSD or TYPE=OSE, TYPE=OSN), internal queued direct communication (HiperSockets) channel paths (TYPE=IQD), or fibre channel protocol channel paths (TYPE=FCP).
Control Unit Allocation of Logical Paths
Control units allocate logical paths to channels dynamically on a first-come-first-served basis. Control units do not manage the allocation of logical paths but instead allow channels to compete for logical paths until all of the control unit’s logical paths are used.
Why Manage Logical Paths?
The ESCON and FICON environments (the use of ESCON channels, FICON Express channels, FICON Express2 and FICON Express4 channels, and ESCON and FICON Directors) greatly enhance the connectivity potential for control units. In addition, you can define shared channels that can request additional logical paths. However, control units can only allocate a limited number of logical paths (see Table 2-3 on page 2-8) in relation to the number of logical paths that channels can request. In configurations where channels request more logical paths than a control unit can allocate, you must manage logical paths to help ensure that the I/O operations you intend take place.
The FICON Express2 and FICON Express4 SX and LX features support all of the functions of FICON Express, with the exception of support for channel path type FCV. FICON Express2 and FICON Express4, however, offer increased connectivity in the same amount of physical space and offer the possibility of increased performance. Up to 240 FICON Express2 and FICON Express4 channels on a z9 EC (and 112 channels on a z9 BC) can be employed to greatly expand connectivity and throughput capability. The FICON connectivity solution is based on industry-standard Fibre Channel technology and leverages our exclusive native FICON architecture. For detailed information, see Input/Output Configuration Program User’s Guide, SB10-7037.
Table 2-3. Logical Path Summary by Control Unit
Control Unit                          Physical Links    Logical Paths per Link    Maximum Logical Paths
2105 Storage Control
  2105                                32                64                        2048
3990 Storage Control
  3990-2                              16                Note 1                    16
  3990-3                              16                Note 1                    16
  3990-6                              16                Note 2                    128
9343 Storage Controller
  9343 D04                            4                 64                        64
  9343 DC4                            4                 64                        64
3590 Magnetic Tape Subsystem
  3590                                2                 64                        128
3490 Magnetic Tape Subsystem
  3490 D31, D32, D33, D34             1                 16                        16
  3490 C10, C11, C12                  1                 16                        16
  3490 A01                            2                 16                        32
  3490 A10                            4                 16                        64
  3490 A02                            4                 16                        64
  3490 A20                            8                 16                        128
2074 Console Support Controller
  2074 002                            2                 32                        64
3172 Interconnect Controller
  3172-1                              2                 Note 3                    16
3174 Establishment Controller
  3174 12L (SNA only)                 1                 Note 3                    8
  3174 22L (SNA only)                 1                 Note 3                    8
Notes:
1. 3990-2 and 3990-3 Storage Controls can have eight logical paths per side, two sides, with a minimum of one logical path per adapter.
2. The 3990-6 Storage Control can have 64 logical paths per side, two sides, with a maximum of 32 logical paths per adapter.
3. The 3172 Interconnect Controller and the 3174 Establishment Controller can have one logical path per control unit header (CUH) with a maximum of eight CUHs per link.
ESCON Example: Figure 2-1 shows a situation where 20 ESCON channel paths
routed through an ESCON Director (ESCD) each attempt to establish a logical path to a 3990 Storage Control. The 3990 will allocate all 16 of its logical paths among the 20 channel paths dynamically on a first-come-first-served basis. Clearly, some of the ESCON channel paths will not operate in this configuration. Managing logical paths can provide a way to alleviate this situation.
[Figure: five LPs (LP1–LP5) on one CPC, each with four unshared ESCON (CNC) channels, connect through an ESCD to a 3990 with two clusters. Each of the 20 channels attempts to establish one logical path.]
Figure 2-1. An ESCON Configuration that Can Benefit from Better Logical Path Management
MIF Example: Figure 2-2 shows an ESCON shared channel configuration on an
MIF-capable CPC. In this example, all five LPs share each of four ESCON channels attached to a 3990. Each shared ESCON channel represents five channel images corresponding to the five LPs. Each channel image requests a logical path to the
3990. As in Figure 2-1 on page 2-9, a total of 20 logical paths is requested from the 3990, which can only satisfy 16 of the requests. Again, you can avoid this situation by managing logical paths.
[Figure: five LPs (LP1–LP5) in LPAR mode share four ESCON channels attached to a 3990 with two clusters. Each of the 4 shared ESCON channels attempts to establish 5 logical paths.]
Figure 2-2. A Shared ESCON Configuration that Can Benefit from Better Logical Path Management

Managing the Establishment of Logical Paths

You can manage the establishment of logical paths between channels and control units. With proper planning, you can create I/O configuration definitions that allow control units in the configuration to allocate logical paths for every possible request made by channels in either of the following ways:
v Create a one-to-one correspondence between the logical path capacity of all
control units in the physical configuration and the channels attempting to request them.
v Create I/O configurations that can exceed the logical path capacity of all or some
of the control units in the physical configuration but, at the same time, provide the capability to selectively establish logical connectivity between control units and channels as needed.
This capability can be useful or even necessary in several configuration scenarios. See “Recommendations” on page 2-12.
Logical Path Considerations
You can better understand how to manage the establishment of logical paths by understanding the following:
v Control unit considerations
v Connectivity considerations
v Channel configuration considerations
v 9035 ESCON Converter considerations
Control Unit Considerations: Consider the following factors concerning the
allocation of logical paths by control units:
v Control units allocate logical paths dynamically on a first-come-first-served basis.
Control units do not manage the allocation of logical paths but instead allow channels to compete for logical paths until all of the control unit’s logical paths are used.
v Control units vary in the number of logical paths they support. See Table 2-3 on
page 2-8.
Connectivity Considerations: ESCON and FICON system hardware, CPCs, and ESCON and FICON Directors significantly affect the volume of logical path requests to a control unit as follows:
v Control units can attach to one or more ports on a Director or to additional ports
on other Directors. Each Director port can dynamically connect to many other ports to which channels requesting logical paths are attached.
v For CPCs, each logical partition attaching to the same control unit will compete
for the control unit’s logical paths.
v In a configuration where control units are shared by different CPCs, I/O
configuration definitions for individual control units are not coordinated automatically among the IOCDSs of the different CPCs. Each CPC competes for a control unit’s logical paths.
v Shared channels require the establishment of a logical path for each channel
image corresponding to an active LP sharing the channel. This can significantly increase the number of logical paths that a single channel requests.
Channel Configuration Considerations: The following configuration rules determine how logical paths are established for ESCON and FICON channels.
v A channel initially attempts to establish logical paths:
– When the LP is activated.
– When configured online (if previously configured offline).
If you perform POR, only those channels configured online to LPs that are activated at POR will attempt to establish logical paths. Shared channels will attempt to establish logical paths only for those activated LPs with the channel configured online.
v A channel cannot establish a logical path or will have its logical path removed when:
– An LP is deactivated. A shared channel will continue to operate for any other remaining activated LPs to which it is defined. Logical paths to those LPs will remain established.
v A shared channel cannot establish a logical path to a control unit for an LP that cannot access any of the I/O devices on the control unit. In IOCP, the PARTITION or NOTPART keyword on the IODEVICE statement specifies which LPs can access a device.
v A channel that cannot initially establish a logical path can reattempt to establish a logical path if the channel detects or is notified of:
– A change in the state of a control unit
– A change in the state of a link or port
– A dynamic I/O configuration change that frees previously allocated logical paths
v A channel cannot establish a logical path or will have its logical path removed if:
– The Director that connects the channel to the control unit blocks either the channel port or control unit port used in the path.
– The Director that connects the channel to the control unit prohibits the dynamic connection or communication between the channel port and the control unit port used in the path.
– A link involved in the path fails or is disconnected. When a shared channel is affected by a port being blocked, a dynamic connection or communication being prohibited, or a link failing or being disconnected, each LP sharing the channel will be equally affected and all logical paths using the port or link (regardless of which LP they are associated with) will be removed.
– The channel is configured offline. When a shared channel is configured offline for an LP, it will continue to operate for any other LP that has the channel configured online. Logical paths to these other logical partitions will remain established.
– Power to the channel, control units, or Directors in the configuration is turned off.
9035 ESCON Converter Considerations: For the 3990, the 9035 configuration requires special Vital Product Data (VPD) specifications. When an ESCON Director blocks a port supporting a 9035 or prohibits a dynamic connection between the channel attached to the 9035 and the control unit, the logical path is removed from the 3990. This path is now available to other channels, either to other 9035s configured in the VPD or to channels not attached to a 9035, that attempt to establish a logical path. However, the 3990 will always attempt to establish a logical path to the 9035s configured in the VPD before it allows a channel not attached to a 9035 to have a logical path.
Recommendations
Creating I/O configuration definitions where channels could request more logical paths to control units than the control units could support can be useful in the following scenarios:
v Workload balancing
When a system image becomes overloaded, you may need to reassign a workload and the necessary logical paths (for example, its tape or DASD volumes, a set of display terminals, or a set of printers) to another system image that has available capacity.
v Backup
When an outage occurs, you can move the critical application set (the program and associated data) and the necessary logical paths to a backup or standby CPC. This process is simple if the CPCs have identical I/O configurations.
In I/O configurations where channels can request more logical paths to control units than the control units can support, you can manage how logical paths are established by:
v Deactivating unneeded LPs.
v Configuring offline unneeded channels. For shared channels, configure offline unneeded channels on an LP basis.
v Limiting the number of LPs that can access the I/O devices attached to a control
unit when the control unit attaches to shared channels. In IOCP, specify the PARTITION or NOTPART keyword on the IODEVICE statement for every I/O device attaching to a control unit so that 1 or more LPs cannot access any of the I/O devices.
v Using the Director to block ports or prohibit dynamic connections or
communication between ports.
v Combinations of the above.
To better understand how you can manage logical paths using these methods, consider the following examples.
Deactivating Unneeded Logical Partitions: Deactivating unneeded LPs can
prove useful for managing how logical paths are established on CPCs in some situations.
The system establishes logical paths only when an LP is activated. Deactivating an LP results in removal of those logical paths associated with the LP. This can greatly reduce the number of logical paths requested by the system at any given time.
In Figure 2-3 on page 2-14, if all five of the LPs each share all four of the ESCON channels and all of the LPs are activated, the 3990 would be requested to establish five logical paths for each of the four shared ESCON channels (or a total of 20 logical paths). Because the 3990-3 only supports 16 logical paths, you will need to manage how logical paths are established to help ensure the I/O connectivity you require.
For example, if you used LP4 and LP5 as test LPs that did not need to be active concurrently, you could reduce the number of logical paths requested by four by not activating either LP4 or LP5. In this case, four LPs (LP1, LP2, LP3, and LP4 or LP5) configured to four shared ESCON channels would request a total of 16 logical paths. Later, you could transfer logical paths between LP4 and LP5 by first deactivating one LP to remove its logical paths, then activating the other LP to use the freed logical paths.
[Figure: five LPs; LP1, LP2, and LP3 are production LPs and LP4 and LP5 are test LPs, all sharing CHPIDs 30, 31, 32, and 33 to a 3990 with two clusters. LP1, LP2, and LP3 are activated; of LP4 and LP5, one is activated and one is deactivated. Each shared ESCON channel establishes 4 logical paths to the 3990.]
Figure 2-3. Deactivating Unneeded Logical Partitions
Configuring Offline Unneeded Channels or Shared Channels on an LP Basis:
You can configure offline unneeded channels or shared channels on an LP basis to manage how logical paths are established. In Figure 2-4 on page 2-15, all five LPs need to be active concurrently. If all five LPs had each of the four shared ESCON channels configured online, 20 logical paths (four logical paths for each of the five LPs) would be requested, exceeding the 3990’s logical path capacity.
However, if LP4 or LP5 (both test LPs) did not require four channel paths each to the 3990, you could configure offline two of the four channel images used by LP4 and two of the four channel images used by LP5, reducing the total number of logical paths requested from 20 to 16 and matching the 3990’s logical path capacity.
[Figure: five LPs sharing CHPIDs 30, 31, 32, and 33 to a 3990-3 with two clusters. For LP1, LP2, and LP3, CHPIDs 30, 31, 32, and 33 are all configured online. For LP4, CHPIDs 30 and 32 are configured offline. For LP5, CHPIDs 31 and 33 are configured offline. Each shared ESCON channel establishes 4 logical paths to the 3990.]
Figure 2-4. Configuring Offline Unneeded Channels or Shared Channels on an LP Basis
Note: Because the 3990-3 supports only eight logical paths per cluster, you would
need to configure offline the channel images so that the number of logical paths requested from each cluster remains at eight.
It is also possible to manage how logical paths are established by using IOCP or Hardware Configuration Definition (HCD) to create I/O configuration definitions that:
v Define a subset of LPs that will have their corresponding channels configured
online at power-on reset (POR) (CHPID access list)
v Allow LPs to configure online their channels at a later time (CHPID candidate list)
Use IOCP or HCD to define the access lists and candidate lists for channel paths to determine the configurability of a channel to an LP. This capability exists for both unshared and shared channels and can help automate and establish the configuration in Figure 2-4. Additionally, HCD allows you to dynamically change the access list and candidate list for a channel path.
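As a hedged illustration of access and candidate lists (the CHPID number and LP names are assumptions), a CHPID statement such as:

CHPID PATH=(CSS(0),30),TYPE=CNC,SHARED,PARTITION=(CSS(0),(LP1,LP2,LP3),(LP4,LP5)), . . .

would give LP1, LP2, and LP3 the channel path online at activation (the access list), while LP4 and LP5 could configure it online later (the candidate list, which always also includes the access list).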
Defining Devices to a Subset of Logical Partitions: You can limit I/O device
access from LPs to I/O devices assigned to shared channels by using IOCP or HCD to specify device candidate lists. By defining devices attached to a control unit to a subset of LPs, you can manage which LPs will attempt to establish logical paths to the control unit through a shared channel.
If you define no devices to a control unit from a particular LP, the shared channel associated with the LP will not attempt to establish a logical path. However, if there is at least one device defined to the control unit for the shared channel associated with a particular LP, the shared channel for the LP will attempt to establish a logical path to the control unit for the LP.
In Figure 2-5 on page 2-17, LP access to a series of 3174s is managed through use of the device candidate lists for the I/O devices attached to the control units. The shared ESCON channel will attempt to establish only one logical path to each of the 3174s even though 5 LPs share the channel. This is useful because the 3174 in non-SNA mode only supports one logical path.
In the example, the channel only attempts to establish a logical path for LP1 to the 3174 defined as control unit 10 because only LP1 has a device defined to that control unit. Similarly, only LP2 can access the 3174 defined as control unit 11, only LP3 can access the 3174 defined as control unit 12, only LP4 can access the 3174 defined as control unit 13, and only LP5 can access the 3174 defined as control unit 14.
Partial IOCP Deck for the Configuration: The following is a partial IOCP deck for
the example in Figure 2-5 on page 2-17.
CHPID PATH=30,SHARED
CNTLUNIT CUNUMBR=10,PATH=30
IODEVICE ADDRESS=VVVV,CUNUMBR=10,PART=LP1
CNTLUNIT CUNUMBR=11,PATH=30
IODEVICE ADDRESS=VVVV,CUNUMBR=11,PART=LP2
CNTLUNIT CUNUMBR=12,PATH=30
IODEVICE ADDRESS=VVVV,CUNUMBR=12,PART=LP3
CNTLUNIT CUNUMBR=13,PATH=30
IODEVICE ADDRESS=VVVV,CUNUMBR=13,PART=LP4
CNTLUNIT CUNUMBR=14,PATH=30
IODEVICE ADDRESS=VVVV,CUNUMBR=14,PART=LP5
[Figure: five LPs share CHPID 30, a shared ESCON channel, through an ESCD to five 3174s in non-SNA mode defined as control units 10 through 14; only LP1 has a device on control unit 10, only LP2 on control unit 11, only LP3 on control unit 12, only LP4 on control unit 13, and only LP5 on control unit 14.]
Figure 2-5. Defining Devices to a Subset of Logical Partitions
In Figure 2-6, a 3174 in SNA mode is defined as five control unit headers (CUHs). Because each 3174 CUH supports a maximum of one logical path, it is equally important in this example that the shared channel only attempts to establish a single logical path to each 3174 CUH.
[Figure: five LPs share CHPID 30 to a single 3174 in SNA mode with five control unit headers (LC1 through LC5); each LP accesses a different control unit header.]
Figure 2-6. Defining Devices to a Subset of Logical Partitions
Even though 5 LPs share the ESCON channel, the channel will only attempt to establish a logical path for LP1 to CUH 0 because only LP1 has a device defined on that CUH. Similarly, only LP2 can access CUH 1, only LP3 can access CUH 2, only LP4 can access CUH 3, and only LP5 can access CUH 4.
Partial IOCP Deck for the Configuration: The following is a partial IOCP deck for
the example in Figure 2-6.
CHPID PATH=30,SHARED
CNTLUNIT CUNUMBR=10,PATH=30,CUADD=0
IODEVICE ADDRESS=VVVV,CUNUMBR=10,PART=LP1
CNTLUNIT CUNUMBR=11,PATH=30,CUADD=1
IODEVICE ADDRESS=WWWW,CUNUMBR=11,PART=LP2
CNTLUNIT CUNUMBR=12,PATH=30,CUADD=2
IODEVICE ADDRESS=XXXX,CUNUMBR=12,PART=LP3
CNTLUNIT CUNUMBR=13,PATH=30,CUADD=3
IODEVICE ADDRESS=YYYY,CUNUMBR=13,PART=LP4
CNTLUNIT CUNUMBR=14,PATH=30,CUADD=4
IODEVICE ADDRESS=ZZZZ,CUNUMBR=14,PART=LP5
Using a Director to Block Ports or Prohibit Dynamic Connections or Communication: When ESCON or FICON Directors are used in an I/O
configuration, you can prevent channels from establishing logical paths or can remove established logical paths by either blocking a Director port or by prohibiting a dynamic connection or communication between two Director ports.
In terms of logical path removal, blocking a Director port connected to a channel produces a similar outcome to configuring offline a channel or all channel images of a shared channel. Blocking a Director port connected to a control unit prevents any logical path from being established to the attached control unit port.
You can more selectively prevent logical paths from being established by prohibiting a dynamic connection between two ESCON Director ports or dynamic communication between two FICON Director ports instead of blocking a Director port. By prohibiting a dynamic connection or communication between two Director ports, you can control which channels have connectivity to a control unit port rather than blocking all connectivity to the control unit port.
Prohibiting a dynamic connection or communication between two Director ports affects all channel images of a shared channel. The system will not establish any logical paths to the attached control unit port from any of the LPs that share the ESCON or FICON channel.
You can prohibit dynamic connections or communication between Director ports by modifying the active configuration table. The active configuration table specifies the connectivity status of a port relative to the other ports on the Director. When a Director is first installed, it has a default configuration that allows any-to-any connectivity (every port can dynamically connect or communicate with every other port). If you require a different configuration than this, you can define and designate a different table to be the default configuration used at power-on of the Director. This table will allow only those dynamic connections or communication necessary to establish the logical paths the configuration requires. Dynamic connections or communication necessary to establish other logical paths (for example, those necessary for backup configurations) would be prohibited by the default configuration of the Director.
Figure 2-7 on page 2-20 shows an example of prohibiting dynamic connections. CPC1, CPC2, CPC3, and CPC4 are all production systems and CPC5 is a backup system to be used only if one of the other CPCs fail. If the default configuration used by the ESCON Director (ESCD) prohibits all dynamic connections between CPC5 and the 3990, the 3990 will only be requested to establish a total of 16 logical paths from the channels on CPC1, CPC2, CPC3, and CPC4. If one of four
production CPCs fails, you could transfer the logical paths from the failing CPC to the backup CPC by prohibiting the dynamic connection to the failed CPC and allowing the dynamic connection to the backup CPC.
If a control unit is connected to more than one Director, it is necessary to coordinate allocation of the control unit’s logical paths across all of the Directors. You can use the System Automation for z/OS (SA z/OS) to dynamically manage the Directors and logical paths by sending SA z/OS commands to reconfigure one or more Directors. SA z/OS then sends the appropriate operating system Vary Path requests. SA z/OS can also provide coordination between operating systems when logical paths are removed from one system and transferred to another system as a result of blocking Director ports or prohibiting Director dynamic connections or communication.
[Figure: production CPCs CPC1, CPC2, CPC3, and CPC4 and backup CPC5, each with four ESCON channels, connect through an ESCD to a 3990 with two clusters. Dashed lines indicate dynamic connections between ports that were initially prohibited (those between CPC5 and the 3990).]
Figure 2-7. Using the ESCD to Manage Logical Paths by Prohibiting Dynamic Connections

Shared Channel Overview

MIF allows channels to be shared among multiple LPs. Shared channels are configured to an LP giving the LP a channel image of the shared channel that it can use. Each channel image allows an LP to independently access and control the shared channel as if it were a physical channel assigned to the LP.
By providing the logical equivalent of multiple physical channels dedicated to multiple LPs, a shared channel can reduce hardware requirements without a corresponding reduction in I/O connectivity. This reduction in hardware requirements can apply to physical channels, Director ports, and control unit ports, depending on the configuration.
MIF Requirements
To take advantage of MIF, you need:
v An ESCON-capable operating system:
– z/OS
– z/VM
– z/VSE
– AIX/ESA
– TPF
– z/TPF
v ESCON channels operating in native mode (CNC) or channel-to-channel mode
(CTC), FICON channels attached to an ESCON director (FCV), FICON channels (FC), coupling facility channels (ICP, CBP, CIB, or CFP), open systems adapter channels (OSC, OSD, OSE or OSN), internal queued direct communication (HiperSockets) channels (IQD), and fibre channel protocol channels (FCP).
Note: ESCON channels that attach to a 9034 ESCON Converter Model 1 (CVC
or CBY) cannot be shared among LPs.
v IBM ESCON-capable or FICON-capable control units, or fibre channel switches.
Understanding ESCON and MIF Topologies
This section describes the following I/O topologies:
v Point-to-point topology
v Switched point-to-point topology
v MIF channel sharing topology
Point-to-Point Topology: The traditional point-to-point topology requires a unique
path between any two points that communicate. A channel and CU adapter are required between each communication point. In a point-to-point configuration, a channel can communicate with only one control unit. A control unit that communicates with more than one channel requires a separate control unit adapter interface to each channel. The configuration limitations in a point-to-point topology can lead to inefficient use of channels and control units. See Figure 2-8 on page 2-22 for an example of the point-to-point topology.
Switched Point-to-Point Topology: Switched point-to-point topologies eliminate
the disadvantages of point-to-point and multidrop topologies. Switched point-to-point topology enables switching between any channel path and control unit adapter when using an ESCON or FICON Director. By using the Director, paths can also be shared by multiple points. The Director enables one channel to switch to multiple control units, or one control unit to switch to multiple channels. Paths necessary to satisfy connectivity requirements in a point-to-point topology are not required in the switched point-to-point topology. This can reduce the number of channels and
control unit interfaces required without a corresponding reduction in I/O activity. See Figure 2-8 for an example of switched point-to-point topology.
MIF Channel Sharing Topology: MIF further improves control unit connection
topologies for CPCs with multiple LPs. MIF enables many LPs to share a physical channel path. MIF can reduce the number of channels and control unit interfaces required without a corresponding reduction in I/O connectivity. See Figure 2-8 for and example of MIF channel sharing topology.
MIF Performance Planning Considerations
Your installation can take advantage of MIF performance enhancements offered by:
v Understanding and utilizing I/O-busy management enhancements
v Planning for concurrent data transfer
v Understanding examples of MIF consolidation
Understanding and Utilizing I/O-Busy Management Enhancements: This section shows how the various ESCON or FICON and MIF topologies offer improvements in managing I/O-busy conditions. Figure 2-8 compares the point-to-point, switched point-to-point, and MIF channel sharing topologies.
[Figure: three panels comparing topologies in PR/SM LPAR mode. In the point-to-point topology, busy encounters occur as control unit (CU) busy conditions; in the switched point-to-point topology, they occur as switch port busy conditions; with MIF channel sharing, they occur as channel busy conditions.]
Figure 2-8. Progression of Busy Condition Management Improvements
Point-to-Point Topologies: Concentrate I/O attempts at the control unit level and
are distance dependent. At the time of a control unit busy encounter, the control unit must present control unit busy status to the channel. Once the control unit is free, it presents a control unit no longer busy status to the channel. This process of presenting status to the channel requires control unit processing and many trips over the control unit to channel link.
Switched Point-to-Point Topologies: Concentrate I/O attempts within the Director
and therefore encounter switch port busies. The processing of switch port busies
does not require any control unit involvement. Busies are handled by the ESCON or FICON Director. Therefore, the control unit is effectively relieved of handling busy conditions and is able to handle more I/O requests. Because switch port busies require fewer trips over the ESCON or FICON connection link, they are less sensitive to increased distances than control unit busy encounters.
MIF Channel Sharing: Moves busy management back into the channel subsystem,
providing the most efficient management of busy conditions. Because multiple LPs access the same physical channel, I/O attempts are concentrated at the channel level. A CU busy or a switch port busy is handled as a channel busy.
Planning for Concurrent Data Transfer: Before you can consolidate channels,
you must be aware of the channel requirements of the particular control units you are configuring. Table 2-4 shows the maximum recommended ESCON channels for a single system going to a control unit. The number of channels needed is independent of the number of LPs on a system. The number of channels is based on the number of concurrent data transfers the control unit is capable of. Although the recommended number of channels satisfies connectivity and performance requirements, additional channels can be added for availability.
Table 2-4. MIF Maximum Channel Requirements
Control Unit Type                     Maximum Channels (see note)
9343 Storage Controller
  9343 D04                            4
  9343 DC4                            4
3990 Storage Control
  3990-2                              4
  3990-3                              4
  3990-6                              4
3490 Magnetic Tape Subsystem
  3590                                2
  3490 D31, D32, D33, D34             1
  3490 C10, C11, C12                  1
  3490 A01                            1
  3490 A10                            1
  3490 A02                            2
  3490 A20                            2
2074 Console Support Controller
  2074 002                            2
3174 Establishment Controller
  3174 12L (SNA only)                 1
  3174 22L (SNA only)                 1
Note: The recommended maximum channels given in this table are based on performance
and connectivity requirements and do not include channels for availability.
Understanding Examples of MIF Consolidation: The following examples provide
some general guidelines to show how MIF can help you consolidate and use hardware resources more efficiently:
ESCON Configurations: Figure 2-9 on page 2-24 shows how four shared ESCON
channels can replace 16 unshared (dedicated or reconfigurable) ESCON channels and use 12 fewer control unit ports.
[Figure: two panels. Without shared ESCON channels, four LPs (LP1–LP4) each use four unshared ESCON channels (16 channels in all) to a 3990 with two clusters. With shared ESCON channels, the same four LPs in LPAR mode share four ESCON channels to the 3990.]
Figure 2-9. Consolidating ESCON Channels and ESCON Control Unit Ports
ESCD Configurations: Figure 2-10 on page 2-25 shows how shared ESCON
channels can reduce ESCD port requirements. In this example, two shared ESCON channels replace 10 unshared (dedicated or reconfigurable) ESCON channels and use eight fewer ESCD ports without a reduction in I/O connectivity.
[Figure: two panels. Without shared ESCON channels, five LPs (LP1–LP5) use 10 unshared ESCON channels through an ESCD to a 3990 with two clusters. With shared ESCON channels, the same five LPs in LPAR mode share two ESCON channels through the ESCD to the 3990.]
Figure 2-10. Consolidating ESCON Channels and ESCD Ports
ESCON CTC Configurations: Figure 2-11 shows how shared ESCON channels
can reduce the ESCON channel requirements for ESCON CTC configurations. In this example, the CPC requires CTC communications among all its LPs.
[Figure: two panels. Without shared ESCON channels, CTC communication among LP1 through LP5 on one CPC uses unshared CTC and CNC channel pairs connected through an ESCD. With shared ESCON channels, two shared CTC/CNC channel pairs provide the same CTC communication among the five LPs; the two channel pairs (CTC to CNC) provide redundancy to enhance availability. Similar channel savings can be achieved with multiple MIF processors.]
Figure 2-11. Consolidating ESCON Channels Used for ESCON CTC Communications
By using two shared ESCON CTC/CNC pairs (4 shared ESCON channels), you can:
v Replace five unshared ESCON CTC/CNC pairs (10 unshared ESCON channels)
and the ESCD used to connect them
v Provide full redundancy
I/O connectivity is maintained while hardware requirements (channels and an ESCD) are reduced.
In situations where ESCON CTC communication is required among LPs that exist on two or more CPCs, shared channels can further reduce channel and other hardware requirements and their associated cost.
ESCON CTC configurations are well-suited to take advantage of the consolidation benefits associated with shared channels. CTC/CNC and CTC/FCV pairs used for ESCON CTC communications have no limitation on the number of logical paths that can be established between them. The only limitations are the number of control units that can be defined for an ESCON CTC channel and the performance expectations you determine for your configuration.
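As a rough, hedged sketch of how one such shared CTC/CNC pair might be defined (the CHPID and device numbers, ESCD port, control unit number, and CUADD value are assumptions for illustration, and only one side of the pair is shown; for MIF CTC definitions the CUADD value on an SCTC control unit identifies the MIF image ID of the LP at the other end of the connection):

CHPID PATH=(F0),TYPE=CTC,SWITCH=03,SHARED . . .
CHPID PATH=(F1),TYPE=CNC,SWITCH=03,SHARED . . .
CNTLUNIT CUNUMBR=FE01,PATH=(F0),LINK=(E1),CUADD=1,UNITADD=((00,4)),UNIT=SCTC
IODEVICE ADDRESS=(4000,4),CUNUMBR=FE01,UNIT=SCTC

A corresponding SCTC control unit and device definition would be coded on the CNC channel path for each LP to be reached through the CTC channel.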
Infrequently Used ESCON or FICON Control Units: ESCON or FICON control units
not frequently used can make use of shared channels. You can attach such a control unit to a shared channel that is also attached to other, more frequently used control units without adding greatly to the channel utilization of the shared channel. The control unit within the Director is a good example of this.
Notes:
1. You cannot define a control unit (or multiple control units with common I/O devices) to a mixture of shared and unshared channel paths in the same IOCDS.
2. You cannot define more than one control unit with the same CUADD to the same link on a Director (or point-to-point) if the attaching CHPIDs are shared.

Unshared ESCON or FICON Channel Recommendations

Not all ESCON or FICON configurations benefit from the use of shared channels. There are some configurations where use of an unshared channel is more appropriate. Consider the following:
v Logical Path Limitations of the Control Unit
While many ESCON control units can communicate with multiple LPs at a time using multiple logical paths, there are some ESCON-capable control units that can only communicate with one LP at a time. For example, consider the 3174 Establishment Controller (Models 12L and 22L). When configured in non-SNA mode, the 3174 establishes only one logical path at a time. A shared channel would offer no connectivity benefit in this situation. However, if you defined an unshared, reconfigurable channel to the 3174, it would allow you to dynamically reconfigure the channel for any LP that had to communicate with the 3174 at a given time.
v Channel Utilization
Typically, the channel utilization of shared channels will be greater than unshared channels.
If you use shared channels to consolidate channel resources, you must consider the channel utilization of all the channels you consolidate. The channel utilization of a shared channel will roughly equal the sum of the channel utilizations of each unshared channel that it consolidates. If this total channel utilization is high enough to degrade performance, you should consider using unshared channels or a different configuration of shared and unshared channels to meet your connectivity needs.
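For example, consolidating four unshared channels that each run at about 15% utilization onto a single shared channel would drive that shared channel to roughly 60% utilization, which may be high enough to warrant leaving some of the traffic on unshared channels.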

Dynamically Managed CHPIDs

A key aspect of the Intelligent Resource Director (IRD) provided by z/OS’s WLM component is Dynamic CHPID Management (DCM). DCM provides the ability to have the system automatically move the available channel bandwidth to where it is
most needed. CHPIDs identified as managed in the IOCDS (via CHPARM and IOCLUSTER keywords) are dynamically shared among z/OS images within an LPAR cluster.
Prior to DCM, available channels had to be manually balanced across I/O devices in an attempt to provide sufficient paths to handle the average load on every controller. Natural variability in demand means that some controllers at times have more I/O paths available than they need, while other controllers possibly have too few. DCM attempts to balance the responsiveness of the available channels while maximizing the utilization of installed hardware. Fewer overall channels are required because the DCM CHPIDs are more fully utilized. RMF provides a report showing the average aggregate utilization for all managed channels.
By using DCM, you now only have to define a minimum of one nonmanaged path and up to seven managed paths to each control unit (although a realistic minimum of two nonmanaged paths is recommended), with dynamic channel path management taking responsibility for adding additional paths as required. For more information on defining and using DCM, including detailed examples, see the z/OS Intelligent Resource Director, SG24-5952.

IOCP Coding Specifications

ICP IOCP can only generate an LPAR IOCDS. No IOCP invocation parameter is required to generate an LPAR IOCDS.
IOCP Statements for ICP
The RESOURCE statement is used to specify all the logical partition names defined in a machine configuration. To plan for growth in the number of logical partitions in the configuration, one or more asterisks (*) may be used to specify that one or more logical partitions are to be reserved along with their associated CSS and MIF image IDs. A reserved LP can only be specified for a dynamic-capable IOCDS. A dynamic-capable IOCDS is built when using HCD on z/OS or z/VM or specifying IOCP CMS utility option DYN for z/VM. Space in the hardware system area (HSA) is allocated for reserved LPs but cannot be used until a dynamic I/O configuration change is made to assign a name to the LP. The following rules apply when specifying reserved LPs:
v A reserved LP must have a user-specified MIF image ID
v A reserved LP cannot have any channel paths assigned to it
v An IOCDS cannot contain only reserved LPs. At least one LP must be defined with a name.
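As an illustration of these rules, a RESOURCE statement that defines two named LPs and reserves a third might look like the following (the LP names, MIF image IDs, and CSS ID are assumptions for illustration):

RESOURCE PARTITION=((CSS(0),(LP1,1),(LP2,2),(*,3)))

LP1 and LP2 are named LPs with MIF image IDs 1 and 2 in CSS 0; the asterisk reserves an unnamed LP with MIF image ID 3 that can be named later through a dynamic I/O configuration change.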
Dynamic CHPID Management (DCM) channel paths defined for a given LPAR cluster are shareable among all active LPs that have joined that cluster. Other than DCM channel paths, you must assign each channel path to a logical partition in an LPAR IOCDS. For each DCM channel path, ICP requires the CHPARM keyword have a value of 01 and the IOCLUSTER keyword on a CHPID statement. All other channel paths require the PART|PARTITION, NOTPART, or SHARED keyword on all CHPID statements unless a channel path is defined as spanned by specifying multiple CSS IDs in the PATH keyword of the IOCP CHPID statement.
Use the CHPARM and IOCLUSTER keywords on the CHPID statement to specify channel paths reserved for the use of a particular LPAR cluster. A DCM channel path becomes available to a candidate logical partition when the LP is activated and joins the specified cluster.
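As a hedged sketch (the CHPID number, cluster name, switch number, and channel path type are assumptions for illustration, and other required keywords are omitted), a DCM-managed channel path for an LPAR cluster named PLEX1 might be coded as:

CHPID PATH=(CSS(0),50),TYPE=CNC,SWITCH=03,CHPARM=01,IOCLUSTER=PLEX1, . . .

Because IOCLUSTER implies a null access list, no LP has this channel path online at activation; it becomes available to an LP when that LP is activated and joins the PLEX1 cluster.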
Use the CHPID PART|PARTITION, NOTPART, and SHARED keywords to determine which:
v Channel paths are assigned to each LP
v Devices and control units are shared among LPs
v Channel paths are reconfigurable
v Channel paths are shared
Use the CHPID CPATH keyword to connect two internal coupling channels.
Use the CHPID PATH keyword to define a channel path as spanned to multiple CSSs. Spanned channel paths are also shared channel paths.
DCM channel paths are implicitly shared. Use of the IOCLUSTER keyword implies a null access list (no logical partition has the channel path brought online at activation) and a candidate list of all defined logical partitions. The IOCLUSTER keyword is mutually exclusive with the PART|PARTITION and NOTPART keywords.
All LP names that you specify in the CHPID statements must match those specified in the RESOURCE statement. An IOCDS must have at least one LP name defined.
PARTITION={(CSS(cssid),{name|0}[,REC])|
           (CSS(cssid),access list)|
           (CSS(cssid),(access list)[,(candidate list)][,REC])|
           ((CSS(cssid),(access list)[,(candidate list)]),...)}
NOTPART={(CSS(cssid),access list)|
         ((CSS(cssid),(access list)[,(candidate list)]),...)}
IOCLUSTER=cluster_name
SHARED
CPATH=(CSS(cssid),chpid number)
Where:
name specifies the name of the LP that has authority to access the
CHPID. The LP name is a 1–8 alphanumeric (0–9, A–Z) character name that must have an alphabetic first character. Special characters ($, #, @) are not allowed. A reserved LP cannot have any channel paths assigned to it.
The following words are reserved and you cannot use them as LP names:
PHYSICAL REC SYSTEM PRIMnnnn (where nnnn are digits)
ICP IOCP supports a maximum of 60 LP names for the CPC.
cluster_name specifies the name of an LPAR cluster that has authority to access
the specified DCM CHPID. The name of the LPAR cluster is a one- to eight- alphanumeric character name (0-9, A-Z) that must have an alphabetic first character. Special characters ($, #, @) are not allowed.
REC specifies that the CHPID is reconfigurable. A reconfigurable CHPID
must have an initial access list of one LP name. Its candidate list must consist of one or more LP names.
access list specifies the LPs that have initial access to the CHPID at the
completion of the initial power-on reset. An LP name may only appear once in an access list.
You can specify that no LPs will access the channel path following LP activation for the initial POR of an LPAR IOCDS. Specifying 0 indicates a null access list.
candidate list specifies the LPs that have authority to access the CHPID. Any LP
that is not in a CHPID's candidate list cannot access the CHPID. You may specify as many LP names as your CPC supports.
However, the number of unique LP names specified in both the access list and candidate list may not exceed the number of LPs your CPC supports.
If you specify the candidate list, you do not need to specify again the LP names specified in the initial access list. The initial access list is always included in the candidate list.
An LP name can only appear once in a candidate list. If the candidate list is not specified, it defaults to all LPs in the configuration for reconfigurable and shared channels.
Note: It is highly recommended that a peer mode CHPID (CBP, CFP, CIB, or ICP) have at most one coupling facility LP specified in its initial access list in order to avoid confusion on subsequent LP activations. A peer mode CHPID can be online to only one coupling facility LP at a time.
Using the SHARED keyword specifies that the channel paths on the CHPID statement are shared. More than one LP, at the same time, can access a shared CHPID. When CHPIDs are not shared, only one LP can access it. Although you can dynamically move a reconfigurable CHPID between LPs, it can only be accessed by 1 LP at any given time. Only CNC, CTC, CBP, CIB, CFP, ICP, OSC, OSD, OSE, OSN, FC, FCV, FCP, and IQD channel paths (TYPE keyword) can be shared.
The CPATH keyword is only valid for ICP and CIB channel paths (TYPE keyword) and required for all ICP and CIB definitions. CPATH specifies the connection between 2 ICPs at either end of a coupling link:
PATH=FE,TYPE=ICP,CPATH=FF,...
PATH=FF,TYPE=ICP,CPATH=FE,...
specifies that ICP channel path FF connects to ICP channel path FE. Every ICP channel path of a coupling facility must be connected to an ICP channel path of a z/OS LP. The connection needs to be specified for each channel path. ICP channel paths cannot connect to each other if they both have candidate lists with the same, single logical partition. Note that this prevents the definition of internal coupling channels in a LPAR configuration with only one logical partition. Also, an ICP channel path cannot connect to itself.
The CPATH value for a CIB CHPID specifies the CSS and CHPID number this CIB CHPID connects with on the target system. For example:
PATH=C0,TYPE=CIB,CPATH=(CSS(1),D0),...
Defines a CIB CHPID, C0, on this system that connects with CIB CHPID D0 in CSS 1 on the remote system.
Shared Devices Using Shared Channels
MIF allows you to use shared channels when defining shared devices. Using shared channels reduces the number of channels required, allows for increased channel utilization, and reduces the complexity of your IOCP input.
Note: You cannot mix shared and unshared channel paths to the same control unit
or device.
The following is an example of an IOCDS with a shared device.
[Figure: four LPs (A, B, C, and D) share CHPIDs 30 and 34, which connect through ports C0 and C5 of Switch 03 to port E0 of a control unit with device 190.]
Figure 2-12. Shared Devices Using Shared ESCON Channels
The following is the IOCP coding for the above figure.
CHPID PATH=(30),TYPE=CNC,SWITCH=03,SHARED . . .
CHPID PATH=(34),TYPE=CNC,SWITCH=03,SHARED . . .
CNTLUNIT CUNUMBR=000,PATH=(30,34),UNITADD=((90)),LINK=(E0,E0),UNIT=xxx . . .
IODEVICE ADDRESS=(190),CUNUMBR=000,UNIT=xxx . . .
Shared Devices Using Unshared Channels
When coding an IOCP input file, the following specifications are allowed:
v Duplicate device numbers can be specified within a single IOCP input file, if
device numbers are not duplicated within an LP.
v You can assign a maximum of eight channel paths from each LP to a device.
Device sharing among LPs is accomplished by attaching multiple channel paths from each LP to a device.
The following section illustrates IOCP coding for IOCDSs when shared devices on unshared channels and duplicate device numbers are specified.
Shared Devices: The following examples illustrate this concept by showing the
physical connectivity of an I/O configuration for multiple LPs and the IOCP coding for the same configuration.
Using Channels: Figure 2-13 on page 2-32 shows an example of an I/O
configuration with a device shared by each of the four logical partitions. In this representation of a shared device, each logical partition views device 190 as part of
its own I/O configuration. Notice the recoverability characteristics of this configuration: each logical partition has two channel paths to the shared device, each attached to a different storage director.
[Figure: logical partitions A, B, C, and D each have two channel paths (A: CHPs 10 and 14; B: CHPs 18 and 1C; C: CHPs 20 and 24; D: CHPs 28 and 2C) to shared device 190 through an 8-way switch and two storage directors (001 and 002), with one path from each LP to each storage director.]
Figure 2-13. Physical Connectivity of Shared Device 190
The following example shows the IOCP statement for Figure 2-13.
CHPID PATH=(10),PART=(A,REC)
CHPID PATH=(14),PART=(A,REC)
CHPID PATH=(18),PART=(B,REC)
CHPID PATH=(1C),PART=(B,REC)
CHPID PATH=(20),PART=(C,REC)
CHPID PATH=(24),PART=(C,REC)
CHPID PATH=(28),PART=(D,REC)
CHPID PATH=(2C),PART=(D,REC)
CNTLUNIT CUNUMBR=0001,PATH=(10,18,20,28),UNITADD=((90)) . . .
CNTLUNIT CUNUMBR=0002,PATH=(14,1C,24,2C),UNITADD=((90)) . . .
IODEVICE ADDRESS=(190),CUNUMBR=(0001,0002) . . .
If 8 or fewer channels attach to the device, this method of defining the IOCP input provides greater flexibility because it allows you to move CHPIDs from one LP to another and eliminates possible conflicts (see Figure 2-16 on page 2-35).
Figure 2-14 on page 2-33 shows an alternative method of defining the configuration. This method is required if there are more than 8 paths to the device. Note that this logical representation has the same recoverability characteristics as the physical connectivity:
v Each LP has two channel paths to the shared device v Each LP is attached to a different storage director
However, paths cannot be moved between the LPs.
[Figure: the same configuration viewed logically; logical partitions A, B, C, and D each have two channel paths (A: CHPs 10 and 14; B: CHPs 18 and 1C; C: CHPs 20 and 24; D: CHPs 28 and 2C) to their own pair of control unit definitions (0001/0002, 1001/1002, 2001/2002, 3001/3002), each defining device 190.]
Figure 2-14. Logical View of Shared Device 190
The following example shows the IOCP statement for Figure 2-14.
CHPID PATH=(10),PARTITION=(A), . . .
CHPID PATH=(14),PARTITION=(A), . . .
CNTLUNIT CUNUMBR=0001,PATH=(10),UNITADD=((90)) . . .
CNTLUNIT CUNUMBR=0002,PATH=(14),UNITADD=((90)) . . .
IODEVICE ADDRESS=(190),CUNUMBR=(0001,0002) . . .

CHPID PATH=(18),PARTITION=(B), . . .
CHPID PATH=(1C),PARTITION=(B), . . .
CNTLUNIT CUNUMBR=1001,PATH=(18),UNITADD=((90)) . . .
CNTLUNIT CUNUMBR=1002,PATH=(1C),UNITADD=((90)) . . .
IODEVICE ADDRESS=(190),CUNUMBR=(1001,1002) . . .

CHPID PATH=(20),PARTITION=(C) . . .
CHPID PATH=(24),PARTITION=(C) . . .
CNTLUNIT CUNUMBR=2001,PATH=(20),UNITADD=((90)) . . .
CNTLUNIT CUNUMBR=2002,PATH=(24),UNITADD=((90)) . . .
IODEVICE ADDRESS=(190),CUNUMBR=(2001,2002) . . .

CHPID PATH=(28),PARTITION=(D), . . .
CHPID PATH=(2C),PARTITION=(D), . . .
CNTLUNIT CUNUMBR=3001,PATH=(28),UNITADD=((90)) . . .
CNTLUNIT CUNUMBR=3002,PATH=(2C),UNITADD=((90)) . . .
IODEVICE ADDRESS=(190),CUNUMBR=(3001,3002) . . .
Duplicate Device Numbers for Different Physical Devices
Figure 2-15 on page 2-34 illustrates a configuration where duplicate device numbers are used to represent a console (110) and a printer (00E) within each of four logical partitions.
[Figure: logical partitions A, B, C, and D each have two channel paths (A: CHPs 10 and 14; B: CHPs 18 and 1C; C: CHPs 20 and 24; D: CHPs 28 and 2C); each LP defines a console as device number 110 and a printer as device number 00E.]
Figure 2-15. LPAR Configuration with Duplicate Device Numbers
The following example shows the IOCP statement for Figure 2-15. This IOCP coding example groups the input statements by logical partition. When coding IOCP, view the I/O devices from a logical partition perspective.
CHPID PATH=(10),PARTITION=(A), . . .
CHPID PATH=(14),PARTITION=(A), . . .
CNTLUNIT CUNUMBR=0011,PATH=(10),UNITADD=(10), . . .
CNTLUNIT CUNUMBR=0012,PATH=(14),UNITADD=(0E), . . .
IODEVICE ADDRESS=(110),CUNUMBR=(0011), . . .
IODEVICE ADDRESS=(00E),CUNUMBR=(0012), . . .

CHPID PATH=(18),PARTITION=(B), . . .
CHPID PATH=(1C),PARTITION=(B), . . .
CNTLUNIT CUNUMBR=0013,PATH=(18),UNITADD=(10), . . .
CNTLUNIT CUNUMBR=0014,PATH=(1C),UNITADD=(0E), . . .
IODEVICE ADDRESS=(110),CUNUMBR=(0013), . . .
IODEVICE ADDRESS=(00E),CUNUMBR=(0014), . . .

CHPID PATH=(20),PARTITION=(C), . . .
CHPID PATH=(24),PARTITION=(C), . . .
CNTLUNIT CUNUMBR=0015,PATH=(20),UNITADD=(10), . . .
CNTLUNIT CUNUMBR=0016,PATH=(24),UNITADD=(0E), . . .
IODEVICE ADDRESS=(110),CUNUMBR=(0015), . . .
IODEVICE ADDRESS=(00E),CUNUMBR=(0016), . . .

CHPID PATH=(28),PARTITION=(D), . . .
CHPID PATH=(2C),PARTITION=(D), . . .
CNTLUNIT CUNUMBR=0017,PATH=(28),UNITADD=(10), . . .
CNTLUNIT CUNUMBR=0018,PATH=(2C),UNITADD=(0E), . . .
IODEVICE ADDRESS=(110),CUNUMBR=(0017), . . .
IODEVICE ADDRESS=(00E),CUNUMBR=(0018), . . .
Eight IODEVICE statements are used, one for each console and one for each printer that has a duplicate device number. Device numbers 110 and 00E occur four times each; however, they are not duplicated within a logical partition. When coding an IOCP input file, remember that the unique device number rule applies for logical partitions in an IOCDS.
Figure 2-16 shows another example of a logical partition configuration in which the device number for a console (110) is duplicated for all four logical partitions.
[Figure: logical partitions A, B, C, and D each have two channel paths (A: CHPs 10 and 14; B: CHPs 18 and 1C; C: CHPs 20 and 24; D: CHPs 28 and 2C) to control units 0001, 0002, 0003, and 0004 respectively; each LP defines its console as device number 110.]
Figure 2-16. Duplicate Device Numbers for Console
The following example shows the IOCP coding for the previous configuration. Four IODEVICE and four CNTLUNIT statements are used, one each for the console within each logical partition that has a duplicate device number.
CHPID PATH=(10),PARTITION=(A), . . . CHPID PATH=(14),PARTITION=(A), . . . CNTLUNIT CUNUMBR=0001,PATH=(10,14),UNITADD=((10)), . . . IODEVICE ADDRESS=(110),CUNUMBR=(0001), . . .
CHPID PATH=(18),PARTITION=(B), . . . CHPID PATH=(1C),PARTITION=(B), . . . CNTLUNIT CUNUMBR=0002,PATH=(18,1C),UNITADD=((10)), . . . IODEVICE ADDRESS=(110),CUNUMBR=(0002), . . .
CHPID PATH=(20),PARTITION=(C), . . . CHPID PATH=(24),PARTITION=(C), . . . CNTLUNIT CUNUMBR=0003,PATH=(20,24),UNITADD=((10)), . . . IODEVICE ADDRESS=(110),CUNUMBR=(0003), . . .
CHPID PATH=(28),PARTITION=(D), . . . CHPID PATH=(2C),PARTITION=(D), . . . CNTLUNIT CUNUMBR=0004,PATH=(28,2C),UNITADD=((10)), . . . IODEVICE ADDRESS=(110),CUNUMBR=(0004), . . .
Duplicate Device Number Conflicts: IOCP allows duplicate device numbers in
an IOCDS only if the duplicate device numbers do not occur in the same logical partition. Therefore, IOCP allows systems to use different logical partitions to integrate a processor complex without changing device numbers.
IOCP requires a unique device number for each device within a logical partition. When IOCP completes without error, the initial configuration contains no duplicate device number conflicts within a logical partition.
Conflicts can occur when the I/O configuration is modified. If a channel path is configured to a logical partition and devices attached to the channel path have device numbers that are already assigned in the receiving logical partition to other online channel paths, a conflict results.
When an I/O configuration is dynamically modified so the logical partition can gain access to a device not previously accessible, a device conflict can occur. The conflicts are detected when commands are processed that change the I/O configuration or when you attempt to activate the logical partition which has the device number conflict. A message displays identifying the error.
The identified device cannot be accessed while a conflict exists. Two types of conflict are possible:
v Conflicts between device numbers for the same device (a shared device) v Conflicts between device numbers for different devices (unshared devices)
Activation fails if a duplicate device number conflict exists.
Examples of Duplicate Device Number Conflicts: Figure 2-17 provides two
examples of duplicate device number conflict.
[Figure: two examples. In the shared device example, ZOSPROD has CHPs 00 and 04 and ZOSTEST has CHP 10, all to the same device 180; CHP 04 is reassigned to ZOSTEST. In the unshared device example, ZOSPROD has CHPs 00 and 04 to its device 180 and ZOSTEST has CHP 10 to a different device 180; CHP 04 is reassigned to ZOSTEST.]
Figure 2-17. Two Examples of Duplicate Device Number Conflicts
The following example shows the IOCP statement for Figure 2-17. Both examples use identical IOCP statements.
CHPID PATH=(00),PARTITION=(ZOSPROD,REC)
CHPID PATH=(04),PARTITION=(ZOSPROD,REC)
CNTLUNIT CUNUMBR=0001,PATH=(00,04),UNITADD=80
IODEVICE ADDRESS=180,CUNUMBR=0001

CHPID PATH=(10),PARTITION=(ZOSTEST)
CNTLUNIT CUNUMBR=0002,PATH=(10),UNITADD=80
IODEVICE ADDRESS=180,CUNUMBR=0002
Channel path 04 is reassigned from ZOSPROD to ZOSTEST in each example. This creates a duplicate device number conflict for device number 180 when the devices are connected to two different control units. This occurs because a device numbered 180 already exists on the original channel path 10. If such conflicts occur, the operator must know what configuration is desired.
Shared Device
In the example on the left, the duplicate device numbers refer to the same device from different logical partitions (a new path to the same device has been moved to ZOSTEST). This may result in a performance problem because the control program in logical partition ZOSTEST cannot access the device from channel path 04.
Unshared Device
In the example on the right, the duplicate device numbers refer to a different device from each logical partition (a new device has been moved to ZOSTEST). This may result in a data integrity problem because the control program in logical partition ZOSTEST cannot access the correct device from channel path 04.
Resolving Duplicate Device Number Conflicts: Consider options A, B, and C when planning the I/O configuration and the reconfigurability of channel paths. You can resolve duplicate device number conflicts by choosing one of the options:
A Use the original channel path:
If the receiving logical partition does not need a new path to a shared device or does not need the new (unshared) device, take no action. The conflict is resolved by using only the original path (shared device) or the original device. (Access is still allowed to any nonconflicting devices on the newly configured channel path.)
In Figure 2-17 on page 2-36 ZOSTEST can access device 180 only through channel path 10 if the operator takes no action in response to the conflict message.
B Deconfigure the original channel path:
If the logical partition must have the reassigned channel path to a shared device or access to a new (unshared) device, the conflict is resolved by substituting the reassigned channel path for the original channel path. Do the following:
1. Configure offline the original channel path (CHP 10 in Figure 2-17).
2. Configure offline and then online the reassigned channel path (CHP 04 in Figure 2-17).
3. If necessary, configure online the original channel path (CHP 10 in Figure 2-17). Another conflict message is issued because a new conflict has been created. The operator then ignores this conflict as described in option A. (Access is still allowed to any nonconflicting devices on the original channel path.)
In Figure 2-17, ZOSTEST can access device 180 only through channel path 04 if the preceding steps are performed in response to the conflict message.
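If these steps are driven from the z/OS operator console in the affected logical partition, commands along the following lines could be used. This is an illustrative sketch only; the z/OS CONFIG (CF) commands shown here are an assumption about how the channel paths are taken offline and online, and the same steps can instead be performed from the hardware console:

CF CHP(10),OFFLINE
CF CHP(04),OFFLINE
CF CHP(04),ONLINE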
C Change the I/O Configuration:
Only option C provides a permanent resolution to a device number conflict. If the logical partition must have access to all devices over the original
channel path and the reassigned channel path (shared devices), or to a new device and the original device (unshared devices), do one of the following:
v Create a new configuration with unique device numbers, if they are unshared devices.
v For shared devices, define a single device with access to all of the channel paths attached to the physical control units.
v For a shared device assigned to unshared channel paths, change the channel paths to shared and consolidate the control units and device definitions to one each.
v If the device is assigned to shared channel paths, control access to the devices using their device candidate list.
The configuration can be activated by performing a POR or by performing a dynamic I/O configuration.
In Figure 2-17 (shared device), ZOSTEST can access device 180 through CHP 04 and CHP 10 if CHP 04 is defined to ZOSTEST in the IOCDS.
In Figure 2-17 (unshared device), ZOSTEST can access either of the devices numbered 180 if one or both of the devices are assigned a new device number in the IOCDS.
When a device number conflict exists, logical partitions will fail to activate. This will happen when one of the following occurs:
v The receiving logical partition was deactivated when a channel path is reassigned
v The receiving logical partition is deactivated after a channel path is reassigned
Failure to activate can result if options A or B are used. If a logical partition fails to activate, use option B or C to resolve the conflict and to activate the logical partition.
In Figure 2-17, if ZOSTEST is not active when CHP 04 is reassigned, or ZOSTEST is deactivated and then activated after CHP 04 is reassigned, ZOSTEST does not activate until the conflict over device 180 is resolved.
If you resolve the conflict by using option B, do the following steps:
1. Establish the correct configuration by configuring offline one of the channel paths (CHP 04 or CHP 10)
2. Configure offline and then online the other channel path
If it’s necessary to have access to other devices on the first channel path, the operator can configure online the first channel path while the LP is active. Ignore the messages issued at the hardware console.
The following IOCP statement example shows coding that removes duplicate device number conflicts for shared devices.
CHPID PATH=(00),PARTITION=(ZOSPROD,REC), . . .
CHPID PATH=(04),PARTITION=(ZOSPROD,REC), . . .
CHPID PATH=(10),PARTITION=(ZOSTEST), . . .
CNTLUNIT CUNUMBR=0001,PATH=(00,04),UNITADD=80
CNTLUNIT CUNUMBR=0002,PATH=(10),UNITADD=80
IODEVICE ADDRESS=180,CUNUMBR=(0001,0002)
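For the unshared devices in Figure 2-17, a corresponding option C change is to give one of the devices a unique device number. The following statements are an illustrative sketch only; the new device number 190 and the UNITADD coding on the second IODEVICE statement are assumptions for this example, not values taken from the figure:

CHPID PATH=(00),PARTITION=(ZOSPROD,REC), . . .
CHPID PATH=(04),PARTITION=(ZOSPROD,REC), . . .
CHPID PATH=(10),PARTITION=(ZOSTEST), . . .
CNTLUNIT CUNUMBR=0001,PATH=(00,04),UNITADD=80
CNTLUNIT CUNUMBR=0002,PATH=(10),UNITADD=80
IODEVICE ADDRESS=180,CUNUMBR=0001
IODEVICE ADDRESS=190,UNITADD=80,CUNUMBR=0002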
Coupling Facility Planning Considerations
The coupling facility provides shared storage and shared storage management functions for the sysplex (for example, high speed caching, list processing, and locking functions). Applications running on z/OS images in the sysplex define the shared structures used in the coupling facility.
The coupling facility allows applications, running on multiple z/OS images that are configured in a sysplex, to efficiently share data so that a transaction processing workload can be processed in parallel across the sysplex.
PR/SM LPAR allows you to define the coupling facility, which is a special logical partition (LP) that runs coupling facility control code. Coupling facility control code is Licensed Internal Code (LIC).
At LP activation, coupling facility control code automatically loads into the coupling facility LP from the support element hard disk. No initial program load (IPL) of an operating system is necessary or supported in the coupling facility LP.
Coupling facility control code runs in the coupling facility LP with minimal operator intervention. Operator activity is confined to the Operating System Messages task. PR/SM LPAR limits the hardware operator controls usually available for LPs to avoid unnecessary operator activity.
Coupling facility channel hardware provides the connectivity required for data sharing between the coupling facility and the CPCs directly attached to it. Coupling facility channels are point-to-point connections that require a unique channel definition at each end of the channel. See “Coupling Facility Channels” on page 2-54.
Test or Migration Coupling Configuration
You can run a test or migration coupling facility to test and develop data sharing applications. You can define a test or migration coupling facility LP on the same CPC where other LPs are:
v Running z/OS images connected to the coupling facility
v Running noncoupled production work
A single CPC configuration has the following considerations:
v Simultaneous loss of the coupling facility and any z/OS images coupled to it (a more likely possibility in a single CPC configuration) can potentially cause extended recovery times
You can define a test or migration coupling facility with or without coupling facility channel hardware. See “Defining Internal Coupling Channels (TYPE=ICP)” on page 2-58 for information on how to define a test or migration facility without coupling facility channel hardware.
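As a rough sketch only, a test coupling facility defined without coupling facility channel hardware could be connected to a z/OS image on the same CPC through a pair of internal coupling channels. The CHPID numbers, partition names, and operand coding below are illustrative assumptions; see the referenced section and the IOCP publications for the actual coding rules:

CHPID PATH=(F0),PARTITION=(ZOS1),CPATH=(F1),TYPE=ICP
CHPID PATH=(F1),PARTITION=(CF1),CPATH=(F0),TYPE=ICP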
CFCC Enhanced Patch Apply
With the CFCC Enhanced Patch Apply, you can perform a disruptive install of new CFCC code on a test CF and run it, while a production CF image in the same CEC remains at the base CFCC code level. Then when the test CF is successful, the new CFCC code can be installed on the production CF. Both installs can be done without a Power On Reset (POR) of the CEC.
Production Coupling Facility Configuration
IBM recommends that you run your production applications on a sysplex that uses a production coupling facility configuration.
A properly configured production coupling facility configuration can reduce the potential for extended recovery times, achieve acceptable performance, and maximize connectivity to the coupling facility.
For production configurations, the use of one or more dedicated Coupling Facility engines is recommended; shared Coupling Facility engines are strongly discouraged. For more information, see the Important Note on page 3-31.
Production Coupling Facility Configuration for Full Data Sharing
The preferred solution for a full data sharing (IMS™, DB2®, VSAM/RLS) production parallel sysplex is a coupling facility configuration that consists of:
v One stand-alone coupling facility running as a single dedicated coupling facility LP to provide large capacity shared storage and maximum coupling facility channel connectivity (up to 64 coupling facility channels).
v A second stand-alone coupling facility, similarly configured, to reduce the possibility of a single point of failure. A second stand-alone coupling facility improves application subsystem availability by allowing fast recovery from one coupling facility to the other in the event of a coupling facility outage. Alternatively, an Internal Coupling Facility (ICF) feature can be used to provide the backup coupling facility. See “Internal Coupling Facility (ICF)” on page 2-41.
Notes:
1. The backup CF in the configuration must provide sufficient storage, processor, and connectivity resources to assume the workload of the other production CF in the event of its failure.
2. With the use of System-Managed CF Structure Duplexing for all relevant data sharing structures, it is possible to have a production data-sharing configuration that uses only 2 or more internal CFs, because duplexing avoids the “single point of failure” failure-isolation issue.
Production Coupling Facility Configuration for Resource Sharing
A viable solution for a resource sharing (XCF Signaling, Logger Operlog, RACF®, BatchPipes®, Logger Logrec, Shared Tape, GRS, WLM Enclave Support, LPAR Clusters) production level parallel sysplex is a coupling facility configuration that consists of:
v One dedicated ICF provides reduced cost of ownership without compromising sysplex availability or integrity.
v A second dedicated ICF reduces the possibility of a single point of failure. A second ICF improves application subsystem availability by allowing fast recovery from one coupling facility to the other in the event of a coupling facility outage.
Note: The backup CF in the configuration must provide sufficient storage, processor, and connectivity resources to assume the workload of the other production CF in the event of its failure.
These configurations offer the best performance, the best reliability, availability, and serviceability (RAS).
Internal Coupling Facility (ICF)
You can purchase and install one or more ICF features for use in coupling facility LPs. With this feature, the coupling facility runs on special ICF CPs that no customer software can utilize. This allows the coupling facility function to be performed on the CPC without affecting the model group and thus without impacting software licensing costs for the CP resources utilized by the coupling facility. See “Considerations for Coupling Facilities using Internal Coupling Facility (ICF) CPs” on page 3-31.
These features are ordered separately and are distinguished at the hardware level from any Integrated Facility for Linux (IFL) features, Integrated Facility for Applications (IFA) features, and zIIPs. ICFs, IFLs, IFAs, and zIIPs are perceived by the system as multiple resource pools.
With the CFCC Enhanced Patch Apply process, you can perform a disruptive install of new CFCC code on an ICF image by simply deactivating and then reactivating the CF image, without the much greater disruption of a Power On Reset (POR) of the entire CEC that contains the CF image. This greatly improves the availability characteristics of using ICFs.
ICFs are ideal for coupling resource sharing sysplexes (sysplexes that are not in production data sharing with IMS, DB2 or VSAM/RLS). You can simplify systems management by using XCF structures instead of ESCON CTC connections.
IBM does not recommend use of ICFs for most coupling facility structures involved in data sharing because of the possibility of “double outages” involving the simultaneous loss of an ICF image and one or more z/OS system images that are using the ICF for data sharing. Depending on the structure, a double outage can result in a significantly more involved recovery than a single outage of either an ICF image or a z/OS image in isolation from one another.
With the use of System-Managed CF Structure Duplexing for all relevant data sharing structures, it is possible to have a production data-sharing configuration that uses only 2 or more internal CFs, because duplexing avoids the “single point of failure” failure-isolation issue.
Unlike stand-alone coupling facilities, ICFs need configuration planning on the CPC where they are installed to account for storage and channels. ICFs will likely increase storage requirements for the CPC with an ICF installed, especially if software exploits the coupling facility to provide additional function not available except when running a coupling facility in a parallel sysplex.
The following table indicates the maximum number of ICFs that you can install on a given model.
CPC Model                    Maximum Number of ICFs Supported
System z9 BC Model R07       6
System z9 BC Model S07       7
System z9 EC Model S08       8
System z9 EC Model S18       16
System z9 EC Model S28       16
System z9 EC Model S38       16
System z9 EC Model S54       16
Dynamic Coupling Facility Dispatching (DCFD)
With DCFD, the coupling facility uses CP resources in a shared CP environment efficiently, making a coupling facility using shared CPs an attractive option as a back-up coupling facility. Without DCFD, a coupling facility using shared CPs would attempt to get all the CP resource it could, even when there was no real work for it to do. With dynamic coupling facility dispatching, the coupling facility monitors the request rate that is driving it and adjusts its usage of CP resource accordingly. If the request rate becomes high enough, the coupling facility reverts to its original dispatching algorithm, constantly looking for new work. When the request rate lowers, the coupling facility again becomes more judicious in its use of CP resource.
This behavior plays well into a hot-standby back-up coupling facility role. In back-up mode, the coupling facility will have a very low request rate so it will throttle back to very low CP usage. In this mode, the requests themselves will experience some elongation in response time but this will not adversely affect the performance of the overall system. Since the coupling facility is not consuming more CP resource than it needs to, you can now set the processor weights for the coupling facility to a value high enough to handle the load if the coupling facility was to take over for a failing primary coupling facility. If the primary coupling facility does fail, the requests can be moved immediately to the back-up coupling facility which can then get the CP resource it needs automatically with properly defined LP weights.
Dynamic coupling facility dispatching is particularly useful in configurations where less than one CP of capacity is needed for back-up coupling facility use. Dynamic coupling facility dispatching is automatically enabled for any coupling facility LP that uses shared CPs, except for stand-alone coupling facilities. To enable dynamic coupling facility dispatching on stand-alone coupling facilities, use the DYNDISP coupling facility control code command. See “Coupling Facility Control Code Commands” on page 2-50. It is recommended that any time a shared processor is used for a coupling facility, whether standalone or ICF processors, DCFD be enabled by ensuring that the DYNDISP coupling facility control code command is set to ON.
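As a minimal illustration of the command named above (the full command description is in “Coupling Facility Control Code Commands” on page 2-50), dynamic CF dispatching would be enabled by entering the following through the Operating System Messages task for the coupling facility LP:

DYNDISP ON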
Both the ICF and dynamic coupling facility dispatching enhancements represent real production configuration options at significantly reduced cost. You will need to examine the RAS trade-offs for each potential configuration to determine their acceptability for your environment.
Dynamic Internal Coupling Facility (ICF) Expansion
Important Note
Dynamic ICF Expansion is not recommended for coupling facilities that are used for primary production workload. All processors assigned to the coupling facility partition should be dedicated to that logical partition if it is used for primary production workload.
Dynamic internal coupling facility (ICF) expansion allows an ICF LP to acquire
additional processing power from the LPAR pool of shared general purpose CPs being used to run production and/or test work on the system.
This is very useful when you are using an ICF LP to back up another coupling facility. In this scenario, the ICF, using dynamic ICF expansion, can acquire additional processing capacity to handle the full coupling facility workload.
Dynamic ICF expansion combines the benefits of ICFs and dynamic CF dispatching. Dynamic ICF expansion allows you to define a coupling facility LP that uses a combination of dedicated ICF CPs and shared general purpose CPs. The dedicated ICF CPs run a continuous polling loop looking for requests to process while the shared general purpose CPs use a dynamic coupling facility dispatch algorithm. This provides the capability to allow peak workload expansion into the shared pool of general purpose CPs.
For each ICF coupling facility LP, you define the number of ICF features (maximum varies by model and what has been installed) that are dedicated to that LP and the amount of additional capacity you want to be made available by specifying a number of shared general purpose CPs the ICF LP can acquire. You specify a processing weight for the shared CPs as you would for any other LP using shared CPs. Because the coupling facility does not use more CP resource than it requires on the shared CPs, you can set the processing weight to a high value without affecting other work during normal operation.
IBM recommends that the processing weight you specify for the shared logical CPs of a coupling facility using dynamic ICF expansion be set higher than the weight for all other logical CPs that are sharing the same physical CPs. On occasions when the shared CP is needed by the coupling facility, you can achieve the best response time when this CP is dispatched as quickly as possible. To help ensure that this occurs, you should assign the shared coupling facility CP a weight which is much higher than that of the other LPs’ shared CPs.
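For example (the weight values here are purely illustrative and not taken from this guide), if two z/OS LPs sharing the same physical CPs have processing weights of 50 and 100, the coupling facility LP’s shared CP might be given a weight of 800. During normal operation the coupling facility consumes little of the shared CP, so the high weight does not affect the other LPs; when the coupling facility does need the shared CP, the high weight helps it be dispatched ahead of the other LPs’ shared CPs.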
Note that when you define and activate a dynamic ICF configuration for a coupling facility LP, dynamic CF dispatching will be forced on any shared CPs in that coupling facility. You cannot use the DYNDISP command to turn it off.
Enhanced Dynamic ICF Expansion Across ICFs
Important Note
Enhanced Dynamic ICF Expansion across ICFs is not recommended for coupling facilities that are used for primary production workload. All processors assigned to the coupling facility partition should be dedicated to that logical partition if it is used for primary production workload.
An alternative dynamic ICF expansion configuration is supported where a CF logical partition can be defined with a mixture of both dedicated and shared ICF processors. This has all the same operating characteristics as the original dynamic ICF expansion configuration except that the shared processors are in the ICF pool rather than the general purpose processor pool. These shared ICF processors may be used by other CF logical partitions using only shared ICF processors and/or other CF logical partitions using a mixture of dedicated and shared ICF processors.
The following table lists the dynamic CF dispatching behavior for all configurations involving coupling facility control code. It covers two areas: forcing of dynamic CF dispatching and defaulting of dynamic CF dispatching. It is recommended that any time a shared processor is used for a coupling facility, whether standalone or ICF processors, DCFD be enabled by ensuring that the DYNDISP coupling facility control code command is set to ON. See “Coupling Facility Control Code Commands” on page 2-50.
Coupling Facility Configuration (CF Model) — Dynamic Dispatching Default Value — Dynamic Dispatching Forced?
v All dedicated CPs or ICF(s): default Off; not forced
v Dedicated ICF(s) and shared CP(s): default On; forced
v Dedicated ICF(s) and shared ICF(s): default On; forced
v Shared ICF (System z9 with only ICF processors): default Off; not forced
v Shared CP (System z9 with 1 or more general purpose CPs): default On; not forced
v Shared ICF (System z9 with 1 or more general purpose CPs): default Off; not forced
System-Managed Coupling Facility Structure Duplexing
A set of parallel sysplex architectural extensions is provided for support of
system-managed duplexing of coupling facility structures for high availability. All
three structure types, cache, list, and locking, can be duplexed using this architecture.
Benefits of system-managed CF structure duplexing include:
v Availability: Faster recovery of structures by having the data already in the second CF.
v Manageability and Usability: A consistent procedure to set up and manage structure recovery across multiple exploiters.
v Cost Benefits: Enables the use of non-standalone CFs (for example, ICFs) for all resource sharing and data sharing environments.
Preparations for CF duplexing include the requirement to connect coupling facilities to one another with coupling links. The required CF-to-CF connectivity is bi-directional, so that signals may be exchanged between the CFs in both directions. A single peer-mode coupling link between each pair of CFs can provide the required CF-to-CF connectivity; however, for high availability at least two peer-mode links between each pair of CFs are recommended.
Note that while peer-mode CHPIDs cannot be shared between multiple coupling facility images, they can be shared between a single coupling facility image and one or more z/OS images. With this sharing capability, a single peer-mode ISC-3 or ICB link can serve both as a z/OS-to-CF link and as a CF-to-CF link, and thus provide the connectivity between z/OS and CF partitions as well as the connectivity between CFs that is required for CF Duplexing. Of course, at least two such links are recommended for high availability. In addition, IFB links provide the ability to share the same physical link between multiple CF images. By defining multiple CHPIDs on the same physical IFB link, the individual CHPIDs can be defined for a single CF image while the physical link is being shared by multiple CF images.
Single CPC Software Availability Sysplex
For single CPC configurations, System z9 models can utilize an ICF to form a single CPC sysplex, providing significant improvement in software continuous operations characteristics when running two z/OS LPs in data-sharing mode versus one large z/OS image. For these configurations, overall RAS is improved over that provided by a single z/OS image solution. Hardware failures can take down the entire single CPC sysplex, but those failures are far less frequent than conditions
taking down a software image, and planned software outages are the predominant form of software image outages in any case. Forming a single CPC sysplex allows software updates to occur in a “rolling” IPL fashion, maintaining system availability throughout. An LPAR cluster is one example of a single CPC sysplex which has significantly improved system availability over a single LP. For additional benefits provided by an LPAR cluster using IRD technology, see redbook z/OS Intelligent Resource Director, SG24-5952.
Coupling Facility Nonvolatility
Continuous availability of the transaction processing workload in a coupling facility configuration requires continuous availability of the shared structures in the coupling facility. To help ensure this, you can provide an optional backup power supply to make coupling facility storage contents nonvolatile across utility power failures.
Nonvolatility Choices
The following table indicates the optional nonvolatility choices available and their capabilities:
Table 2-5. Nonvolatility Choices for Coupling Facility LPs
Nonvolatility Choices                                    z9 BC   z9 EC
Uninterruptible power supply (UPS) (See Note 1)          Yes     Yes
Internal Battery Feature (IBF)                           Yes     Yes
Local Uninterruptible Power Supply (LUPS) (See Note 2)   Yes     Yes
Notes:
1. Optional uninterruptible power supply (UPS) provides a secondary power source for use during extended utility power outages allowing continuous coupling facility operation.
2. The optional Local Uninterruptible Power Supply supports 0 to 18 minutes of full power operation.
Setting the Conditions for Monitoring Coupling Facility Nonvolatility Status
In addition to installing an optional backup power supply to help ensure continuous availability, you must also set the conditions by which the coupling facility determines its volatility status. Software subsystems with structures defined in the coupling facility can monitor this status. Use the coupling facility control code MODE command as follows:
v MODE NONVOLATILE sets coupling facility volatility status to nonvolatile and should be used if a floor UPS is available to the CPC. Coupling facility control code does not monitor the installation or availability of UPS but maintains a nonvolatile status for the coupling facility.
v MODE VOLATILE sets coupling facility volatility status to volatile and should be used if no backup power supply is installed and available. Coupling facility control code maintains volatile status for the coupling facility even if a backup power supply is installed and available.
The coupling facility MODE setting is saved across power-on reset and activation of the coupling facility. You can use online help from the Operator Messages panel to get additional information on coupling facility control code commands.
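For example, on a CPC backed by a floor UPS, the nonvolatile status described above would be set by entering the following command at the Operator Messages panel for the coupling facility LP (a minimal illustration of the MODE command described above):

MODE NONVOLATILE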
Coupling Facility Mode Setting
The following table summarizes the relationship between the coupling facility MODE setting, and the resulting conditions you can expect if utility power fails at your site.
Table 2-6. Coupling Facility Mode Setting

CF MODE Setting   Local UPS or IBF Installed   Results on Utility Power Failure
VOLATILE          Yes                          Ride out utility power failure on UPS/IBF.
                                               Note: Reflects the real time status of the power
                                               (volatile or nonvolatile).
VOLATILE          No                           Machine down unless alternate floor level
                                               UPS/IBF provided.
NONVOLATILE       Yes                          Ride out utility power failure on UPS/IBF.
                                               Note: This is the recommended setting when
                                               providing floor-wide UPS/IBF backup.
NONVOLATILE       No                           Machine down unless alternate floor level
                                               UPS/IBF provided.
Coupling Facility LP Definition Considerations
You can define coupling facility mode for an LP at the Hardware Management Console or Support Element console using the Activation Profiles task available from the CPC Operational Customization Tasks Area.
You can define coupling facility LPs with shared or dedicated CPs on all System z9 models. Coupling facility LPs must be defined with at least 128 MB of central storage. See Table 3-2 on page 3-8.
Coupling facility LPs do not support some LP definition controls typically available to other LPs. For coupling facility LPs, you cannot define:
v Reserved central storage (coupling facility LPs do not support dynamic storage reconfiguration)
v Expanded storage initial or reserved amounts or expanded storage origin
v Crypto Express2 Coprocessors or Crypto Express2 Accelerators
v Automatic load
v Automatic load address
v Automatic load parameters
Internal Coupling Facility (ICF)
You can install one or more internal coupling facility (ICF) features. See “Considerations for Coupling Facilities using Internal Coupling Facility (ICF) CPs” on page 3-31.