Hitachi Universal Storage Platform V
Hitachi Universal Storage Platform VM
User and Reference Guide

FASTFIND LINKS: Document Organization, Product Version, Getting Help, Contents

MK-96RD635-04
Copyright © 2008 Hitachi Data Systems Corporation, ALL RIGHTS RESERVED
Notice: No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Hitachi Data Systems Corporation (hereinafter referred to as “Hitachi Data Systems”).
Hitachi Data Systems reserves the right to make changes to this document at any time without notice and assumes no responsibility for its use. Hitachi Data Systems products and services can only be ordered under the terms and conditions of Hitachi Data Systems’ applicable agreements. All of the features described in this document may not be currently available. Refer to the most recent product announcement or contact your local Hitachi Data Systems sales office for information on feature and product availability.
This document contains the most current information available at the time of publication. When new and/or revised information becomes available, this entire document will be updated and distributed to all registered users.
Hitachi, the Hitachi logo, and Hitachi Data Systems are registered trademarks and service marks of Hitachi, Ltd. The Hitachi Data Systems logo is a trademark of Hitachi, Ltd.
Dynamic Provisioning, Hi-Track, ShadowImage, TrueCopy, and Universal Star Network are registered trademarks or trademarks of Hitachi Data Systems.
All other brand or product names are or may be trademarks or service marks of and are used to identify products or services of their respective owners.

Contents

Preface..................................................................................................vii
Safety and Environmental Notices....................................................................... viii
Intended Audience..............................................................................................ix
Product Version...................................................................................................ix
Document Revision Level.....................................................................................ix
Source Document(s) for this Revision ...................................................................ix
Changes in this Revision.......................................................................................x
Document Organization ........................................................................................x
Referenced Documents........................................................................................xi
Document Conventions.......................................................................................xii
Convention for Storage Capacity Values............................................................... xii
Getting Help ..................................................................................................... xiii
Comments........................................................................................................ xiii
Product Overview................................................................................. 1-1
Universal Storage Platform V Family ...................................................................1-2
New and Improved Capabilities..........................................................................1-3
Specifications at a Glance ..................................................................................1-4
Specifications for the Universal Storage Platform V........................................1-4
Specifications for the Universal Storage Platform VM .....................................1-6
Software Products.............................................................................................1-8
Architecture and Components................................................................ 2-1
Hardware Architecture.......................................................................................2-2
Multiple Data and Control Paths...................................................................2-3
Storage Clusters.........................................................................................2-4
Hardware Components......................................................................................2-5
Shared Memory..........................................................................................2-6
Cache Memory ...........................................................................................2-6
Front-End Directors and Host Channels ........................................................2-7
Back-End Directors and Array Domains........................................................ 2-9
Hard Disk Drives...................................................................................... 2-11
Service Processor..................................................................................... 2-11
Power Supplies........................................................................................ 2-12
Batteries ................................................................................................. 2-12
Control Panel and Emergency Power-Off Switch................................................ 2-13
Control Panel........................................................................................... 2-13
Emergency Power-Off Switch.................................................................... 2-15
Intermix Configurations................................................................................... 2-16
RAID-Level Intermix................................................................................. 2-16
Hard Disk Drive Intermix.......................................................................... 2-17
Device Emulation Intermix........................................................................ 2-17
Functional and Operational Characteristics ............................................. 3-1
RAID Implementation....................................................................................... 3-2
Array Groups and RAID Levels....................................................................3-2
Sequential Data Striping............................................................................. 3-4
LDEV Striping Across Array Groups.............................................................. 3-5
CU Images, LVIs, and LUs.................................................................................3-7
CU Images................................................................................................3-7
Logical Volume Images...............................................................................3-7
Logical Units.............................................................................................. 3-8
Storage Navigator.............................................................................................3-9
System Option Modes, Host Modes, and Host Mode Options.............................. 3-10
System Option Modes............................................................................... 3-10
Host Modes and Host Mode Options.......................................................... 3-21
Mainframe Operations..................................................................................... 3-22
Mainframe Compatibility and Functionality ................................................. 3-22
Mainframe Operating System Support........................................................ 3-22
Mainframe Configuration .......................................................................... 3-23
Open-Systems Operations............................................................................... 3-24
Open-Systems Compatibility and Functionality............................................ 3-24
Open-Systems Host Platform Support........................................................ 3-25
Open-Systems Configuration..................................................................... 3-26
Battery Backup Operations.............................................................................. 3-27
Troubleshooting................................................................................... 4-1
General Troubleshooting................................................................................... 4-2
Service Information Messages ........................................................................... 4-3
Calling the Hitachi Data Systems Support Center................................................. 4-4
Units and Unit Conversions....................................................................A-1
Acronyms and Abbreviations
Index

Preface

This document describes the physical, functional, and operational characteristics of the Hitachi Universal Storage Platform V (USP V) and Hitachi Universal Storage Platform VM (USP VM) storage systems and provides general instructions for operating the USP V and USP VM.
Please read this document carefully to understand how to use this product, and maintain a copy for reference purposes.
This preface includes the following information:
Safety and Environmental Notices
Intended Audience
Product Version
Document Revision Level
Source Document(s) for this Revision
Changes in this Revision
Document Organization
Referenced Documents
Document Conventions
Convention for Storage Capacity Values
Getting Help
Comments
Notice: The use of the Hitachi Universal Storage Platform V and VM storage systems and all other Hitachi Data Systems products is governed by the terms of your agreement(s) with Hitachi Data Systems.

Safety and Environmental Notices

Federal Communications Commission (FCC) Statement
This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference in which case the user will be required to correct the interference at his own expense.
“An easily accessible disconnect device with a contact opening of at least 3 mm must be provided in the immediate vicinity of the equipment (4-pole disconnection).”
Machine Noise Information Ordinance (Maschinenlärminformationsverordnung) 3. GSGV, 18.01.1991: The highest sound pressure level is 70 dB(A) or less, in accordance with ISO 7779.
CLASS 1 LASER PRODUCT (LASER KLASSE 1)
WARNING: This is a Class A product. In a domestic environment this product may cause radio interference in which case the user may be required to take adequate measures.

Intended Audience

This document is intended for system administrators, Hitachi Data Systems representatives, and authorized service providers who are involved in installing, configuring, and operating the Hitachi Universal Storage Platform V and/or Hitachi Universal Storage Platform VM storage systems.
This document assumes the following:
The user has a background in data processing and understands RAID storage systems and their basic functions.
The user is familiar with the host systems supported by the Hitachi Universal Storage Platform V/VM.
The user is familiar with the equipment used to connect RAID storage systems to the supported host systems.

Product Version

This document revision applies to USP V/VM microcode 60-02-4x and higher.

Document Revision Level

Revision        Date            Description
MK-96RD635-P    February 2007   Preliminary Release
MK-96RD635-00   May 2007        Initial Release, supersedes and replaces MK-96RD635-P
MK-96RD635-01   June 2007       Revision 1, supersedes and replaces MK-96RD635-00
MK-96RD635-02   September 2007  Revision 2, supersedes and replaces MK-96RD635-01
MK-96RD635-03   November 2007   Revision 3, supersedes and replaces MK-96RD635-02
MK-96RD635-04   April 2008      Revision 4, supersedes and replaces MK-96RD635-03

Source Document(s) for this Revision

Exhibit M1, DKC610I Disk Subsystem, Hardware Specifications, revision 13
Exhibit M1, DKC615I Disk Subsystem, Hardware Specifications, revision 5
Public Mode for RAID600, R600_Public_Mode_2008_0314.xls

Changes in this Revision

Added the 400-GB disk drive (Specifications at a Glance, Hard Disk Drives).
Updated the maximum usable capacity values (Specifications at a Glance).
Added a table of specifications for the disk drives (new Table 2-3).
Updated the list of public system option modes (Table 3-1).
Added the following new modes: 545, 685, 689, 690, 697, 701, 704. Modified the description of mode 467 as follows:
Changed the default from OFF to ON.
Added Universal Volume Manager to the list of affected functions.
Added a caution about setting mode 467 ON when using external volumes as secondary copy volumes.
Added a note about copy processing time and the prioritization of host I/O performance.
Removed mode 198.

Document Organization

The following table provides an overview of the contents and organization of this document. Click a chapter title to go to that chapter. The first page of each chapter provides links to the sections in that chapter.

Product Overview: Provides an overview of the Universal Storage Platform V/VM, including features, benefits, general function, and connectivity descriptions.
Architecture and Components: Describes the Universal Storage Platform V/VM architecture and components.
Functional and Operational Characteristics: Discusses the functional and operational capabilities of the Universal Storage Platform V/VM.
Troubleshooting: Provides troubleshooting guidelines and customer support contact information for the Universal Storage Platform V/VM.
Units and Unit Conversions: Provides conversions for standard (U.S.) and metric units of measure associated with the Universal Storage Platform V/VM.
Acronyms and Abbreviations: Defines the acronyms and abbreviations used in this document.
Index: Lists the topics in this document in alphabetical order.

Referenced Documents

Hitachi Universal Storage Platform V/VM documentation:
Table 1-3 lists the user documents for Storage Navigator-based software.
Table 1-4 lists the user documents for host- and server-based software.
Table 3-5 lists the configuration guides for host attachment.
Other referenced USP V/VM documents:
USP V Installation Planning Guide, MK-97RD6668
USP VM Installation Planning Guide, MK-97RD6679
IBM® documentation:
Planning for IBM Remote Copy, SG24-2595
DFSMSdfp Storage Administrator Reference, SC28-4920
DFSMS MVS V1 Remote Copy Guide and Reference, SC35-0169
OS/390 Advanced Copy Services, SC35-0395
Storage Subsystem Library, 3990 Transaction Processing Facility Support RPQs, GA32-0134
3990 Operations and Recovery Guide, GA32-0253
Storage Subsystem Library, 3990 Storage Control Reference for Model 6, GA32-0274

Document Conventions

The terms “Universal Storage Platform V” and “Universal Storage Platform VM” refer to all models of the Hitachi Universal Storage Platform V and VM storage systems, unless otherwise noted.
This document uses the following icons to draw attention to information:
Note: Calls attention to important and/or additional information.
Tip: Provides helpful information, guidelines, or suggestions for performing tasks more effectively.
Caution: Warns the user of adverse conditions and/or consequences (e.g., disruptive operations).
WARNING: Warns the user of severe conditions and/or consequences (e.g., destructive operations).
DANGER: Provides information about how to avoid physical injury to yourself and others.
ELECTRIC SHOCK HAZARD!: Warns the user of electric shock hazard. Failure to take appropriate precautions (e.g., do not touch) could result in serious injury.
ESD Sensitive: Warns the user that the hardware is sensitive to electrostatic discharge (ESD). Failure to take appropriate precautions (e.g., grounded wrist strap) could result in damage to the hardware.

Convention for Storage Capacity Values

Physical storage capacity values (e.g., disk drive capacity) are calculated based on the following values:
1 KB = 1,000 bytes
1 MB = 1,000² bytes
1 GB = 1,000³ bytes
1 TB = 1,000⁴ bytes
1 PB = 1,000⁵ bytes

Logical storage capacity values (e.g., logical device capacity) are calculated based on the following values:
1 KB = 1,024 bytes
1 MB = 1,024² bytes
1 GB = 1,024³ bytes
1 TB = 1,024⁴ bytes
1 PB = 1,024⁵ bytes
1 block = 512 bytes
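The practical difference between the two conventions is easy to quantify. The short Python sketch below is illustrative only; the helper names are not part of any Hitachi tool, and the calculation simply applies the two sets of values listed above.

# Illustrative only: applies the capacity conventions listed above.
# These helper names are not part of any Hitachi tool.
DECIMAL = {"KB": 1000, "MB": 1000**2, "GB": 1000**3, "TB": 1000**4, "PB": 1000**5}
BINARY = {"KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4, "PB": 1024**5}
BLOCK_BYTES = 512  # logical block size

def physical_bytes(value, unit):
    """Physical capacity (e.g., disk drive capacity) uses powers of 1,000."""
    return value * DECIMAL[unit]

def logical_bytes(value, unit):
    """Logical capacity (e.g., logical device capacity) uses powers of 1,024."""
    return value * BINARY[unit]

# A "300 GB" drive holds 300 x 10^9 bytes, which is about 279.4 GB
# when re-expressed in the 1,024-based units used for logical capacity.
print(physical_bytes(300, "GB") / BINARY["GB"])   # ~279.4
print(logical_bytes(1, "GB") // BLOCK_BYTES)      # 2,097,152 blocks of 512 bytes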

Getting Help

If you need to call the Hitachi Data Systems Support Center, make sure to provide as much information about the problem as possible, including:
The circumstances surrounding the error or failure.
The exact content of any message(s) displayed on the host system(s).
The exact content of any message(s) displayed by Storage Navigator.
The service information messages (SIMs), including reference codes and severity levels, displayed by Storage Navigator and/or logged at the host.
The Hitachi Data Systems customer support staff is available 24 hours/day, seven days a week. If you need technical support, please call:
United States: (800) 446-0744
Outside the United States: (858) 547-4526

Comments

Please send us your comments on this document. Make sure to include the document title, number, and revision. Please refer to specific section(s) and paragraph(s) whenever possible.
E-mail: doc.comments@hds.com
Fax: 858-695-1186
Mail:
Technical Writing, M/S 35-10
Hitachi Data Systems
10277 Scripps Ranch Blvd.
San Diego, CA 92131
Thank you! (All comments become the property of Hitachi Data Systems Corporation.)

Product Overview

This chapter provides an overview of the Universal Storage Platform V and VM storage systems.
Universal Storage Platform V Family
New and Improved Capabilities
Specifications at a Glance
Software Products

Universal Storage Platform V Family

The Hitachi Universal Storage Platform™ V family, the industry’s highest performing and most scalable storage solution, represents the first implementation of a large-scale, enterprise-class virtualization layer combined with thin provisioning software, delivering virtualization of internal and external storage into one pool. Users realize the consolidation benefits of external storage virtualization with the efficiencies, power, and cooling advantages of thin provisioning in one integrated solution.
The Universal Storage Platform V family, which includes the USP V floor models and the rack-mounted USP VM, offers a wide range of storage and data services, including thin provisioning with Hitachi Dynamic Provisioning™ software, application-centric storage management and logical partitioning, and simplified and unified data replication across heterogeneous storage systems. The Universal Storage Platform V family enables users to deploy applications within a new framework, leverage and add value to current investments, and more closely align IT with business objectives.
The Universal Storage Platform V family is an integral part of the Services Oriented Storage Solutions architecture from Hitachi Data Systems. These storage systems provide the foundation for matching application requirements to different classes of storage and deliver critical services such as:
Business continuity services
Content management services (search, indexing)
Non-disruptive data migration
Volume management across heterogeneous storage arrays
Thin provisioning
Security services (immutability, logging, auditing, data shredding)
Data de-duplication
I/O load balancing
Data classification
File management services
For further information on storage solutions and the Universal Storage Platform V and VM storage systems, please contact your Hitachi Data Systems account team.

New and Improved Capabilities

The Hitachi Universal Storage Platform V and VM storage systems offer the following new and improved capabilities as compared with the TagmaStore Universal Storage Platform and Network Storage Controller:
NEW! Hitachi Dynamic Provisioning™
Hitachi Dynamic Provisioning is a new and advanced thin-provisioning software product that provides “virtual storage capacity” to simplify administration and addition of storage, eliminate application service interruptions, and reduce costs.
Cache capacity
The USP V supports up to 256 GB (128 GB for TagmaStore USP).
Shared memory capacity
The USP V supports up to 32 GB (12 GB for TagmaStore USP). The USP VM supports up to 16 GB (6 GB for TagmaStore NSC).
Total storage capacity (internal and external storage)
The USP V supports up to 247 PB (32 PB for TagmaStore USP). The USP VM supports up to 96 PB (16 PB for TagmaStore NSC).
Aggregate bandwidth
The USP V provides an aggregate bandwidth of up to 106 GB/sec (81 GB/sec for TagmaStore USP).
Fibre-channel ports
The USP V supports up to 224 FC ports (192 for TagmaStore USP).
FICON® ports
The USP V supports up to 112 FICON ports (96 for TagmaStore USP). The USP VM supports up to 24 FICON ports (16 for TagmaStore NSC).
ESCON® ports
The USP V supports up to 112 ESCON ports (96 for TagmaStore USP).
Open-system logical devices
The USP VM supports up to 65,536 LDEVs (16,384 for TagmaStore NSC).

Specifications at a Glance

Specifications for the Universal Storage Platform V

Table 1-1 provides a brief overview of the USP V specifications.
Table 1-1 Specifications – Universal Storage Platform V
Controller
  Basic platform packaging unit: integrated control/array frame and 1 to 4 optional array frames
Universal Star Network Crossbar Switch
  Number of switches: 8
  Aggregate bandwidth: 106 GB/sec
  Aggregate IOPS: 4.5 million
Cache Memory
  Boards: 32
  Board capacity: 4 GB or 8 GB
  Maximum: 256 GB
Shared Memory
  Boards: 8
  Board capacity: 4 GB
  Maximum: 32 GB
Front-End Directors (Connectivity)
  Boards: 14
  Fibre-channel host ports per board: 8 or 16
  Maximum fibre-channel host ports: 224
  Virtual host ports: 1,024 per physical port
  Maximum FICON host ports: 112
  Maximum ESCON host ports: 112
Logical Devices (LDEVs)—Maximum Supported
  Open systems: 65,536
  Mainframe: 65,536
Hard Disk Drives
  Type (fibre channel): 73 GB, 146 GB, 300 GB, 400 GB, 750 GB
  Number of drives (minimum–maximum): 4–1152
  Spare drives per system (minimum–maximum): 1–16
Internal Raw Capacity
  Minimum (73-GB disks): 82 TB
  Maximum (750-GB disks): 850.8 TB
Maximum Usable Capacity—RAID-5
  Open systems (750-GB disks): 739.3 TB
  Mainframe (400-GB disks): 374.3 TB
Maximum Usable Capacity—RAID-6
  Open systems (750-GB disks): 633.7 TB
  Mainframe (400-GB disks): 318.5 TB
Maximum Usable Capacity—RAID-1+
  Open systems (750-GB disks): 423.9 TB
  Mainframe (400-GB disks): 207.7 TB
External Storage Support
  Maximum internal and external capacity: 247 PB
Virtual Storage Machines: 32
Standard Back-End Directors: 1–8
Operating System Support
  Mainframe: IBM OS/390®, MVS/ESA™, MVS/XA™, VM/ESA®, VSE/ESA™, z/OS, z/OS.e, z/VM®, zVSE™; Fujitsu MSP; Red Hat Linux for IBM S/390® and zSeries®
  Open systems: Sun Solaris, HP-UX, IBM AIX®, Microsoft® Windows, Novell NetWare, Red Hat and SuSE Linux, VMWare ESX, HP Tru64, SGI IRIX, HP OpenVMS

Specifications for the Universal Storage Platform VM

Table 1-2 provides a brief overview of the USP VM specifications.
Table 1-2 Specifications – Universal Storage Platform VM
Controller
  Single-rack configuration: controller and up to two disk chassis
  Optional second rack: up to two disk chassis
Universal Star Network Crossbar Switch
  Number of switches: 2
  Aggregate bandwidth: 13.3 GB/sec
  Aggregate IOPS: 1.2 million
Cache Memory
  Boards: 8
  Board capacity: 4 GB or 8 GB
  Maximum: 64 GB
Shared Memory
  Boards: 4
  Board capacity: 4 GB
  Maximum: 16 GB
Front-End Directors (Connectivity)
  Boards: 3
  Fibre-channel host ports per feature: 8 or 16
  Fibre-channel port performance: 4 Gb/sec
  Maximum number of fibre-channel host ports: 48
  Virtual host ports: 1,024 per physical port
  Maximum FICON host ports: 24
  Maximum ESCON host ports: 24
Logical Devices (LDEVs)—Maximum Supported
  Open systems: 65,536
  Mainframe: 65,536
Hard Disk Drives
  Capacity (fibre channel): 73 GB, 146 GB, 300 GB, 400 GB, 750 GB
  Number of drives (minimum–maximum): 0–240
  Spare drives per system (minimum–maximum): 1–16
Internal Raw Capacity
  Minimum (73-GB disks): 0 GB (146 GB)
  Maximum (750-GB disks): 177 TB
Maximum Usable Capacity—RAID-5
  Open systems (750-GB disks): 144.7 TB
  Mainframe (400-GB disks): 73.3 TB
Maximum Usable Capacity—RAID-6
  Open systems (750-GB disks): 124 TB
  Mainframe (400-GB disks): 62.4 TB
Maximum Usable Capacity—RAID-1+
  Open systems (750-GB disks): 87.1 TB
  Mainframe (400-GB disks): 42.7 TB
External Storage Support
  Maximum internal and external capacity: 96 PB
Virtual Storage Machines: 8
Standard Back-End Directors: 1
Operating System Support
  Mainframe: IBM OS/390®, MVS/ESA™, MVS/XA™, VM/ESA®, VSE/ESA™, z/OS, z/OS.e, z/VM®, zVSE™; Fujitsu MSP; Red Hat Linux for IBM S/390® and zSeries®
  Open systems: Sun Solaris, HP-UX, IBM AIX®, Microsoft® Windows, Novell NetWare, Red Hat and SuSE Linux, VMWare ESX, HP Tru64, HP OpenVMS

Software Products

The Universal Storage Platform V and VM provide many advanced features and functions that increase data accessibility and deliver enterprise-wide coverage of online data copy/relocation, data access/protection, and storage resource management. Hitachi Data Systems’ software products and solutions provide a full set of industry-leading copy, availability, resource management, and exchange software to support business continuity, database backup and restore, application testing, and data mining.
Table 1-3 lists and describes the Storage Navigator-based software for the Universal Storage Platform V and VM. Table 1-4 lists and describes the host/server-based software for the Universal Storage Platform V and VM.
NEW – Hitachi Dynamic Provisioning
Hitachi Dynamic Provisioning is a new and advanced thin-provisioning software product for the Universal Storage Platform V/VM that provides “virtual storage capacity” to simplify administration and addition of storage, eliminate application service interruptions, and reduce costs.
Dynamic Provisioning allows storage to be allocated to an application without being physically mapped until it is used. This “just-in-time” provisioning decouples the provisioning of storage to an application from the physical addition of storage capacity to the storage system to achieve overall higher rates of storage utilization. Dynamic Provisioning also transparently spreads many individual I/O workloads across multiple physical disks. This I/O workload balancing feature directly reduces performance and capacity management expenses by eliminating I/O bottlenecks across multiple applications.
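As a conceptual illustration of this just-in-time behavior (not Hitachi's implementation; the pool structure, page size, and round-robin placement below are assumptions made for the example), a thin volume can be modeled as a map from virtual pages to pool pages that is filled in only on first write:

# Conceptual sketch of thin provisioning -- not the Dynamic Provisioning
# implementation. Page size, class names, and the round-robin placement
# policy are assumptions made for illustration.
PAGE_SIZE = 42 * 1024 * 1024  # assumed allocation unit, in bytes

class ThinPool:
    """Pool of physical pages spread across several disks."""
    def __init__(self, disks):
        self.disks = disks        # list of disk names
        self.next_disk = 0        # round-robin pointer
        self.pages_used = 0

    def allocate_page(self):
        disk = self.disks[self.next_disk % len(self.disks)]
        self.next_disk += 1
        self.pages_used += 1
        return (disk, self.pages_used)

class ThinVolume:
    """Virtual volume: capacity is presented up front, pages map lazily."""
    def __init__(self, pool, virtual_capacity_bytes):
        self.pool = pool
        self.virtual_capacity_bytes = virtual_capacity_bytes
        self.page_map = {}        # virtual page index -> pool page

    def write(self, offset, length):
        first = offset // PAGE_SIZE
        last = (offset + length - 1) // PAGE_SIZE
        for index in range(first, last + 1):
            if index not in self.page_map:      # allocate on first write only
                self.page_map[index] = self.pool.allocate_page()

pool = ThinPool(["disk-0", "disk-1", "disk-2", "disk-3"])
volume = ThinVolume(pool, virtual_capacity_bytes=10 * 1024**4)  # 10 TB presented
volume.write(0, 4096)                 # touches one page only
print(pool.pages_used)                # 1: physical use grows only as data is written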
For further information on Hitachi Dynamic Provisioning, please contact your Hitachi Data Systems account team, or visit Hitachi Data Systems online at www.hds.com.
Table 1-3 Storage Navigator-Based Software

Hitachi Storage Navigator (MK-96RD621); Hitachi Storage Navigator Messages (MK-96RD613): Obtains system configuration and status information and sends user-requested commands to the storage systems. Serves as the integrated user interface for all Resource Manager components.

NEW: Hitachi Dynamic Provisioning (MK-96RD641): Provides “virtual storage capacity” to simplify administration and addition of storage, eliminate application service interruptions, and reduce costs. See Hitachi Dynamic Provisioning.

Hitachi TrueCopy (MK-96RD622); Hitachi TrueCopy for IBM z/OS (MK-96RD623): Enables the user to perform remote copy operations between storage systems in different locations. TrueCopy provides synchronous and asynchronous copy modes for open-system and mainframe data.

Hitachi ShadowImage (MK-96RD618); Hitachi ShadowImage for IBM z/OS (MK-96RD619): Allows the user to create internal copies of volumes for purposes such as application testing and offline backup. Can be used in conjunction with TrueCopy to maintain multiple copies of data at primary and secondary sites.

Hitachi Compatible Mirroring for IBM FlashCopy (MK-96RD614): Provides compatibility with the IBM FlashCopy mainframe host software function, which performs server-based data replication for mainframe data.

Hitachi Universal Replicator (MK-96RD624); Hitachi Universal Replicator for IBM z/OS (MK-96RD625): Provides a RAID storage-based hardware solution for disaster recovery which enables fast and accurate system recovery, particularly for large amounts of data which span multiple volumes. Using UR, you can configure and manage highly reliable data replication systems using journal volumes to reduce chances of suspension of copy operations.

Hitachi Compatible Replication for IBM XRC* (MK-96RD610): Provides compatibility with the IBM Extended Remote Copy (XRC) mainframe host software function, which performs server-based asynchronous remote copy operations for mainframe LVIs.

Hitachi Copy-on-Write Snapshot (MK-96RD607): Provides ShadowImage functionality using less capacity of the storage system and less time for processing than ShadowImage by using “virtual” secondary volumes. COW Snapshot is useful for copying and managing data in a short time with reduced cost. ShadowImage provides higher data integrity.

Hitachi Universal Volume Manager (MK-96RD626): Realizes the virtualization of the storage system. Users can connect other storage systems to the USP V/VM and access the data on the external storage system over virtual devices on the USP V/VM. Functions such as TrueCopy and Cache Residency can be performed on the external data.

Hitachi Virtual Partition Manager (MK-96RD629): Provides storage logical partition and cache logical partition:
Storage logical partition allows you to divide the available storage among various users to reduce conflicts over usage.
Cache logical partition allows you to divide the cache into multiple virtual cache memories to reduce I/O contention.

Hitachi LUN Manager (MK-96RD615): Enables users to configure the fibre-channel ports and devices (LUs) for operational environments (for example, arbitrated-loop and fabric topologies, host failover support).

Hitachi SNMP Agent (MK-96RD620): Provides support for SNMP monitoring and management. Includes Hitachi-specific MIBs and enables SNMP-based reporting on status and alerts. The SNMP agent on the SVP gathers usage and error information and transfers the information to the SNMP manager on the host.

Audit Log (MK-96RD606): Provides detailed records of all operations performed using Storage Navigator (and the SVP).
Encrypted Communications (MK-96RD631): Allows users to employ SSL-encrypted communications with the Hitachi Universal Storage Platform V/VM.

Hitachi LUN Expansion (MK-96RD616): Allows open-system users to concatenate multiple LUs into single LUs to enable open-system hosts to access the data on the entire Universal Storage Platform V/VM using fewer logical units.

Hitachi Virtual LVI/LUN (MK-96RD630): Enables users to convert single volumes (LVIs or LUs) into multiple smaller volumes to improve data access performance.

Hitachi Cache Residency Manager (MK-96RD609): Allows users to “lock” and “unlock” data into cache in real time to optimize access to your most frequently accessed data.

Hitachi Compatible PAV (MK-96RD608): Enables the mainframe host to issue multiple I/O requests in parallel to single LDEVs in the USP V/VM. Compatible PAV provides compatibility with the IBM Workload Manager (WLM) host software function and supports both static and dynamic PAV functionality.

Hitachi LUN Security (MK-96RD615); Hitachi Volume Security (MK-96RD628): Allows users to restrict host access to data on the USP V/VM. Open-system users can restrict host access to LUs based on the host's worldwide name (WWN). Mainframe users can restrict host access to LVIs based on node IDs and logical partition (LPAR) numbers.

Hitachi Database Validator* (MK-96RD611): Prevents corrupted data environments by identifying and rejecting corrupted data blocks before they are written onto the storage disk, thus minimizing risk and potential costs in backup, restore, and recovery operations.

Hitachi Data Retention Utility (MK-96RD612); Hitachi Volume Retention Manager (MK-96RD627): Allows users to protect data from I/O operations performed by hosts. Users can assign an access attribute to each logical volume to restrict read and/or write operations, preventing unauthorized access to data.

Hitachi Performance Monitor (MK-96RD617): Performs detailed monitoring of storage system and volume activity.

Hitachi Volume Migration (MK-96RD617): Performs automatic relocation of volumes to optimize performance.

Hitachi Server Priority Manager* (MK-96RD617): Allows open-system users to designate prioritized ports (for example, for production servers) and non-prioritized ports (for example, for development servers) and set thresholds and upper limits for the I/O activity of these ports.

Volume Shredder (MK-96RD630): Enables users to overwrite data on logical volumes with dummy data.

* Please contact your Hitachi Data Systems account team for the latest information on the availability of these features.
Table 1-4 Host/Server-Based Software

Hitachi Command Control Interface (User and Reference Guide: MK-90RD011): Enables open-system users to perform data replication and data protection operations by issuing commands from the host to the Hitachi storage systems. The CCI software supports scripting and provides failover and mutual hot standby functionality in cooperation with host failover products.

Hitachi Cross-OS File Exchange; Hitachi Code Converter (User's Guide: MK-96RD647; Code Converter: MK-94RD253): Enables users to transfer data between mainframe and open-system platforms using the FICON and/or ESCON channels, for high-speed data transfer without requiring network communication links or tape.

HiCommand Global Link Availability Manager (User's Guide: MK-95HC106; Installation & Admin: MK-95HC107; Messages: MK-95HC108): Provides simple, integrated, single-point, multipath storage connection management and reporting. Improves system reliability and reduces downtime by automated path health checks, reporting alerts and error information from hosts, and assisting with rapid troubleshooting. Administrators can optimize application performance by controlling path bandwidth (per host LUN load balancing), and keep applications online while performing tasks that require taking a path down by easily switching to and from alternate paths.

Hitachi Dynamic Link Manager (Concepts & Planning: MK-96HC144; For AIX: MK-92DLM111; For HP-UX: MK-92DLM112; For Linux: MK-92DLM113; For Solaris: MK-92DLM114; For Windows: MK-92DLM129): Provides automatic load balancing, path failover, and recovery capabilities in the event of a path failure.

HiCommand Device Manager (Web Client: MK-91HC001; Server Inst & Config: MK-91HC002; CLI: MK-91HC007; Messages: MK-92HC016; Agent: MK-92HC019): Enables users to manage the Hitachi storage systems and perform functions (e.g., LUN Manager, ShadowImage) from virtually any location via the Device Manager Web Client, command line interface (CLI), and/or third-party application.

HiCommand Provisioning Manager (User's Guide: MK-93HC035; Server: MK-93HC038; Messages: MK-95HC117): Designed to handle a variety of storage systems to simplify storage management operations and reduce costs. Works together with HiCommand Device Manager to provide the functionality to integrate, manipulate, and manage storage using provisioning plans.

Hitachi Business Continuity Manager (Installation: MK-95HC104; Reference Guide: MK-95HC105; User's Guide: MK-94RD247; Messages: MK-94RD262): Enables mainframe users to make Point-in-Time (PiT) copies of production data, without quiescing the application or causing any disruption to end-user operations, for such uses as application testing, business intelligence, and disaster recovery for business continuance.

HiCommand Replication Monitor (Install & Config: MK-96HC131; Messages: MK-96HC132; User's Guide: MK-94HC093): Supports management of storage replication (copy pair) operations, enabling users to view (report) the configuration, change the status, and troubleshoot copy pair issues. Replication Monitor is particularly effective in environments that include multiple storage systems or multiple physical locations, and in environments in which various types of volume replication functionality (such as both ShadowImage and TrueCopy) are used.

HiCommand Tuning Manager (Server Installation: MK-95HC109; Getting Started: MK-96HC120; Server Administration: MK-92HC021; User's Guide: MK-92HC022; CLI: MK-96HC119; Performance Reporter: MK-93HC033; Agent Admin Guide: MK-92HC013; Agent Installation: MK-96HC110; Hardware Agent: MK-96HC111; OS Agent: MK-96HC112; Database Agent: MK-96HC113; Messages: MK-96HC114): Provides intelligent and proactive performance and capacity monitoring as well as reporting and forecasting capabilities of storage resources.

HiCommand Protection Manager (User's Guide: MK-94HC070; Console: MK-94HC071; Command Reference: MK-94HC072; Messages: MK-94HC073): Systematically controls storage systems, backup/recovery products, databases, and other system components to provide efficient and reliable data protection using simple operations without complex procedures or expertise.

HiCommand Tiered Storage Manager (Server: MK-94HC089; User's Guide: MK-94HC090; CLI: MK-94HC091; Messages: MK-94HC092): Enables users to relocate data non-disruptively from one volume to another for purposes of Data Lifecycle Management (DLM). Helps improve the efficiency of the entire data storage system by enabling quick and easy data migration according to the user's environment and requirements.

Hitachi Copy Manager for TPF (Administrator's Guide: MK-92RD129; Messages: MK-92RD130; Operations Guide: MK-92RD131): Enables TPF users to control DASD copy functions on Hitachi RAID storage systems from TPF through an interface that is simple to install and use.

Hitachi Cache Manager (User's Guide: MK-96RD646): Enables users to perform Cache Residency Manager operations from the mainframe host system. Cache Residency Manager allows you to place specific data in cache memory to enable virtually immediate access to this data.

Hitachi Dataset Replication for z/OS (User's Guide: MK-96RD648): Operates together with the ShadowImage feature. Rewrites the OS management information (VTOC, VVDS, and VTOCIX) and dataset name and creates a user catalog for a ShadowImage target volume after a split operation. Provides the prepare, volume divide, volume unify, and volume backup functions to enable use of a ShadowImage target volume.

Architecture and Components

This chapter describes the architecture and components of the Hitachi Universal Storage Platform V and VM storage systems:
Hardware Architecture
Hardware Components
Control Panel and Emergency Power-Off Switch
Intermix Configurations

Hardware Architecture

Figure 2-1 illustrates the hardware architecture of the Universal Storage Platform V storage system. Figure 2-2 illustrates the hardware architecture of the Universal Storage Platform VM storage system. As shown, the USP V and USP VM share the same hardware architecture, differing only in number of features (FEDs, BEDs, etc.), number of hard disk drives (HDDs), and power supply.
Figure 2-1 Universal Storage Platform V Hardware Architecture (controller with FEDs, BEDs, shared memory, cache, and cache switches; up to 64 disk paths; FC-AL at 4 Gbps per port; maximum 1,152 HDDs per storage system)
Figure 2-2 Universal Storage Platform VM Hardware Architecture (controller with channel adapters, disk adapters, shared memory, cache, and cache switches; up to 8 disk paths; FC-AL at 4 Gbps per port; maximum 240 HDDs per storage system)

Multiple Data and Control Paths

The Universal Storage Platform V/VM employs the proven Hi-Star™ crossbar switch architecture, which uses multiple point-to-point data and command paths to provide redundancy and improve performance. Each data and command path is independent. The individual paths between the front-end or back-end directors and cache are steered by high-speed cache switch cards (CSWs). The USP V/VM does not have any common buses, thus eliminating the performance degradation and contention that can occur in bus architecture. All data stored on the USP V/VM is moved into and out of cache over the redundant high-speed paths.

Storage Clusters

Each controller consists of two redundant controller halves called storage clusters. Each storage cluster contains all physical and logical elements (for example, power supplies, channel adapters, disk adapters, cache, control storage) needed to sustain processing within the storage system. Both storage clusters should be connected to each host using an alternate path scheme, so that if one storage cluster fails, the other storage cluster can continue processing for the entire storage system.
The front-end and back-end directors are split between clusters to provide full backup. Each storage cluster also contains a separate, duplicate copy of cache and shared memory contents. In addition to the high-level redundancy that this type of storage clustering provides, many of the individual components within each storage cluster contain redundant circuits, paths, and/or processors to allow the storage cluster to remain operational even with multiple component failures. Each storage cluster is powered by its own set of power supplies, which can provide power for the entire storage system in the event of power supply failure. Because of this redundancy, the USP V/VM can sustain the loss of multiple power supplies and still continue operation.
The redundancy and backup features of the USP V/VM eliminate all active single points of failure, no matter how unlikely, to provide an additional level of reliability and data availability.

Hardware Components

The USP V/VM hardware includes the controller, disk unit, and power supply components. Each component is connected over the cache paths, shared memory paths, and/or disk paths. The USP V/VM controller is fully redundant and has no active single point of failure. All components can be repaired or replaced without interrupting access to user data.
The main hardware components of the USP V and VM storage systems are:
Shared Memory
Cache Memory
Front-End Directors and Host Channels
Back-End Directors and Array Domains
Hard Disk Drives
Service Processor
Power Supplies
Batteries

Shared Memory

The nonvolatile shared memory contains the cache directory and configuration information for the USP V/VM storage system. The path group arrays (for example, for dynamic path selection) also reside in the shared memory. The shared memory is duplexed, and each side of the duplex resides on the first two shared memory cards, which are in clusters 1 and 2. In the event of a power failure, shared memory is protected for at least 36 hours by battery backup.
The Universal Storage Platform V can be configured with up to 32 GB of shared memory, and the Universal Storage Platform VM can be configured with up to 16 GB of shared memory. The size of the shared memory is determined by several factors, including total cache size, number of logical devices (LDEVs), and replication function(s) in use. Any required increase beyond the base size is automatically shipped and configured during the installation or upgrade process.

Cache Memory

The Universal Storage Platform V can be configured with up to 256 GB of cache, and the Universal Storage Platform VM can be configured with up to 64 GB of cache memory. All cache memory in the USP V/VM is nonvolatile and is protected for at least 36 hours by battery backup.
The Universal Storage Platform V and VM storage systems place all read and write data in cache. The amount of fast-write data in cache is dynamically managed by the cache control algorithms to provide the optimum amount of read and write cache, depending on the workload read and write I/O characteristics.
The cache is divided into two equal areas (called cache A and cache B) on separate cards. Cache A is in cluster 1, and cache B is in cluster 2. The Universal Storage Platform V/VM places all read and write data in cache. Write data is normally written to both cache A and B with one channel write operation, so that the data is always duplicated (duplexed) across logic and power boundaries. If one copy of write data is defective or lost, the other copy is immediately destaged to disk. This “duplex cache” design ensures full data integrity in the unlikely event of a cache memory or power-related failure.
Note: Mainframe hosts can specify special attributes (for example, cache fast write (CFW) command) to write data (typically sort work data) without write duplexing. This data is not duplexed and is usually given a discard command at the end of the sort, so that the data will not be destaged to the disk drives.

Front-End Directors and Host Channels

The Universal Storage Platform V and VM support all-mainframe, all-open-system, and multiplatform configurations. The front-end directors (FEDs) process the channel commands from the hosts and manage host access to cache. In the mainframe environment, the front-end directors perform CKD-to-FBA and FBA-to-CKD conversion for the data in cache.
Each front-end director feature (pair of boards) is composed of one type of host channel interface: fibre-channel, FICON, or Extended Serial Adapter (ExSA) (compatible with ESCON protocol). The channel interfaces on each board can transfer data simultaneously and independently.
The FICON and fibre-channel FED features are available in shortwave (multimode) and longwave (single mode) versions. When configured with shortwave features, the USP V/VM can be located up to 500 meters (1,640 feet) from the host. When configured with longwave features, the USP V/VM can be located up to ten kilometers from the host(s).
FICON. The FICON features provide data transfer speeds of up to 4 Gbps and have 8 ports per feature (pair of boards).
Note: FICON data transmission rates vary according to configuration (summarized in the sketch after this list):
S/390 Parallel Enterprise Servers - Generation 5 (G5) and Generation 6 (G6) only support FICON at 1 Gbps.
z800 and z900 series hosts have the following possible configurations: FICON channels operate at 1 Gbps ONLY. FICON EXPRESS channel transmission rates vary according to microcode release: if the microcode is 3G or later, the channel will auto-negotiate to set a 1-Gbps or 2-Gbps transmission rate; if the microcode is previous to 3G, the channel will operate at 1 Gbps ONLY.
For further information on FICON connectivity, refer to the Mainframe Host Attachment and Operations Guide (MK-96RD645), or contact your Hitachi Data Systems representative.
ESCON. The ExSA features provide data transfer speeds of up to 17 MB/sec and have 8 ports per feature (pair of boards). Each ExSA channel can be directly connected to a CHPID or a serial channel director. Shared serial channels can be used for dynamic path switching. The USP V/VM also supports the ESCON Extended Distance Feature (XDF).
Fibre-Channel. The fibre-channel features provide data transfer speeds of up to 4 Gbps and can have either 8 or 16 ports per feature (pair of boards). The USP V/VM supports shortwave (multimode) and longwave (single-mode) versions of fibre-channel ports on the same adapter board.
Note: Fibre-channel connectivity is also supported for IBM mainframe attachment when host FICON channel paths are defined to operate in fibre-channel protocol (FCP) mode.
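The FICON rate rules above can be condensed into a small lookup. The sketch below is only a reading aid for the notes in this list; the host labels and the "3G" microcode designation come from the text, and it is not a Hitachi or IBM configuration utility.

# Reading aid only: condenses the FICON transmission-rate notes above.
# Host labels and the "3G" microcode designation come from the text;
# this is not a Hitachi or IBM configuration utility.
def possible_ficon_rates_gbps(host, channel="FICON", microcode_3g_or_later=False):
    """Return the set of link rates (in Gbps) implied by the notes above."""
    if host in ("G5", "G6"):                    # S/390 Parallel Enterprise Servers
        return {1}
    if host in ("z800", "z900"):
        if channel == "FICON":                  # plain FICON channel: 1 Gbps only
            return {1}
        if channel == "FICON EXPRESS":
            # 3G or later microcode auto-negotiates 1 or 2 Gbps; earlier is 1 Gbps only
            return {1, 2} if microcode_3g_or_later else {1}
    raise ValueError("configuration not covered by the notes above")

print(possible_ficon_rates_gbps("z900", "FICON EXPRESS", microcode_3g_or_later=True))  # {1, 2}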
Table 2-1 lists the specifications and configurations for the front-end directors and specifies the number of channel connections for each configuration.
Table 2-1 Front-End Director and Channel Specifications

Number of front-end director features: USP V: 1–8 (14 when FEDs are installed in BED slots); USP VM: 1–3
Simultaneous data transfers per FED pair: FICON: 8; ExSA (ESCON): 8; fibre-channel: 8 or 16
Maximum data transfer rate: FICON: 400 MB/sec (4 Gbps); ExSA (ESCON): 17 MB/sec; fibre-channel: 400 MB/sec (4 Gbps)
Physical interfaces per FED pair: FICON: 8; ExSA (ESCON): 8; fibre-channel: 8 or 16
Max. physical FICON interfaces per system: USP V: 112; USP VM: 24
Max. physical ExSA interfaces per system: USP V: 112; USP VM: 24
Max. physical fibre-channel interfaces per system: USP V: 224; USP VM: 48
Logical paths per FICON port: 2105 emulation: 65,536 (1,024 host paths × 64 CUs); 2107 emulation: 261,120 (1,024 host paths × 255 CUs)
Logical paths per ExSA (ESCON) port: 512 (32 host paths × 16 CUs)*
Max. FICON logical paths per system: 2105 emulation: 131,072; 2107 emulation: 522,240
Max. ExSA (ESCON) logical paths per system: 8,192
Maximum LUs per fibre-channel port: 2,048
Maximum LDEVs per storage system: USP V: 130,560 (256 LDEVs × 510 CUs); USP VM: 65,280

*Note: When the number of devices per CHL image is limited to a maximum of 1024, 16 CU images can be assigned per CHL image. If one CU includes 256 devices, the maximum number of CUs per CHL image is limited to 4.
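The logical-path figures in Table 2-1 and its footnote are simple products; the short calculation below just reproduces that arithmetic and involves no product interface.

# Reproduces the arithmetic behind the logical-path figures in Table 2-1.
host_paths_per_ficon_port = 1024
print(host_paths_per_ficon_port * 64)    # 65,536 logical paths per FICON port (2105 emulation)
print(host_paths_per_ficon_port * 255)   # 261,120 logical paths per FICON port (2107 emulation)
print(32 * 16)                           # 512 logical paths per ExSA (ESCON) port

# Footnote: with at most 1,024 devices per CHL image and 256 devices in one CU,
# only 1024 // 256 = 4 CU images fit in a CHL image.
print(1024 // 256)                       # 4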

Back-End Directors and Array Domains

The back-end director (BED) features control the transfer of data between the disk drives and cache. The BEDs are installed in pairs for redundancy and performance. The USP V can be configured with up to eight BED pairs, providing up to 64 concurrent data transfers to and from the disk drives. The USP VM is configured with one BED pair, which provides eight concurrent data transfers to and from the disk drives.
The disk drives are connected to the BED pairs by fibre cables using an arbitrated-loop (FC-AL) topology. Each BED pair has eight independent fibre back-end paths controlled by eight back-end microprocessors. Each dual-ported fibre-channel disk drive is connected through its two ports to each board in a BED pair over separate physical paths for improved performance as well as redundancy.
Table 2-2 lists the BED specifications. Each BED pair contains eight buffers (one per fibre path) that support data transfer to and from cache. Each dual-ported disk drive can transfer data over either port. Each of the two paths shared by the disk drive is connected to a separate board in the BED pair to provide alternate path capability. Each BED pair is capable of eight simultaneous data transfers to or from the HDDs.
Table 2-2 BED Specifications

Number of back-end director features: USP V: 1–8; USP VM: 1
Back-end paths per BED feature: 8
Back-end paths per storage system: USP V: 8–64; USP VM: 8
Back-end array interface type: fibre-channel arbitrated loop (FC-AL)
Back-end interface transfer rate (burst rate): 400 MB/sec (4 Gbps)
Maximum concurrent back-end operations per BED feature: 8
Maximum concurrent back-end operations per storage system: USP V: 64; USP VM: 8
Back-end (data) bandwidth: USP V: 68 GB/sec; USP VM: 8.5 GB/sec
Figure 2-3 illustrates a conceptual array domain. All functions, paths, and disk drives controlled by one BED pair are called an “array domain.” An array domain can contain a variety of LVI and/or LU configurations. RAID-level intermix (all RAID types) is allowed within an array domain (under a BED pair) but not within an array group.
Figure 2-3 Conceptual Array Domain (one BED pair with eight fibre ports, 0–7, each running an FC-AL loop of up to 64 HDDs; the RAID groups shown are 7D+1P/4D+4D and 3D+1P/2D+2D; a 3D+1P/2D+2D RAID group consists of fibre port numbers 0, 2, 4, and 6, or 1, 3, 5, and 7)
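To make the RAID group notations concrete, the rough sketch below estimates usable capacity per array group for the group types named in Figure 2-3. It deliberately ignores formatting overhead, spare drives, and decimal-versus-binary unit differences, so its results will not match the usable-capacity figures quoted in Tables 1-1 and 1-2.

# Rough estimate only: usable capacity for the RAID group types named in
# Figure 2-3. Ignores formatting overhead, spare drives, and decimal vs.
# binary units, so it will not match the usable capacities in Tables 1-1/1-2.
RAID_GROUPS = {
    "RAID-5 (7D+1P)":  (8, 7 / 8),   # 8 drives, 7 hold data
    "RAID-5 (3D+1P)":  (4, 3 / 4),
    "RAID-1+ (4D+4D)": (8, 1 / 2),   # mirrored
    "RAID-1+ (2D+2D)": (4, 1 / 2),
}

def usable_tb_per_array_group(drive_gb, group):
    drives, data_fraction = RAID_GROUPS[group]
    return drives * drive_gb * data_fraction / 1000   # decimal TB

for group in RAID_GROUPS:
    print(group, round(usable_tb_per_array_group(750, group), 2), "TB with 750-GB drives")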

Hard Disk Drives

The Universal Storage Platform V/VM uses disk drives with fixed-block-architecture (FBA) format. Table 2-3 lists and describes the currently available hard disk drives: 72 GB, 146 GB, 300 GB, 400 GB, and 750 GB.

Table 2-3 Disk Drive Specifications

Disk Drive Size   Formatted Capacity*   Revolution Speed   Interface   Interface Data Transfer Rate (maximum)
72 GB             71.50 GB              15,000 rpm         FC          400 MB/s
146 GB            143.76 GB             15,000 rpm         FC          400 MB/s
300 GB            288.20 GB             10,000 rpm         FC          200 MB/s
300 GB            288.20 GB             15,000 rpm         FC          400 MB/s
400 GB            393.85 GB             10,000 rpm         FC          400 MB/s
750 GB            738.62 GB             7,200 rpm          SATA        300 MB/s

* The storage capacity values for the disk drives (raw capacity) are calculated based on the following values: 1 KB = 1,000 bytes, 1 MB = 1,000² bytes, 1 GB = 1,000³ bytes, 1 TB = 1,000⁴ bytes.
Each disk drive can be replaced non-disruptively on site. The USP V/VM utilizes diagnostic techniques and background dynamic scrubbing that detect and correct disk errors. Dynamic sparing is invoked automatically if needed. For an array group of any RAID level, any spare disk drive can back up any other disk drive of the same rotation speed and the same or lower capacity anywhere in the storage system, even if the failed disk and the spare disk are in different array domains (attached to different BED pairs). The USP V/VM can be configured with up to 16 spare disk drives. The standard configuration provides one spare drive for each type of drive installed in the storage system. The Hi-Track monitoring and reporting tool detects disk failures and notifies the Hitachi Data Systems Support Center automatically, and a service representative is sent to replace the disk drive.
Note: The spare disk drives are used only as replacements and are not included in the storage capacity ratings of the storage system.

Service Processor

The Universal Storage Platform V/VM includes a built-in custom PC called the service processor (SVP). The SVP is integrated into the controller and can only be used by authorized Hitachi Data Systems personnel. The SVP enables the Hitachi Data Systems representative to configure, maintain, service, and upgrade the storage system. The SVP also provides the Storage Navigator functionality, and it collects performance data for the key components of the USP V/VM to enable diagnostic testing and analysis. The SVP is connected with a service center for remote maintenance of the storage system.
Note: The SVP does not have access to any user data stored on the Universal Storage Platform V/VM.
Architecture and Components 2-11
Hitachi Universal Storage Platform V/VM User and Reference Guide

Power Supplies

Each storage cluster is powered by its own set of redundant power supplies, and each power supply is able to provide power for the entire system, if necessary. Because of this redundancy, the Universal Storage Platform V/VM can sustain the loss of multiple power supplies and still continue to operate. To make use of this capability, the USP V/VM should be connected either to dual power sources or to different power panels, so if there is a failure on one of the power sources, the USP V/VM can continue full operations using power from the alternate source.
The AC power supplied to the USP V/VM is converted by the AC-DC power supply to supply 56V/12V DC power to all storage system components. Each component has its own DC-DC converter to generate the necessary voltage from the 56V/12V DC power that is supplied.

Batteries

The Universal Storage Platform V/VM uses nickel-hydrogen batteries to provide backup power for the control and operational components (cache memory, shared memory, FEDs, BEDs) as well as the hard disk drives. The configuration of the storage system and the operational conditions determine the number and type of batteries that are required.

Control Panel and Emergency Power-Off Switch

Control Panel

Figure 2-4 shows the location of the control panel on the USP V, and Figure 2-5 shows the location of the control panel on the USP VM. Table 2-4 describes the items on the USP V/VM control panel. To open the control panel cover, push and release on the point marked PUSH.
[Figure: Front view of the USP V showing the control panel location. The panel includes the SUBSYSTEM READY, ALARM, MESSAGE, and RESTART items, the PS ENABLE/DISABLE and PS ON/OFF switches, the EMERGENCY, BS-ON, and PS-ON indicators, and the REMOTE MAINTENANCE PROCESSING and ENABLE controls.]
Figure 2-4 Location of Control Panel on the USP V
[Figure: Front view of the USP VM showing the control panel location. The panel carries the same indicators and switches as the USP V control panel.]
Figure 2-5 Location of Control Panel on the USP VM
Table 2-4 Control Panel (USP V and USP VM)
Name (type): Description

SUBSYSTEM READY (LED, green): When lit, indicates that input/output operation on the channel interface is possible. Applies to both storage clusters.

SUBSYSTEM ALARM (LED, red): When lit, indicates that low DC voltage, high DC current, abnormally high temperature, or a failure has occurred. Applies to both storage clusters.

SUBSYSTEM MESSAGE (LED, amber): On: indicates that a SIM (message) was generated from either of the clusters. Blinking: indicates that an SVP failure has occurred. Applies to both storage clusters.

SUBSYSTEM RESTART (switch): Used to un-fence a fenced drive path and to release the Write Inhibit command. Applies to both storage clusters.

REMOTE MAINTENANCE PROCESSING (LED, amber): When lit, indicates that remote maintenance activity is in process. If remote maintenance is not in use, this LED is not lit. Applies to both storage clusters.

REMOTE MAINTENANCE ENABLE/DISABLE (switch): Used for remote maintenance. While remote maintenance is executing (the REMOTE MAINTENANCE PROCESSING LED is blinking), switching from ENABLE to DISABLE interrupts remote maintenance. If the remote maintenance function is not used, this switch has no effect. Applies to both storage clusters.

BS-ON (LED, amber): Indicates that input power is available.

PS-ON (LED, green): Indicates that the storage system is powered on. Applies to both storage clusters.

PS SW ENABLE (switch): Used to enable the PS ON/PS OFF switch. To enable the PS ON/PS OFF switch, turn the PS SW ENABLE switch to the ENABLE position.

PS ON / PS OFF (switch): Used to power the storage system on and off. This switch is valid when the PS REMOTE/LOCAL switch is set to LOCAL. Applies to both storage clusters.

EMERGENCY (LED, red): Shows the status of the EPO switch on the rear door. OFF: indicates that the EPO switch is off. ON: indicates that the EPO switch is on.

Emergency Power-Off Switch

Figure 2-6 shows the location of the emergency power-off (EPO) switch on the USP V (top right corner of the back side of the controller frame), and Figure 2-7 shows the location of the EPO switch on the USP VM (next to the control panel on the primary rack). Use the EPO switch only in case of an emergency.
To power off the USP V/VM storage system in case of an emergency, pull the EPO switch up and then out towards you, as illustrated on the switch. The EPO switch must be reset by service personnel before the storage system can be powered on again.
[Figure: Rear view of the controller frame showing the EMERGENCY UNIT with the EMERGENCY POWER OFF switch.]
Figure 2-6 Location of EPO Switch on the USP V
[Figure: Front view of the primary rack showing the EMERGENCY UNIT with the EMERGENCY POWER OFF switch.]
Figure 2-7 Location of EPO Switch on the USP VM

Intermix Configurations

RAID-Level Intermix

RAID technology provides full fault-tolerance capability for the disk drives of the Universal Storage Platform V/VM. The cache management algorithms enable the USP V to stage up to one full RAID stripe of data into cache ahead of the current access to allow subsequent access to be satisfied from cache at host channel transfer speeds.
The Universal Storage Platform V supports RAID-1, RAID-5, RAID-6, and intermixed RAID-level configurations, including intermixed array groups within an array domain. Figure 2-8 illustrates an intermix of RAID levels. All types of array groups (RAID-5 3D+1P, 7D+1P; RAID-1 2D+2D, 4D+4D; RAID-6 6D+2P) can be intermixed under one BED pair.
[Figure: Two BED pairs (for example, the 1st/3rd/5th/7th and the 2nd/4th/6th/8th BED pair), each consisting of a BED in cluster 1 (CL1) and a BED in cluster 2 (CL2) with fibre ports 0–3. RAID groups of different levels (2D+2D, 3D+1P, 7D+1P/6D+2P, and 4D+4D) are intermixed across HDDs 00–47 under the BED pairs.]
Figure 2-8 Sample RAID Level Intermix

Hard Disk Drive Intermix

All hard disk drives (HDDs) in one array group (parity group) must be of the same capacity and type. Different HDD types can be attached to the same BED pair, but because all HDDs under a single BED pair must operate at the same data transfer rate (200 or 400 MB/sec), certain restrictions apply. For example, when an array group consisting of 200-MB/sec HDDs is intermixed with an array group consisting of 400-MB/sec HDDs under the same BED pair, both array groups operate at 200 MB/sec.
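This rule amounts to: the effective transfer rate under a BED pair is the lowest rate among the array groups attached to it. The following Python sketch is a simplified illustration of that rule only; the function and data structures are hypothetical and are not a Hitachi configuration interface.

    # Simplified illustration of the intermix rule: all HDDs under one BED pair
    # operate at the lowest data transfer rate present (200 or 400 MB/sec).
    # The function and its inputs are hypothetical examples.

    def effective_bed_pair_rate(array_group_rates_mb_s):
        """Return the rate (MB/sec) at which all array groups under a BED pair run."""
        if not array_group_rates_mb_s:
            raise ValueError("a BED pair must have at least one array group")
        return min(array_group_rates_mb_s)

    # Example: one 200-MB/sec array group intermixed with one 400-MB/sec group.
    print(effective_bed_pair_rate([200, 400]))   # -> 200
    print(effective_bed_pair_rate([400, 400]))   # -> 400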

Device Emulation Intermix

Figure 2-9 illustrates an intermix of device emulation types. The Universal Storage Platform V supports an intermix of all device emulations on the same BED pair, with the restriction that the devices in each array group have the same type of track geometry or format.
The Virtual LVI/LUN function enables different logical volume types to coexist. When Virtual LVI/LUN is not being used, an array group can be configured with only one device type (for example, 3390-3 or 3390-9, not 3390-3 and 3390-9). When Virtual LVI/LUN is being used, you can intermix 3390 device types, and you can intermix OPEN-x device types, but you cannot intermix 3390 and OPEN device types.
Note: For the latest information on supported LU types and intermix requirements, please contact your Hitachi Data Systems account team.
[Figure: A USP V controller frame with four BED pairs (1st through 4th) attached to array frames. Array groups with different device emulation types (3390-3, 3390-9, OPEN-3, and OPEN-V) are intermixed under the same BED pairs.]
Figure 2-9 Sample Device Emulation Intermix
3
Functional and Operational Characteristics
This chapter discusses the functional and operational capabilities of the USP V:
RAID Implementation
CU Images, LVIs, and LUs
Storage Navigator
System Option Modes, Host Modes, and Host Mode Options
Mainframe Operations
Open-Systems Operations
Battery Backup Operations

RAID Implementation

This section provides an overview of the implementation of RAID technology on the Universal Storage Platform V:
Array Groups and RAID Levels
Sequential Data Striping
LDEV Striping Across Array Groups

Array Groups and RAID Levels

The array group (also called parity group) is the basic unit of storage capacity for the USP V. Each array group is attached to both boards of a BED pair over 16 fibre paths, which enables all disk drives in the array group to be accessed simultaneously by the BED pair. Each array frame has two canister mounts, and each canister mount can have up to 128 physical disk drives.
The USP V supports the following RAID levels: RAID-1, RAID-5, RAID-6, and RAID-1+0 (also known as RAID-10). RAID-0 is not supported on the USP V. When configured in four-drive RAID-5 parity groups (3D+1P), ¾ of the raw capacity is available to store user data, and ¼ of the raw capacity is used for parity data.
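The usable-capacity arithmetic follows directly from the data-to-redundancy drive counts of each array-group type described in this section. The Python sketch below is illustrative only; the helper name and table are hypothetical, not a Hitachi-defined calculation tool.

    # Illustrative usable-capacity arithmetic for the array-group types in this
    # section (RAID-1 2D+2D/4D+4D, RAID-5 3D+1P/7D+1P, RAID-6 6D+2P).
    # Hypothetical helper, not a Hitachi-defined calculation tool.

    def usable_fraction(data_drives: int, redundancy_drives: int) -> float:
        """Fraction of raw capacity available for user data in one array group."""
        return data_drives / (data_drives + redundancy_drives)

    layouts = {
        "RAID-1 (2D+2D)": (2, 2),
        "RAID-1 (4D+4D)": (4, 4),
        "RAID-5 (3D+1P)": (3, 1),
        "RAID-5 (7D+1P)": (7, 1),
        "RAID-6 (6D+2P)": (6, 2),
    }
    for name, (d, p) in layouts.items():
        print(f"{name}: {usable_fraction(d, p):.0%} of raw capacity for user data")

For example, the 3D+1P entry prints 75%, matching the ¾ figure quoted above.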
RAID-1. Figure 3-1 illustrates a sample RAID-1 (2D+2D) layout. A RAID-1 (2D+2D) array group consists of two pairs of disk drives in a mirrored configuration, regardless of disk drive capacity. A RAID-1 (4D+4D) group* combines two RAID-1 (2D+2D) groups. Data is striped to two drives and mirrored to the other two drives. The stripe consists of two data chunks. The primary and secondary stripes are toggled back and forth across the physical disk drives for high performance. Each data chunk consists of either eight logical tracks (mainframe) or 768 logical blocks (open systems). A failure in a drive causes the corresponding mirrored drive to take over for the failed drive. Although the RAID-5 implementation is appropriate for many applications, the RAID-1 option on the USP V is ideal for workloads with low cache-hit ratios.
*Note for RAID-1 (4D+4D): It is recommended that both RAID-1 (2D+2D) groups within a RAID-1 (4D+4D) group be configured under the same BED pair.
[Figure: RAID-1 using 2D+2D and 3390-x LDEVs. Tracks are written in eight-track chunks; chunks for tracks 0–7, 16–23, 32–39, and 48–55 are placed on one mirrored drive pair, and chunks for tracks 8–15, 24–31, 40–47, and 56–63 are placed on the other mirrored drive pair, with each chunk written identically to both drives of its pair.]
Figure 3-1 Sample RAID-1 2D + 2D Layout
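The layout in Figure 3-1 can be read as a simple mapping: mainframe tracks are grouped into eight-track chunks, consecutive chunks alternate between the two mirrored drive pairs, and each chunk is written to both drives of its pair. The Python sketch below models that pattern for illustration only; the names and return values are hypothetical and this is not the storage system's internal mapping algorithm.

    # Illustrative model of the Figure 3-1 pattern: 8-track chunks alternate
    # between the two mirrored drive pairs of a RAID-1 (2D+2D) group.
    # Hypothetical example, not the storage system's internal mapping.

    TRACKS_PER_CHUNK = 8  # mainframe chunk size described in the text

    def raid1_2d2d_location(track: int):
        """Return (chunk_index, drive_pair) for a logical track number."""
        chunk = track // TRACKS_PER_CHUNK
        drive_pair = chunk % 2      # chunks toggle between pair 0 and pair 1
        return chunk, drive_pair    # the chunk is mirrored on both drives of the pair

    for t in (0, 7, 8, 23, 63):
        chunk, pair = raid1_2d2d_location(t)
        print(f"track {t}: chunk {chunk}, mirrored drive pair {pair}")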
RAID-5. A RAID-5 array group consists of four (3D+1P) or eight (7D+1P) disk drives. The data is written across the four (or eight) disk drives in a stripe that has three (or seven) data chunks and one parity chunk. Each chunk contains either eight logical tracks (mainframe) or 768 logical blocks (open). The enhanced RAID-5+ implementation in the USP V minimizes the write penalty incurred by standard RAID-5 implementations by keeping write data in cache until an entire stripe can be built and then writing the entire data stripe to the disk drives. The 7D+1P RAID-5 increases usable capacity and improves performance.
Figure 3-2 illustrates RAID-5 data stripes mapped over four physical drives. Data and parity are striped across each of the disk drives in the array group (hence the term “parity group”). The logical devices (LDEVs) are evenly dispersed in the array group, so that the performance of each LDEV within the array group is the same.
Figure 3-2 also shows the parity chunks that are the “Exclusive OR” (EOR) of the data chunks. The parity and data chunks rotate after each stripe. The total data in each stripe is either 24 logical tracks (eight tracks per chunk) for mainframe data, or 2304 blocks (768 blocks per chunk) for open-systems data. Each of these array groups can be configured as either 3390-x or OPEN-x logical devices. All LDEVs in the array group must be the same format (3390-x or OPEN-x). For open systems, each LDEV is mapped to a SCSI address, so that it has a TID and logical unit number (LUN).
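Because the parity chunk is the exclusive OR of the data chunks in a stripe, any single missing chunk can be reconstructed by XORing the surviving chunks. The following Python sketch demonstrates that property on arbitrary byte strings; it is a conceptual illustration only, not the storage system's parity engine.

    # Conceptual illustration of RAID-5 parity: the parity chunk is the
    # exclusive OR (EOR) of the data chunks, so any one lost chunk can be
    # rebuilt by XORing the remaining chunks. Not the actual parity engine.

    def xor_chunks(chunks):
        """XOR equal-length byte strings together."""
        result = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, b in enumerate(chunk):
                result[i] ^= b
        return bytes(result)

    data = [b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"]   # 3 data chunks (3D+1P)
    parity = xor_chunks(data)                        # parity chunk

    # Simulate losing the second data chunk and rebuilding it:
    rebuilt = xor_chunks([data[0], data[2], parity])
    assert rebuilt == data[1]
    print("rebuilt chunk:", rebuilt)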
[Figure: RAID-5 using 3D+1P and 3390-x LDEVs. Each stripe consists of three eight-track data chunks and one parity chunk spread across the four drives, and the position of the parity chunk rotates from drive to drive with each successive stripe.]
Figure 3-2 Sample RAID-5 3D + 1P Layout (Data Plus Parity Stripe)
RAID-6. A RAID-6 array group consists of eight (6D+2P) disk drives. The data is written across the eight disk drives in a stripe that has six data chunks and two parity chunks. Each chunk contains either eight logical tracks (mainframe) or 768 logical blocks (open).
With RAID-6, data remains available even when up to two drives in an array group fail. RAID-6 is therefore the most reliable of the supported RAID levels.

Sequential Data Striping

The Universal Storage Platform V’s enhanced RAID-5+ implementation attempts to keep write data in cache until parity can be generated without referencing old parity or data. This capability to write entire data stripes, which is usually achieved only in sequential processing environments, minimizes the write penalty incurred by standard RAID-5 implementations. The device data and parity tracks are mapped to specific physical disk drive locations within each array group. Therefore, each track of an LDEV occupies the same relative physical location within each array group in the storage system.
In a RAID-6 (dual parity) configuration, data is striped twice across four rows. RAID-6 uses two parity drives to prevent loss of data in the unlikely event of a second failure during a rebuild of a previous failure.

LDEV Striping Across Array Groups

In addition to the conventional concatenation of RAID-1 array groups (4D+4D), the Universal Storage Platform V supports LDEV striping across multiple RAID-5 array groups for improved LU performance in open-system environments. The advantages of LDEV striping are:
Improved performance, especially of an individual LU, due to an increase in the number of HDDs that constitute an array group.
Better workload distribution: in the case where the workload of one array group is higher than that of another array group, you can distribute the workload by combining the array groups, thereby reducing the total workload concentrated on each specific array group.
The supported LDEV striping configurations are:
LDEV striping across two RAID-5 (7D+1P) array groups (see Figure 3-3). The maximum number of LDEVs in this configuration is 1000.
LDEV striping across four RAID-5 (7D+1P) array groups (see Figure 3-4). The maximum number of LDEVs in this configuration is 2000.
[Figure: Without striping, LDEV#A (chunks A-1 through A-4) resides entirely on array group 1 (7D+1P) and LDEV#B (chunks B-1 through B-4) on array group 2 (7D+1P). With LDEV striping, the chunks of both LDEVs are interleaved across the two array groups: array group 1 holds A-1, B-2, A-3, B-4, and array group 2 holds B-1, A-2, B-3, A-4.]
Figure 3-3 LDEV Striping Across 2 RAID-5 (7D+1P) Array Groups
[Figure: Without striping, LDEV#A through LDEV#D (chunks 1 through 4 each) reside on array groups 1 through 4 (7D+1P each). With LDEV striping, the chunks of all four LDEVs are interleaved across the four array groups; for example, array group 1 holds A-1, D-2, C-3, B-4.]
Figure 3-4 LDEV Striping Across 4 RAID-5 (7D+1P) Array Groups
All disk drives and device emulation types are supported for LDEV striping. LDEV striping can be used in combination with all USP V data management functions.
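The interleave shown in Figures 3-3 and 3-4 follows a round-robin pattern: successive chunks of each LDEV rotate across the participating array groups, with each LDEV starting on a different group. The Python sketch below reproduces that pattern for illustration; the modulo formula is an inference from the figures and is not documented as the storage system's internal placement algorithm.

    # Illustrative round-robin placement matching the pattern in Figures 3-3/3-4.
    # The formula is inferred from the figures; it is not a documented algorithm.

    def striped_group(ldev_index: int, chunk_number: int, num_groups: int) -> int:
        """Array group (0-based) holding chunk `chunk_number` (1-based) of an LDEV."""
        return (chunk_number - 1 + ldev_index) % num_groups

    # Reproduce Figure 3-3 (two 7D+1P array groups, LDEV#A and LDEV#B):
    for ldev_index, name in enumerate("AB"):
        placement = [f"{name}-{c} -> group {striped_group(ldev_index, c, 2) + 1}"
                     for c in range(1, 5)]
        print(", ".join(placement))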

CU Images, LVIs, and LUs

CU Images

The Universal Storage Platform V/VM is configured with one control unit image for each 256 devices (one SSID for each 64 or 256 LDEVs) and supports a maximum of 510 CU images (255 in each logical disk controller, or LDKC).
The USP V/VM supports the following control unit (CU) emulation types:
3990-6, 3990-6E
2105, 2107
Note: The mainframe data management features of the USP V/VM may have restrictions on CU image compatibility.
For further information on CU image support, refer to the Mainframe Host Attachment and Operations Guide (MK-96RD645), or contact your Hitachi Data Systems account team.

Logical Volume Images

The Universal Storage Platform V/VM supports the 3390 (and 3380*) mainframe LVI types:
3390-3, -3R, -9, -L, and –M
Note: The 3390-3 and 3390-3R LVIs cannot be intermixed in the same storage system.
3380-3, -F, -K
Note: The use of 3380 device emulation is restricted to Fujitsu environments.
The LVI configuration of the USP V/VM storage system depends on the RAID implementation and physical disk drive capacities. The LDEVs are accessed using a combination of logical disk controller number (00-01), CU number (00-FE), and device number (00-FF). All control unit images can support an installed LVI range of 00 to FF.
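For illustration, the three identifiers can be treated as a hierarchical hexadecimal address: LDKC (00-01), CU (00-FE), and device (00-FF). The Python sketch below validates and formats such an address; the "LDKC:CU:DEV" notation and the function itself are hypothetical conveniences for this example, not a Hitachi-defined format.

    # Hypothetical helper for the LDKC/CU/device numbering described above
    # (LDKC 00-01, CU 00-FE, device 00-FF). The string format is illustrative.

    def format_ldev_address(ldkc: int, cu: int, dev: int) -> str:
        """Validate the ranges given in the text and return 'LDKC:CU:DEV' in hex."""
        if not 0x00 <= ldkc <= 0x01:
            raise ValueError("LDKC number must be 00-01")
        if not 0x00 <= cu <= 0xFE:
            raise ValueError("CU number must be 00-FE")
        if not 0x00 <= dev <= 0xFF:
            raise ValueError("device number must be 00-FF")
        return f"{ldkc:02X}:{cu:02X}:{dev:02X}"

    print(format_ldev_address(0x00, 0x3A, 0x7F))   # -> 00:3A:7F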

Logical Units

The Universal Storage Platform V/VM is configured with OPEN-V LU types. The OPEN-V LU can vary in size from 48.1 MB to 4 TB. For information on other LU types (OPEN-3, OPEN-9), contact your Hitachi Data Systems representative.
For maximum flexibility in LU configuration, the USP V provides the Virtual LVI/LUN (VLL) and LUN Expansion (LUSE) features. Virtual LVI/LUN allows users to configure multiple LUs under a single LDEV, and LUN Expansion enables users to concatenate multiple LUs into large volumes. For further information on Virtual LVI/LUN and LUN Expansion, please refer to the Virtual LVI/LUN User’s Guide (MK-96RD630) and LUN Expansion User’s Guide (MK-96RD616).

Storage Navigator

The Hitachi Storage Navigator software communicates directly with the Universal Storage Platform V and VM storage systems via a local-area network (LAN) to obtain storage system configuration and status information and to send user-requested commands to the storage systems. Storage Navigator displays detailed storage system information and allows users to configure and perform operations on the Universal Storage Platform V/VM.
Storage Navigator is provided as a Java® applet program that can be executed on any machine that supports a Java Virtual Machine (JVM). A PC hosting the Storage Navigator software is called a remote console. Each time a remote console accesses and logs into the SVP of the desired storage system, the Storage Navigator applet is downloaded from the SVP to the remote console. Figure 3-5 illustrates the remote console and SVP configuration for Storage Navigator.
For further information on Storage Navigator, refer to the Storage Navigator User’s Guide (MK-96RD621).
[Figure: Storage Navigator and SVP configuration. A web client (a web browser running the downloaded Java™ applet program in a JVM™) on the public LAN communicates with the SVP of the Universal Storage Platform. The SVP runs an HTTPD server, an RMI™ server, and a web server (JVM) connected to the storage system over the private LAN; the applet is downloaded from the SVP and exchanges data and configuration information with it.]
Figure 3-5 Storage Navigator and SVP Configuration

System Option Modes, Host Modes, and Host Mode Options

System Option Modes

To provide greater flexibility and enable the Universal Storage Platform V/VM to be tailored to unique customer operating requirements, additional operational parameters, or system option modes, are available. At installation, the modes are set to their default values (specified in Table 3-1 below). Be sure to discuss these settings with your Hitachi Data Systems team. The system option modes can only be changed by a Hitachi Data Systems representative.
Table 3-1 lists the system option mode information for Universal Storage Platform V/VM microcode 60-02-48-00/00. Table 3-2 specifies the relationship between modes 503 and 269 for Storage Navigator operations. Table 3-3 specifies the relationship between modes 503 and 269 for SVP operations.
The system option mode information may change in future microcode releases. Contact your Hitachi Data Systems representative for the latest information on the USP V/VM system option modes.
Table 3-1 System Option Modes
Mode 20 (TrueCopy for z/OS; default: OFF; MCU/RCU: RCU): R-VOL read only function.
Mode 36 (TrueCopy for z/OS; default: OFF; MCU/RCU: MCU): Setting default function (CRIT=Y) option for SVP panel (TCz).
Mode 64 (TrueCopy for z/OS; default: OFF; MCU/RCU: MCU): Setting effective range of CGROUP.
Mode 80 (ShadowImage for z/OS; default: OFF; MCU/RCU: –): Suppression of ShadowImage (and Business Continuity Manager) Quick Restore function.
Mode 87 (ShadowImage; default: OFF; MCU/RCU: –): ShadowImage Quick Resync by CCI.
Mode 93 (ShadowImage for z/OS; default: OFF; MCU/RCU: –): Suppression of ShadowImage for z/OS (and Business Continuity Manager) Quick Split and Resync function.
Mode 104 (TrueCopy for z/OS; default: OFF; MCU/RCU: MCU): Changing default of CGROUP Freeze option.
Mode 114 (TrueCopy for z/OS; default: OFF; MCU/RCU: MCU/RCU): Turning TCz Links Around option.
Mode 122 (TrueCopy for z/OS Async; default: OFF; MCU/RCU: MCU): TrueCopy for z/OS Async throttling feature.
Mode 161 (Open; default: OFF; MCU/RCU: –): Suppression of high speed micro-program exchange for CHT.
Mode 190 (TrueCopy for z/OS; default: OFF; MCU/RCU: RCU): Allows you to update the VOLSER and VTOC of the R-VOL while the pair is suspended if both mode 20 and mode 190 are ON.
Mode 269 (Common; default: OFF; MCU/RCU: –): High-Speed Format for Virtual LVI/LUN (VLL) (available for all device emulation types).
(1) High-Speed Format support: When redefining all LDEVs in an array group using the VLL Volume Initialize or Make Volume operation, LDEV format, as the last process, will be performed in high speed.
(2) Make Volume feature enhancement: The Make Volume operation (recreating new CVs after deleting all volumes in a VDEV) is now supported for all device emulation types.
Mode 269 = ON: The high-speed format is available when performing VLL operations on Storage Navigator (or LDEV format (SVP Maintenance) for all LDEVs in an array group).
Mode 269 = OFF: Only the low-speed format is available when performing VLL operations on Storage Navigator (and LDEV format operations on SVP Maintenance).
Note: If mode 503 is ON, the format processing is prevented, so the High-Speed Format function for VLL operations (mode 269 ON) is not available. For details on the relationship between modes 503 and 269, see Table 3-2 and Table 3-3 below.
Note: Mode 269 is effective only when using the SVP to format the CVs.

Mode 278 (Open; default: OFF; MCU/RCU: –): Tru64 (Host Mode 07) and OpenVMS (Host Mode 05).

Mode 292 (TrueCopy for z/OS; default: OFF; MCU/RCU: MCU/RCU): Issuing OLS when switching port.
Caution: Host offline: Required.
When the mainframe host (FICON) is connected with the CNT-made FC switch (e.g., FC9000) and is used along with TrueCopy for z/OS with Open Fibre connection, the occurrence of a Link Incident Report for the mainframe host from the FC switch will be deterred when switching the CHT port attribute (including automatic switching when executing CESTPATH and CDELPATH in case of mode 114 = ON).
Mode 292 = ON: When switching the port attribute, issue the OLS (100 ms) first, and then reset the chip.
Mode 292 = OFF: When switching the port attribute, reset the chip without issuing the OLS.

Mode 305 (Mainframe; default: OFF; MCU/RCU: –): Pre-labeling: a pre-label is added via the SVP as a tentative volume serial name during logical device formatting.

Mode 313 (Open; default: OFF; MCU/RCU: –): OPEN-V geometry. For OPEN-V, when Host Mode Option 16 is ON or System Option Mode 313 is ON, the same geometry shared by the USP V and USP/NSC can be returned to the host. When changing mode 313 or Host Mode Option 16, the connecting server should be powered off.
Caution: Host offline: Required.

Mode 316 (Open; default: OFF; MCU/RCU: –): Auto Negotiation in fixed speed. If signal synchronizing has been unmatched for 2.6 seconds during Auto Negotiation, the fixed speed can be set as follows:
Mode 316 = ON: 1 Gbps. Mode 316 = OFF: 2 Gbps.
The mode should be set when a fixed speed of Auto Negotiation is needed, though the transfer speed may slow down. The mode is available for the CHT PCB.
Mode 346 (Open; default: OFF; MCU/RCU: –): If an HBA containing an HP WWN is connected, the LUN Security function is operational by setting system option mode 346.
Mode 346 = ON: LUN Security is operational without host mode setting restriction.
Mode 346 = OFF: LUN Security is operational only when host mode 03 is set to ON.
Note: For a new storage system, system option mode 346 ON can be set as the default setting.
Caution: For a storage system in use, you must shut down the server in order to set mode 346 ON in this HBA configuration:
1. Shut down the server.
2. Set mode 346 to ON.
3. Power on the server.
4. Check whether any problems exist.

Mode 448 (Universal Replicator, Universal Replicator for z/OS; default: OFF; MCU/RCU: –):
Mode 448 = ON (enabled): If the SVP detects a blocked path, the SVP assumes that an error occurred and immediately splits (suspends) the mirror.
Mode 448 = OFF (disabled): If the SVP detects a blocked path and the path does not recover within the specified period of time, the SVP assumes that an error occurred and splits (suspends) the mirror.
Note: The mode 448 setting takes effect only when mode 449 is set to OFF.

Mode 449 (Universal Replicator, Universal Replicator for z/OS; default: OFF; MCU/RCU: –): Detecting and monitoring path blockade between the MCU and RCU of UR/URz.
Mode 449 = ON: Detecting and monitoring of path blockade will NOT be performed.
Mode 449 = OFF: Detecting and monitoring of path blockade will be performed.

Mode 454 (Virtual Partition Manager; default: OFF; MCU/RCU: –): When making a destage schedule for CLPRs, either the average workload of all the CLPRs or the highest workload among the CLPRs can be used.
Mode 454 = ON: The average workload of all the CLPRs is used to make the destage schedule.
Mode 454 = OFF: The highest workload among the CLPRs is used to make the destage schedule.
Note: The priority of the destage processing for a specific CLPR in the overloaded status decreases and the overloaded status is not released, so TOV (MIH) may occur.

Mode 459 (ShadowImage for z/OS, ShadowImage; default: OFF; MCU/RCU: –): When the secondary volume of an SI/SIz pair is an external volume, the transition of the status from SP-PEND to SPLIT is as follows:
(1) Mode 459 = ON when creating an SI/SIz pair: The copy data is created in cache memory. When the write processing on the external storage completes and the data is fixed, the pair status will change to SPLIT.
(2) Mode 459 = OFF when creating an SI/SIz pair: Once the copy data has been created in cache memory, the pair status will change to SPLIT. The external storage data is not fixed (current specification).
Mode 460 (SM SVP; default: OFF; MCU/RCU: –): When turning off PS, the control information of the following software stored in shared memory will be backed up on the SVP. After that, when performing volatile PS ON, the control information will be restored into shared memory from the SVP:
TrueCopy, TrueCopy for z/OS, ShadowImage, ShadowImage for z/OS, Volume Migration, FlashCopy, Universal Replicator, Universal Replicator for z/OS, COW Snapshot.
Setting mode 460 to ON is required to enable the function.
Note: This support only applies to the case of volatile PS ON after PS OFF. As usual, power outage, offline micro-program exchange, DCI, and System Tuning are not supported.
Note: Since PS-OFF/ON takes up to 25 minutes, when using power monitoring devices (PCI, etc.), it is required to allow enough time for PS-OFF/ON.

Mode 464 (TrueCopy for z/OS; default: OFF; MCU/RCU: MCU): SIM report without inflow limit. For TrueCopy for z/OS, the SIM report for the volume without inflow limit is available when mode 464 is set to ON. SIM: RC=490x-yy (x=CU#, yy=LDEV#).

Mode 466 (UR/URz; default: OFF; MCU/RCU: –): For UR/URz operations it is strongly recommended that the path between the main and remote storage systems have a minimum data transfer speed of 100 Mbps. If the data transfer speed falls to 10 Mbps or lower, UR operations cannot be properly processed. As a result, many retries occur and UR pairs may be suspended. Mode 466 is provided to ensure proper system operation for data transfer speeds of at least 10 Mbps.
Mode 466 = ON: Data transfer speeds of 10 Mbps and higher are supported. The JNL read is performed with a 4-multiplexed read size of 256 KB.
Mode 466 = OFF: For conventional operations. Data transfer speeds of 100 Mbps and higher are supported. The JNL read is performed with a 32-multiplexed read size of 1 MB by default.
Note: The data transfer speed can be changed using the Change JNL Group options.

Mode 467 (SI/SIz, FlashCopy, COW Snapshot, Volume Migration, Universal Volume Manager; default: ON; MCU/RCU: –): For the following features, the current copy processing slows down when the percentage of “dirty” data is 60% or higher, and it stops when the percentage is 75% or higher. Mode 467 is provided to prevent the percentage from exceeding 60%, so that host performance is not affected:
ShadowImage, ShadowImage for z/OS, FlashCopy, Copy-on-Write Snapshot, Volume Migration, Universal Volume Manager.
Mode 467 = ON: Copy overload prevention. Copy processing stops when the percentage of “dirty” data reaches 60% or higher. When the percentage falls below 60%, copy processing restarts.
Mode 467 = OFF: Normal operation. The copy processing slows down if the dirty percentage is 60% or larger, and it stops if the dirty percentage is 75% or larger.
Caution: This mode must always be set to ON when using an external volume as the secondary volume of any of the above-mentioned replication products.
Note: It takes longer to finish the copy processing because it stops in order to prioritize host I/O performance.
Mode 471 (Common; default: OFF; MCU/RCU: –): SIMs that require action only by the user and not maintenance personnel are displayed on the Information screen of the SVP and Storage Navigator and are not reported to maintenance personnel. This mode is provided for sites where it is required to report all SIMs to maintenance personnel.
Mode 471 = ON: Report SIMs to maintenance personnel.
Mode 471 = OFF: Do not report SIMs to maintenance personnel.

Mode 474 (Universal Replicator for z/OS; default: OFF; MCU/RCU: MCU/RCU): Reduce UR for z/OS initial copy time for better performance by using the TC for z/OS initial copy operation.
Mode 474 = ON: For a suspended URz pair, a dedicated script can be used to create a TCz pair on the same P-VOL and S-VOL as the URz pair to shorten the initial copy time.
Mode 474 = OFF: For a suspended URz pair, a dedicated script cannot be used to create a TC for z/OS pair on the same P-VOL and S-VOL of the UR for z/OS pair to shorten the UR for z/OS initial copy time.

Mode 481 (OPEN; default: OFF; MCU/RCU: –): Display the detail of Identifier Type=1 of Inquiry Page 83 (for Windows Vista).
Mode 481 = ON: Do not display the detail of Identifier Type=1 of Inquiry Page 83.
Mode 481 = OFF: Display the detail of Identifier Type=1 of Inquiry Page 83.
Note: System option mode 312 must be OFF to use this mode.

Mode 484 (TrueCopy for z/OS; default: OFF; MCU/RCU: MCU/RCU): Display the information of PPRC path QUERY in FC interface format. Previously, the PPRC path QUERY information was only displayed in ESCON interface format even when the path was an FC link. When IBM host functions (e.g., PPRC, GDPS) are being used, mode 484 can be enabled to display the PPRC path QUERY information in FC interface format.
Mode 484 = ON: Display information of PPRC path QUERY in FC interface format.
Mode 484 = OFF: Display information of PPRC path QUERY in ESCON interface format.

Mode 491 (ShadowImage, ShadowImage for z/OS, FlashCopy V1; default: OFF; MCU/RCU: –): Improve the performance of ShadowImage, ShadowImage for z/OS, and FlashCopy version 1.
Mode 491 = ON: The option (Reserve 05) of SI/SIz is available. When this option is set to ON, copy operations (SI, SIz, FCv1) are increased from 64 processes to 128 processes for improved performance.
Mode 491 = OFF: The option (Reserve 05) of SI/SIz is not available. The copy operations (SI, SIz, FCv1) are performed with 64 processes.
Notes:
Mode 491 requires at least three BED features. If there are fewer than three BED features, mode 491 is not effective.
Enable mode 491 when the performance of ShadowImage, ShadowImage for z/OS, and/or FlashCopy V1 is considered to be important.
Do not enable mode 491 when host I/O performance is considered to be important.
When mode 491 is ON, set mode 467 to OFF. If mode 467 is ON, the performance may not improve.
Mode 493 (Mainframe; default: OFF; MCU/RCU: –): The CUIR function requires that the SA_ID reported to the host is unique. The SA_ID value cannot be changed during online operations. To change the SA_ID value from normal to unique, set mode 493 to ON and then perform a power cycle*. Setting mode 493 to ON without performing a power cycle does not enable the function.
*Power cycle includes PS-OFF/ON (volatile/non-volatile), start-up after breaker OFF/ON, or offline micro-program exchange.
Mode 493 = ON: When mode 493 is ON and a power cycle is performed, a unique SA_ID value for each port is reported to the host.
Mode 493 = OFF: When mode 493 is OFF and a power cycle is performed, and 2107 port emulation is not set, normal SA_ID values are reported to the host. When 2107 emulation is set, the SA_ID value of the mainframe PCB port remains unique even after setting mode 493 to OFF and then performing a power cycle.
Caution: Power cycle is required.

Mode 494 (Mainframe; default: OFF; MCU/RCU: –): Enables CUIR processing when replacing a FICON PCB.
Mode 494 = ON: CUIR processing is available when replacing a FICON PCB, but only when mode 493 is ON and a power cycle* is performed to enable the SA_ID unique mode.
*Power cycle includes PS-OFF/ON (volatile/non-volatile), start-up after breaker OFF/ON, or offline micro-program exchange.
Mode 494 = OFF: CUIR processing is not available.
Caution: Power cycle is required.

Mode 498 (OPEN; default: OFF; MCU/RCU: –): One path performance improvement for the OPEN random read.
Mode 498 = ON: One path performance improvement for the OPEN random read is available.
Mode 498 = OFF: One path performance improvement for the OPEN random read is not available.
Notes:
When mode 498 is ON, the maximum performance of the CHP is improved, which increases the number of I/Os to the back-end (DKP, HDD, external initiator MP, external storage). Because of this, back-end performance must be checked to make sure it is sufficient. If back-end performance is insufficient, a timeout may occur due to a bottleneck in the back-end.
When a bottleneck occurs in the back-end, the performance may be worse than when mode 498 is OFF.
Mode 503 (Common; default: OFF; MCU/RCU: –): For the Install CV, Make Volume, and Volume Initialize functions of Virtual LVI/LUN, after LDEVs are installed, LDEV format is suppressed, and a blocked LDEV is created. Also, VLL operations and the UVM Add LU and Delete LU operations are available even when an LDEV is blocked or being formatted.
Mode 503 = ON: VLL operations with LDEV format suppressed are available, and VLL operations can be performed when an LDEV is blocked.
Mode 503 = OFF: After LDEV installation using VLL, LDEV format is performed, and VLL operations cannot be performed when an LDEV is blocked.
Notes:
When mode 503 is ON, LDEV format processing is prevented, so the High-Speed Format function for VLL operations (mode 269) is not available. For details about the relationship between mode 503 and mode 269, see Table 3-2 and Table 3-3 below.
When a PDEV is blocked or a correction copy is in progress, VLL operations cannot be performed.

Mode 505 (VPM; default: OFF; MCU/RCU: –): Speed up changing CLPR cache assignment, and reduce the processing time to a maximum of one minute per 1 GB.
Mode 505 = ON: Speed up changing CLPR cache assignment (maximum of 1 minute per 1 GB).
Mode 505 = OFF: Speed of changing CLPR cache assignment is normal (maximum of 5 minutes per 1 GB).

Mode 530 (Universal Replicator for z/OS; default: OFF; MCU/RCU: RCU): When a UR for z/OS pair is in the duplex state, this option switches the display of Consistency Time (C/T) between the values at JNL restore completion and at JNL copy completion.
Mode 530 = ON: C/T displays the value of when JNL copy is completed.
Mode 530 = OFF: C/T displays the value of when JNL restore is completed.
Note: At the time of Purge suspend or RCU failure suspend, the C/T of UR for z/OS displayed by Business Continuity Manager or Storage Navigator may show an earlier time than the time shown when the pair was in the duplex state.

Mode 531 (OPEN and mainframe; default: OFF; MCU/RCU: MCU/RCU): When PIN data is generated, the SIM currently stored on the SVP is reported to the host.
Mode 531 = ON: The SIM for PIN data generation is stored on the SVP and reported to the host.
Mode 531 = OFF: The SIM for PIN data generation is stored on the SVP only and is not reported to the host, the same as the current specification.

Mode 545 (Mainframe; default: OFF; MCU/RCU: MCU): When creating the record #0 field, this option is used to allow the record #0 format with the WRFTK (x95) command in the case where the CCHH of the Count part transferred from the host differs from the CCHH of the currently accessed track address.
Mode 545 = ON: The record #0 format is allowed.
Mode 545 = OFF: The record #0 format is not allowed.
Note: Use this mode when CU type 2107 is used, or when INVALID TRACK FORMAT ERROR occurs on VM MINIDISK.
Mode 676 (Audit Log; default: OFF; MCU/RCU: –): Store an audit log on the system disk as specified by the user.
Mode 676 = ON: Store an audit log onto the system disk.
Mode 676 = OFF: Do not store an audit log onto the system disk.

Mode 677 (OPEN and mainframe; default: OFF; MCU/RCU: MCU/RCU): This option is used to skip SM backup in order to save battery consumption after a planned PS Off.
Mode 677 = ON: Mode 677 is available only when mode 460 is ON. When a planned PS Off ends normally and the SM information is successfully saved on the SVP, the following operations are performed according to the model:
USP V, USP VM models: Battery is not used for SM backup.
NSC model: BASE PCB is powered off.
Mode 677 = OFF: Although a planned PS Off ends normally, battery is used for SM backup.
Note: This option is recommended when a PS Off that exceeds the backup time is performed, or where the NSC model is used.

Mode 685 (OPEN and mainframe; default: OFF; MCU/RCU: MCU/RCU): This option treats the blockade of a cache module group as the blockade of a cache PCB to prevent performance degradation.
Mode 685 = ON: Cache module group blockade is treated as cache PCB blockade.
Mode 685 = OFF: Cache module group blockade is treated as cache PCB blockade when the blocked part reaches 75% of the cache capacity.
Notes:
This mode must not be set to ON in a device containing only two cache PCBs, because performance greatly degrades on a device where only two cache PCBs are mounted, since only one side of the cluster can be used due to the cache PCB blockade.
Since the failure of a cache module group is treated as the failure of a cache PCB, the unavailable cache capacity is the capacity of the entire PCB, instead of that of the failed module group.

Mode 689 (TrueCopy and TrueCopy for z/OS; default: OFF; MCU/RCU: MCU/RCU): This option is used to prevent the initial copy operation when the Write Pending rate on the RCU exceeds 60%.
Mode 689 = ON: The initial copy operation is prevented when the Write Pending rate on the RCU exceeds 60%.
Mode 689 = OFF: The initial copy operation is not prevented when the Write Pending rate on the RCU exceeds 60% (the same as before).
Notes:
This mode can be set online.
The micro-programs on both the MCU and RCU must support this mode.
This mode should be set per the customer's request.
If the Write Pending status stays at 60% or more on the RCU for a long time, it takes extra time for the initial copy to be completed by making up for the prevented copy operation.
Mode 690 (Universal Replicator and Universal Replicator for z/OS; default: OFF; MCU/RCU: RCU): This option is used to prevent Read JNL or JNL Restore when the Write Pending rate on the RCU exceeds 60%, as follows:
When the CLPR of the JNL volume exceeds 60%, Read JNL is prevented.
When the CLPR of the data (secondary) volume exceeds 60%, JNL Restore is prevented.
Mode 690 = ON: Read JNL or JNL Restore is prevented when the Write Pending rate on the RCU exceeds 60%.
Mode 690 = OFF: Read JNL or JNL Restore is not prevented when the Write Pending rate on the RCU exceeds 60% (the same as before).
Notes:
This mode can be set online.
This mode should be set per the customer's request.
If the Write Pending status stays at 60% or more on the RCU for a long time, it takes extra time for the initial copy to be completed by making up for the prevented copy operation.
If the Write Pending status stays at 60% or more on the RCU for a long time, the pair status may become Suspend due to the JNL volume being full.

Mode 697 (TrueCopy Async, TrueCopy for z/OS Async, ShadowImage, ShadowImage for z/OS; default: OFF; MCU/RCU: MCU/RCU): This option prevents SI Split command execution when the coordinated TCA pair status is Suspend and its Consistency state is not guaranteed.
Mode 697 = ON: SI Split is not executed when the coordinated TCA pair status is Suspend and its Consistency state is not guaranteed.
Mode 697 = OFF: SI Split is executed regardless of the pair status or Consistency state of the coordinated TCA.
Note: This option should be applied only to prevent SI Split when the following conditions 1 and 2, or 1 and 3, are met:
1. TCA S-VOL and SI P-VOL coexist (for either mainframe or open).
2. The TCA that is coordinated with SI has not been in Suspend.
3. The TCA that is coordinated with SI is in Suspend, and its Consistency is not in the latest state.

Mode 701 (Universal Volume Manager; default: OFF; MCU/RCU: –): This option is used to issue the Read command at the LU discovery operation using UVM.
Mode 701 = ON: The Read command is issued at the LU discovery operation.
Mode 701 = OFF: The Read command is not issued at the LU discovery operation.
Notes:
When the Open LDEV Guard attribute (VMA) is defined on an external device, set the system option to ON.
When this option is set to ON, it takes a longer time to complete the LU discovery. The amount of time depends on the external storage.
With this system option OFF, if searching for external devices with VMA set, the VMA information cannot be read.
Mode 704 (OPEN and mainframe; default: OFF; MCU/RCU: –): To reduce the chance of MIH, this option can reduce the priority of SI, VM, CoW Snapshot, FlashCopy, or Resync copy internal I/O requests so that host I/O has a higher priority. This mode creates new work queues where these jobs can be assigned a lower priority.
Mode 704 = ON: Requested copy processing is registered into a newly created queue so that the processing is scheduled with lower priority than host I/O.
Mode 704 = OFF: Requested copy processing is not registered into a newly created queue. Only the existing queue is used.
Note: If the PDEV is highly loaded, the priority of Read/Write processing made by SI, VM, CoW Snapshot, FlashCopy, or Resync may become lower. As a consequence the copy speed may be slower.
Table 3-2 Modes 503 and 269: Storage Navigator Operations
Mode 503 = ON:
Virtual LVI/LUN (CVS), all LDEVs in a PG: mode 269 ON: No format; mode 269 OFF: No format
Virtual LVI/LUN (CVS), some LDEVs in a PG: mode 269 ON: No format; mode 269 OFF: No format
Format, PG is specified: mode 269 ON: No operation; mode 269 OFF: No operation
Format, all LDEVs in a PG: mode 269 ON: Low speed; mode 269 OFF: Low speed
Format, some LDEVs in a PG: mode 269 ON: Low speed; mode 269 OFF: Low speed
Mode 503 = OFF:
Virtual LVI/LUN (CVS), all LDEVs in a PG: mode 269 ON: High speed; mode 269 OFF: Low speed
Virtual LVI/LUN (CVS), some LDEVs in a PG: mode 269 ON: Low speed; mode 269 OFF: Low speed
Format, PG is specified: mode 269 ON: No operation; mode 269 OFF: No operation
Format, all LDEVs in a PG: mode 269 ON: Low speed; mode 269 OFF: Low speed
Format, some LDEVs in a PG: mode 269 ON: Low speed; mode 269 OFF: Low speed
Table 3-3 Modes 503 and 269: SVP Operations
Mode 503 = ON:
PDEV Addition: mode 269 ON: High speed; mode 269 OFF: High speed
Virtual LVI/LUN (CVS), all LDEVs in a PG: mode 269 ON: No format; mode 269 OFF: No format
Virtual LVI/LUN (CVS), some LDEVs in a PG: mode 269 ON: No format; mode 269 OFF: No format
Format, PG is specified: mode 269 ON: High speed; mode 269 OFF: High speed
Format, all LDEVs in a PG: mode 269 ON: High speed; mode 269 OFF: Low speed
Format, some LDEVs in a PG: mode 269 ON: Low speed; mode 269 OFF: Low speed
Mode 503 = OFF:
PDEV Addition: mode 269 ON: High speed; mode 269 OFF: High speed
Virtual LVI/LUN (CVS), all LDEVs in a PG: mode 269 ON: High speed; mode 269 OFF: Low speed
Virtual LVI/LUN (CVS), some LDEVs in a PG: mode 269 ON: Low speed; mode 269 OFF: Low speed
Format, PG is specified: mode 269 ON: High speed; mode 269 OFF: High speed
Format, all LDEVs in a PG: mode 269 ON: High speed; mode 269 OFF: Low speed
Format, some LDEVs in a PG: mode 269 ON: Low speed; mode 269 OFF: Low speed
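Read as a decision table, Table 3-2 keys the Storage Navigator behavior on the mode 503 setting, the operation, and its target; mode 269 only changes the outcome in one case (VLL on all LDEVs in a parity group with mode 503 OFF). The Python sketch below encodes Table 3-2 as a lookup for illustration only; the function and its keys are hypothetical and not part of any Hitachi software.

    # Illustrative encoding of Table 3-2 (Storage Navigator operations).
    # Keys and function are hypothetical; values are taken from the table.

    def sn_format_behavior(mode_503_on: bool, mode_269_on: bool,
                           operation: str, target: str) -> str:
        """Return the formatting behavior given in Table 3-2."""
        if operation == "Format" and target == "PG is specified":
            return "No operation"
        if operation == "Virtual LVI/LUN (CVS)":
            if mode_503_on:
                return "No format"
            if target == "All LDEVs in a PG" and mode_269_on:
                return "High speed"
            return "Low speed"
        return "Low speed"   # Format on all or some LDEVs in a PG

    print(sn_format_behavior(False, True, "Virtual LVI/LUN (CVS)", "All LDEVs in a PG"))
    # -> "High speed"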

Host Modes and Host Mode Options

The Universal Storage Platform V/VM supports connection of multiple server hosts of different platforms to each of its ports. When your system is configured, the hosts connected to each port are grouped by host group or by target. For example, if Solaris and Windows hosts are connected to a fibre port, a host group is created for the Solaris hosts, another host group is created for the Windows hosts, and the appropriate host mode and host mode options are assigned to each host group. The host modes and host mode options provide enhanced compatibility with supported platforms and environments.
The host groups, host modes, and host mode options are configured using the LUN Manager software on Storage Navigator. For further information on host groups, host modes, and host mode options, refer to the LUN Manager User’s Guide (MK-96RD615).

Mainframe Operations

Mainframe Compatibility and Functionality

In addition to full System-Managed Storage (SMS) compatibility, the Universal Storage Platform V and VM provide the following functionalities and support in the mainframe environment:
Sequential data striping
Cache fast write (CFW) and DASD fast write (DFW)
Enhanced dynamic cache management
Extended count key data (ECKD) commands
Multiple Allegiance
Concurrent Copy (CC)
Peer-to-Peer Remote Copy (PPRC)
FlashCopy
Parallel Access Volume (PAV)
Enhanced CCW
Priority I/O queuing
Red Hat Linux for IBM S/390® and zSeries®

Mainframe Operating System Support

Table 3-4 lists the mainframe operating systems currently supported by the Universal Storage Platform V and VM storage systems. Please contact your Hitachi Data Systems account team for the latest information on mainframe operating system support.
Table 3-4 Mainframe Operating System Support
Vendor: IBM. Operating systems: OS/390, MVS/ESA, MVS/XA, VM/ESA, VSE/ESA, z/OS, z/OS.e, z/VM, z/VSE, Red Hat Linux for IBM S/390 and zSeries. Document: Mainframe Host Attachment and Operations Guide, MK-96RD645.
Vendor: Fujitsu. Operating system: MSP. Document: Mainframe Host Attachment and Operations Guide, MK-96RD645.

Mainframe Configuration

After physical installation of the Universal Storage Platform V or VM storage system has been completed, the user configures the storage system for mainframe operations with assistance as needed from the Hitachi Data Systems representative.
Please refer to the following user documents for information and instructions on configuring your USP V/VM storage system for mainframe operations:
The Mainframe Host Attachment and Operations Guide (MK-96RD645) describes and provides instructions for configuring the USP V/VM for mainframe operations, including FICON and ESCON attachment, hardware definition, cache operations, and device operations. For detailed information on FICON connectivity, FICON/Open intermix configurations, and supported HBAs, switches, and directors for the USP V and VM, please contact your Hitachi Data Systems account team.
The Storage Navigator User's Guide (MK-96RD621) provides instructions for installing, configuring, and using Storage Navigator to perform resource and data management operations on the USP V/VM storage system(s).
The Virtual LVI/LUN and Volume Shredder User's Guide (MK-96RD630) provides instructions for converting single volumes (LVIs) into multiple smaller volumes to improve data access performance.

Open-Systems Operations

Open-Systems Compatibility and Functionality

The Universal Storage Platform V/VM supports and offers many features and functions for the open-systems environment, including:
Multi-initiator I/O configurations in which multiple host systems are attached to the same fibre-channel interface
Fibre-channel arbitrated-loop (FC-AL) and fabric topologies
Command tag queuing
Industry-standard failover and logical volume management software
SNMP remote storage system management
The Universal Storage Platform V/VM’s global cache enables any fibre-channel port to have access to any LU in the storage system. In the USP V/VM, each LU can be assigned to multiple fibre-channel ports to provide I/O path failover and/or load balancing (with the appropriate middleware support, such as HGLAM) without sacrificing cache coherency.
The user should plan for path failover (alternate pathing) to ensure the highest data availability. The LUs can be mapped for access from multiple ports and/or multiple target IDs. The number of connected hosts is limited only by the number of FC ports installed and the requirement for alternate pathing within each host. If possible, the primary path and alternate path(s) should be attached to different channel cards.

Open-Systems Host Platform Support

Table 3-5 lists the open-systems host platforms supported by the USP V/VM and the corresponding Configuration Guide for each host platform. The Configuration Guides provide information and instructions on configuring the USP V/VM disk devices for open-systems operations.
Table 3-5 Open-Systems Platforms and Configuration Guides
Platform            Configuration Guide
UNIX-Based Platforms:
IBM AIX*            MK-96RD636
HP-UX®              MK-96RD638
Sun Solaris         MK-96RD632
SGI IRIX            MK-96RD651
HP Tru64 UNIX       MK-96RD654
HP OpenVMS          MK-96RD653
PC Server Platforms:
Windows             MK-96RD639
Novell NetWare      MK-96RD652
Linux Platforms:
Red Hat Linux       MK-96RD640
SuSE Linux          MK-96RD650
VMware              MK-96RD649
*Note: The AIX ODM updates are included on the Product Documentation Library (PDL) CDs that come with the Hitachi USP V and VM.

Open-Systems Configuration

After physical installation of the Universal Storage Platform V/VM has been completed, the user configures the storage system for open-systems operations with assistance as needed from the Hitachi Data Systems representative.
Please refer to the following user documents for information and instructions on configuring your USP V/VM storage system for open-systems operations:
The Configuration Guides for Host Attachment (listed in Table 3-5 above) provide information and instructions on configuring the USP V/VM storage system and disk devices for attachment to the open-systems hosts.
Note: The queue depth and other parameters may need to be adjusted for the USP V/VM devices. Refer to the appropriate Configuration Guide for queue depth and other requirements.
The Storage Navigator User's Guide (MK-96RD621) provides instructions for installing, configuring, and using Storage Navigator to perform resource and data management operations on the USP V/VM storage system(s).
The Hitachi LUN Manager User's Guide (MK-96RD615) describes and provides instructions for configuring the USP V/VM for host operations, including FC port configuration, LUN mapping, host groups, host modes and host mode options, and LUN Security. Each fibre-channel port on the USP V/VM provides addressing capabilities for up to 2,048 LUNs across as many as 255 host groups, each with its own LUN 0, host mode, and host mode options. Multiple host groups are supported using LUN Security.
The Hitachi SNMP Agent User and Reference Guide (MK-96RD620) describes the SNMP API interface for the USP V/VM storage systems and provides instructions for configuring and performing SNMP operations.
The Virtual LVI/LUN and Volume Shredder User's Guide (MK-96RD630) provides instructions for configuring multiple custom volumes (LUs) under single LDEVs on the USP V/VM storage system.
The LUN Expansion User's Guide (MK-96RD616) provides instructions for configuring size-expanded LUs on the USP V/VM storage system by concatenating multiple LUs to form individual large LUs.

Battery Backup Operations

Figure 3-6 illustrates the two types of backup operations: backup mode and destage mode.
Backup Mode (USP V and VM)
When backup mode is chosen and a power failure occurs, storage system operations will continue normally for 200 milliseconds. If the power failure exceeds 200 ms, the storage system uses power from the batteries to back up the cache memory and shared memory for 36 hours or 48 hours, depending on the amount of cache memory.
Destage Mode (USP V only)
When destage mode is chosen and a power failure occurs, storage system operations will continue normally for 1 minute. If the power failure exceeds 1 minute, the storage system uses power from the batteries to destage the data from cache memory onto the disk drives and back up the cache memory and shared memory for 18 hours or 24 hours, depending on the amount of cache memory.
Destage mode requires the 56V battery option (DKC-F610I-ABX).
Note: Destage mode is not supported in the following cases:
When external storage is connected
When Cache Residency Manager BIND mode is applied
Battery Charge
If the battery is not fully charged when a power failure occurs, the backup processes are affected as follows:
Backup mode: The backup time becomes shorter (less than 36/48 hours).
Destage mode: The destage process may not be possible until the battery charging is complete, and the backup time may also become shorter (less than 18/24 hours).
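The nominal durations quoted in this section and in Figure 3-6 can be summarized as a lookup on the backup mode and the amount of cache memory. The Python sketch below is only an illustrative summary of the stated USP V figures, assuming a fully charged battery; it is not a sizing or planning tool.

    # Illustrative summary of the nominal USP V battery backup times stated in
    # this section (fully charged battery assumed). Not a Hitachi sizing tool.

    def usp_v_backup_hours(mode: str, cache_gb: int) -> int:
        """Nominal backup time in hours for the USP V by mode and cache size."""
        if mode == "backup":
            return 48 if cache_gb <= 128 else 36   # 36 hours for 132-256 GB cache
        if mode == "destage":
            return 24 if cache_gb <= 128 else 18   # destage mode needs the battery option
        raise ValueError("mode must be 'backup' or 'destage'")

    print(usp_v_backup_hours("backup", 128))    # -> 48
    print(usp_v_backup_hours("destage", 256))   # -> 18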
[Figure: Battery backup processes after an AC input power failure. Backup mode (USP V and VM): after a power failure is detected, the storage system continues operating for 200 ms; cache memory (CM) and shared memory (SM) are then backed up by battery for 48 hours (CM up to 128 GB) or 36 hours (CM 132 GB to 256 GB) on the USP V, and for 36 hours on the USP VM. Destage mode (USP V only; battery option required): the storage system continues operating for 1 minute; data is then destaged from cache to the disk drives (the figure shows 8 minutes for the destage process and 5 minutes for the power-off process), and CM and SM are backed up by battery for 24 hours (CM up to 128 GB) or 18 hours (CM 132 GB to 256 GB).]
Notes:
When power is recovered after a failure while backup power is being supplied by battery, the storage system operates depending on the status of the Auto-Power-On JP on the operator panel: ENABLE: the storage system is powered on automatically. DISABLE: the storage system is powered on by operating the Power ON/OFF switch or the PCI.
When power is recovered after a failure during the destage process, the destage and power-off processes are executed.
Figure 3-6 Battery Backup Processes for Power Failure
4

Troubleshooting

This chapter provides basic troubleshooting information for the Universal Storage Platform V/VM and instructions for calling technical support.
General Troubleshooting
Service Information Messages
Calling the Hitachi Data Systems Support Center

General Troubleshooting

The Hitachi Universal Storage Platform V and VM storage systems are not expected to fail in any way that would prevent access to user data. The READY LED on the control panel must be ON when the storage system is operating online.
Table 4-1 lists potential error conditions and provides recommended actions for resolving each condition. If you are unable to resolve an error condition, contact your Hitachi Data Systems representative, or call the Hitachi Data Systems Support Center for assistance.
Table 4-1 Troubleshooting

Error Condition: Error message displayed.
Recommended Action: Determine the type of error (refer to the SIM codes section). If possible, remove the cause of the error. If you cannot correct the error condition, call the Hitachi Data Systems Support Center for assistance.

Error Condition: General power failure.
Recommended Action: Call the Hitachi Data Systems Support Center for assistance. WARNING: Do not open the Universal Storage Platform V control frame or touch any of the controls.

Error Condition: Fence message is displayed on the console.
Recommended Action: Determine if there is a failed storage path. If so, toggle the RESTART switch and retry the operation. If the fence message is displayed again, call the Hitachi Data Systems Support Center for assistance.

Error Condition: READY LED does not go on, or there is no power supplied.
Recommended Action: Call the Hitachi Data Systems Support Center for assistance. WARNING: Do not open the Universal Storage Platform V control frame or touch any of the controls.

Error Condition: Emergency (fire, earthquake, flood, etc.)
Recommended Action: Pull the emergency power-off (EPO) switch. You must call the Hitachi Data Systems Support Center to have the EPO switch reset.

Error Condition: ALARM LED is on.
Recommended Action: If there is a temperature problem in the area, power down the storage system, lower the room temperature to the specified operating range, and power on the storage system. Call the Hitachi Data Systems Support Center for assistance with power off/on operations. If the area temperature is not the cause of the alarm, call the Hitachi Data Systems Support Center for assistance.

Service Information Messages

The Universal Storage Platform V and VM generate service information messages (SIMs) to identify normal operations (for example, TrueCopy pair status change) as well as service requirements and errors or failures. For assistance with SIMs, please call the Hitachi Data Systems Support Center.
SIMs can be generated by the front-end and back-end directors and by the SVP. All SIMs generated by the USP V/VM are stored on the SVP for use by Hitachi Data Systems personnel, logged in the SYS1.LOGREC dataset of the mainframe host system, displayed by the Storage Navigator software, and reported over SNMP to the open-system host. The SIM display on Storage Navigator enables users to remotely view the SIMs reported by the attached storage systems. Each time a SIM is generated, the amber Message LED on the control panel turns on. The Hi-Track remote maintenance tool also reports all SIMs to the Hitachi Data Systems Support Center.
SIMs are classified according to severity: service, moderate, serious, or acute. The service and moderate SIMs (lowest severity) do not require immediate attention and are addressed during routine maintenance. The serious and acute SIMs (highest severity) are reported to the mainframe host(s) once every eight hours.
Note: If a serious or acute-level SIM is reported, call the Hitachi Data Systems Support Center immediately to ensure that the problem is being addressed.
Figure 4-1 illustrates a typical 32-byte SIM from the USP V/VM. SIMs are displayed by reference code (RC) and severity. The six-digit RC, which is composed of bytes 22, 23, and 13, identifies the possible error and determines the severity. The SIM type, located in byte 28, indicates which component experienced the error.
[Figure 4-1 shows a 32-byte SIM with its SSB byte positions (0 through 31) labeled. In the example shown, SSB bytes 22 and 23 together with byte 13 form the reference code RC = 307080. Byte 28 carries the SIM type: F1 = DKC SIM, F2 = CACHE SIM, FE = DEVICE SIM, FF = MEDIA SIM.]
Figure 4-1 Typical SIM Showing Reference Code and SIM Type
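As an illustration of how the reference code and SIM type are derived from the byte positions described above, the following Python sketch decodes a 32-byte SIM. It is not a Hitachi utility; the function name and the choice to accept a sequence of byte values are assumptions for this example. It applies only the layout stated in the text: bytes 22, 23, and 13 form the six-digit RC, and byte 28 holds the SIM type.

# Illustrative decoder for the 32-byte SIM layout described above.
# Not a Hitachi tool; it applies the byte positions given in the text.
SIM_TYPES = {0xF1: "DKC SIM", 0xF2: "CACHE SIM", 0xFE: "DEVICE SIM", 0xFF: "MEDIA SIM"}

def decode_sim(sim_bytes):
    """sim_bytes: a sequence of 32 integers (0-255), one per SIM byte."""
    if len(sim_bytes) != 32:
        raise ValueError("a SIM is 32 bytes long")
    # The six-digit reference code is composed of bytes 22, 23, and 13.
    rc = "{:02X}{:02X}{:02X}".format(sim_bytes[22], sim_bytes[23], sim_bytes[13])
    sim_type = SIM_TYPES.get(sim_bytes[28], "unknown")
    return rc, sim_type

# Hypothetical example: bytes 22/23/13 = 0x30, 0x70, 0x80 and byte 28 = 0xF1
# yield RC = "307080" and SIM type "DKC SIM", matching Figure 4-1.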

Calling the Hitachi Data Systems Support Center

If you need to call the Hitachi Data Systems Support Center, make sure to provide as much information about the problem as possible, including:
The circumstances surrounding the error or failure
The exact content of any error messages displayed on the host system(s)
The error code(s) displayed on the Storage Navigator
The service information messages (SIMs) displayed on the Storage
Navigator and the reference codes and severity levels of the recent SIMs
The Hitachi Data Systems customer support staff is available 24 hours/day, seven days a week. If you need technical support, please call:
United States: (800) 446-0744
Outside the United States: (858) 547-4526
A

Units and Unit Conversions

Table A-1 provides conversions for standard (U.S.) and metric units of measure associated with the Universal Storage Platform V/VM storage systems.
Table A-1 Conversions for Standard (U.S.) and Metric Units of Measure
From | Multiply By | To Get
British thermal units (BTU) | 0.251996 | Kilocalories (kcal)
British thermal units (BTU) | 0.000293018 | Kilowatts (kW)
Inches (in) | 2.54000508 | Centimeters (cm)
Feet (ft) | 0.3048006096 | Meters (m)
Square feet (ft²) | 0.09290341 | Square meters (m²)
Cubic feet per minute (ft³/min) | 0.028317016 | Cubic meters per minute (m³/min)
Pound (lb) | 0.4535924277 | Kilogram (kg)
Kilocalories (kcal) | 3.96832 | British thermal units (BTU)
Kilocalories (kcal) | 1.16279 × 10⁻³ | Kilowatts (kW)
Kilowatts (kW) | 3412.08 | British thermal units (BTU)
Kilowatts (kW) | 859.828 | Kilocalories (kcal)
Millimeters (mm) | 0.03937 | Inches (in)
Centimeters (cm) | 0.3937 | Inches (in)
Meters (m) | 39.369996 | Inches (in)
Meters (m) | 3.280833 | Feet (ft)
Square meters (m²) | 10.76387 | Square feet (ft²)
Cubic meters per minute (m³/min) | 35.314445 | Cubic feet per minute (ft³/min)
Kilograms (kg) | 2.2046 | Pounds (lb)
Ton (refrigerated) | 12,000 | BTUs per hour (BTU/hr)
Degrees Fahrenheit (°F) | First subtract 32, then multiply: °C = (°F − 32) × 0.555556 | Degrees Celsius (°C)
Degrees Celsius (°C) | First multiply, then add 32: °F = (°C × 1.8) + 32 | Degrees Fahrenheit (°F)
Degrees Fahrenheit per hour (°F/hour) | 0.555555 | Degrees Celsius per hour (°C/hour)
Degrees Celsius per hour (°C/hour) | 1.8 | Degrees Fahrenheit per hour (°F/hour)
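The factors in Table A-1 can be applied directly. The following Python sketch is illustrative only (the function names are assumptions for this example, not part of any Hitachi tool); it uses a few factors from the table together with the temperature formulas shown above.

# Illustrative use of a few conversion factors from Table A-1.
# Function names are hypothetical; the constants come from the table above.
def btu_to_kcal(btu):
    return btu * 0.251996

def kw_to_btu(kw):
    return kw * 3412.08

def lb_to_kg(lb):
    return lb * 0.4535924277

def fahrenheit_to_celsius(f):
    # First subtract 32, then multiply (see Table A-1).
    return (f - 32) * 0.555556

def celsius_to_fahrenheit(c):
    # First multiply, then add 32 (see Table A-1).
    return (c * 1.8) + 32

# Example: 77 °F is approximately 25 °C, and 1 kW is about 3,412 BTU.
print(round(fahrenheit_to_celsius(77), 1))   # -> 25.0
print(kw_to_btu(1))                          # -> 3412.08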

Acronyms and Abbreviations

A ampere
ACP array control processor (another name for back-end director)
ASTM American Society for Testing and Materials
ATA Advanced Technology Attachment standard
AVE average
BC business continuity
BED back-end director
BS basic (power) supply
BSA bus adapter
BTU British thermal unit
°C degrees Celsius
ca cache
CC Concurrent Copy
CCI Command Control Interface
CD compact disk
CEC Canadian Electroacoustic Community
CFW cache fast write
CH channel
CHA channel adapter
CHIP client-host interface processor (another name for front-end director)
CHL channel
CHP channel processor (MPs on the FED features) or channel path
CHPID channel path identifier
CIFS common internet file system
CKD count key data
CL cluster
CLI command line interface
CLPR cache logical partition
CMA cache memory adapter
CPU central processing unit
CSA Canadian Standards Association
CSW cache switch, cache switch card
CU control unit
CV custom volume
CVS Custom Volume Size (another name for Virtual LVI/LUN)
DASD direct access storage device
dB(A) decibel (A-weighted)
DFDSS Data Facility Dataset Services
DFSMS Data Facility System Managed Storage
DFW DASD fast write
DKA disk adapter
DKC disk controller (DKC610 = USP V, DKC615 = USP VM)
DKP disk processor (microprocessors on the BED features)
DKU disk unit
DLM data lifecycle management
DNS domain name system
dr drive
DRAM dynamic random access memory
DSF Device Support Facilities
DTDS+ Disaster Tolerant Storage System Plus
ECKD Extended Count Key Data
EOF end of field
EMI electromagnetic interference
EPO emergency power-off
EREP Error Reporting
ESA Enterprise Systems Architecture
ESCON Enterprise System Connection (IBM trademark for optical channels)
ESS Enterprise Storage Server
ExSA Extended Serial Adapter
FAL File Access Library (part of the Cross-OS File Exchange software)
FBA fixed-block architecture
FC fibre-channel
FC-AL fibre-channel arbitrated loop
FCC Federal Communications Commission
FCP fibre-channel protocol
FCU File Conversion Utility (part of the Cross-OS File Exchange software)
FDR Fast Dump/Restore
FED front-end director
FICON Fiber Connection
F/M format/message
FWD fast wide differential
FX Hitachi Cross-OS File Exchange
g acceleration of gravity (9.8 m/s²) (unit used for vibration and shock)
Gb gigabit
GB gigabyte (see Convention for Storage Capacity Values)
Gbps, Gb/s gigabit per second
GLM gigabyte link module
GLPR global logical partition
GUI graphical user interface
HACMP High Availability Cluster Multi-Processing
HBA host bus adapter
HCD hardware configuration definition
HDLM HiCommand Dynamic Link Manager
HDS Hitachi Data Systems
HDU hard disk unit
HGLAM HiCommand Global Link Availability Manager
Hi-Star Hierarchical Star Network
HSN Hierarchical Star Network
HWM high-water mark
Hz Hertz
ICKDSF A DSF command used to perform media maintenance
IDCAMS access method services (a component of Data Facility Product)
IML initial microprogram load
in. inch(es)
IO, I/O input/output (operation or device)
IOCP input/output configuration program
JCL job control language
KB kilobyte (see Convention for Storage Capacity Values)
kcal kilocalorie
kg kilogram
km kilometer
kVA kilovolt-ampere
kW kilowatt
LAN local area network
lb pound
LD logical device
LDEV logical device
LED light-emitting diode
LPAR logical partition
LCP link control processor, local control port
LRU least recently used
LU logical unit
LUN logical unit number, logical unit
LVI logical volume image
LVM logical volume manager, Logical Volume Manager
LW long wavelength
m meter
MB megabyte (see Convention for Storage Capacity Values)
MIH missing interrupt handler
mm millimeter
MP microprocessor
MPLF Multi-Path Locking Facility
MR magnetoresistive
ms, msec millisecond
MVS Multiple Virtual Storage (including MVS/ESA, MVS/XA)
NBU NetBackup (a VERITAS software product)
NEC National Electrical Code
NFS network file system
NIS network information service
NTP network time protocol
NVS nonvolatile storage
ODM Object Data Manager
OEM original equipment manufacturer
OFC open fibre control
ORM online read margin
OS operating system
PAV Parallel Access Volume
PB petabyte (see Convention for Storage Capacity Values)
PC personal computer system
PCI power control interface
P/DAS PPRC/dynamic address switching (IBM mainframe software function)
PDEV physical device
PDL Product Documentation Library
PG parity group
PPRC Peer-to-Peer Remote Copy (an IBM mainframe host software function)
PS power supply
RAB RAID Advisory Board
RAID redundant array of independent disks
RAM random-access memory
RC reference code
RISC reduced instruction-set computer
R/W read/write
S/390 IBM System/390 architecture
SAN storage-area network
SATA serial Advanced Technology Attachment standard
SCSI small computer system interface
SCP state-change pending
sec. second
seq. sequential
SFP small form-factor pluggable
SGI Silicon Graphics, Inc.
SI ShadowImage
SIM service information message
SIz ShadowImage for z/OS
SLPR storage logical partition
SMA shared memory adapter
SMS System Managed Storage
SNMP simple network management protocol
SSID storage system identification
SVP service processor
SW switch, short wavelength
TB terabyte (see Convention for Storage Capacity Values)
T&B Thomas & Betts
TC TrueCopy
TCz TrueCopy for z/OS
TID target ID
TPF Transaction Processing Facility
TSO Time Sharing Option (an IBM mainframe operating system option)
UCB unit control block
UIM unit information module
UL Underwriters’ Laboratories
μm micron, micrometer
USP V Hitachi Universal Storage Platform V
USP VM Hitachi Universal Storage Platform VM
VA volt-ampere
VAC volts AC
VCS VERITAS Cluster Server
VDE Verband Deutscher Elektrotechniker
VDEV virtual device
VM Virtual Machine (an IBM mainframe system control program)
VOLID volume ID
volser volume serial number
VSE Virtual Storage Extension (an IBM mainframe operating system)
VTOC volume table of contents
W watt
WLM Workload Manager (an IBM mainframe host software function)
XA System/370 Extended Architecture
XDF Extended Distance Feature (for ExSA channels)
XRC Extended Remote Copy (an IBM mainframe host software function)

Index

A
alternate pathing
  scheme, 2-4
arbitrated-loop topology, 2-9
array
  domain, 2-10
  group, 3-2

B
back-end director, 2-9

C
cache
  global, 3-24
  memory battery backup, 2-6
  switch cards, 2-3
canister mount, 3-2
commands
  cache fast write, 2-6
  write inhibit, 2-14
conversion
  fba to ckd, 2-7
copy functions
  dasd on raid, 1-12

D
data
  sequential striping, 3-4
  striping, 3-2
data transmission rates
  ficon, 2-7
DKA pair
  buffers, 2-9
device emulation, 2-17
dynamic path switching, 2-7

E
error conditions, 4-2
extended count key data, 3-22

F
fixed-block-architecture, 2-11

H
hdd storage capacities, 2-11
Hierarchical Star Network, 2-3

I
intermix
  virtual lvi/lun, 2-17

J
Java applet, 3-9

L
LDEV mapping
  scsi address, 3-3
LU types, 3-8

M
mainframe
  lvi supported types, 3-7
multiple LUs
  concatenate, 3-8

P
parity
  data, 3-2
  groups, 3-2
paths
  data and command, 2-3
power supply, 2-12

R
RAID-5 array group, 3-3
RAID-6 array group, 3-4
Red Hat Linux, 3-22

S
scrubbing
  background, 2-11
service processor, 2-11
shared memory
  nonvolatile, 2-6
single points of failure, 2-4
spare disk drives
  max qty, 2-11
storage
  clusters, 2-4

U
USP advanced features and functions, 1-8
USP hardware architecture, 2-5

W
write penalty, 3-4
Hitachi Data Systems
Corporate Headquarters
750 Central Expressway
Santa Clara, California 95050-2627
U.S.A.
Phone: 1 408 970 1000
www.hds.com
info@hds.com

Asia Pacific and Americas
750 Central Expressway
Santa Clara, California 95050-2627
U.S.A.
Phone: 1 408 970 1000
info@hds.com

Europe Headquarters
Sefton Park
Stoke Poges
Buckinghamshire SL2 4HD
United Kingdom
Phone: + 44 (0)1753 618000
info.eu@hds.com
MK-96RD635-04