No part of this publication may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying and recording, or stored in a database or retrieval system for any
purpose without the express written permission of Hitachi, Ltd. and Hitachi Data Systems Corporation
(hereinafter referred to as “Hitachi”).
Hitachi, Ltd. and Hitachi Data Systems reserve the right to make changes to this document at any time
without notice and assume no responsibility for its use. Hitachi, Ltd. and Hitachi Data Systems products and
services can only be ordered under the terms and conditions of Hitachi Data Systems' applicable agreements.
All of the features described in this document may not be currently available. Refer to the most recent
product announcement or contact your local Hitachi Data Systems sales office for information on feature and
product availability.
Notice: Hitachi Data Systems products and services can be ordered only under the terms and conditions of
Hitachi Data Systems’ applicable agreements. The use of Hitachi Data Systems products is governed by the
terms of your agreements with Hitachi Data Systems.
Hitachi is a registered trademark of Hitachi, Ltd. in the United States and other countries. Hitachi Data
Systems is a registered trademark and service mark of Hitachi in the United States and other countries.
All other trademarks, service marks, and company names are properties of their respective owners.
Export authorization is required for the AMS 2000 Data At Rest Encryption.
• Import/use regulations may restrict export of the AMS 2000 SED to certain countries.
• China – The AMS 2000 is eligible for import, but the License Key and SED may not be sent to China.
• France – Import pending completion of registration formalities.
• Hong Kong – Import pending completion of registration formalities.
• Israel – Import pending completion of registration formalities.
• Russia – Import pending completion of notification formalities.
• Distribution Centers – IDC, EDC, and ADC cleared for exports.
Preface
This document provides instructions for planning, setting up, and
operating TrueCopy Extended Distance.
This preface includes the following information:
Intended audience
Product version
Release notes and readme
Changes in this release
Document organization
Document conventions
Convention for storage capacity values
Related documents
Getting help
Intended audience
This document is intended for system administrators, Hitachi Data Systems
representatives, and Authorized Service Providers who install, configure,
and operate Hitachi Adaptable Modular System (AMS) 2000 family storage
systems.
Product version
This document applies to Hitachi AMS 2000 Family firmware version
08D1/B or later.
Release notes and readme
Read the release notes and readme file before installing and using this
product. They may contain requirements or restrictions that are not fully
described in this document and/or updates or corrections to this document.
Product Abbreviations

Abbreviation         Product Full Name
ShadowImage          ShadowImage In-system Replication
Snapshot             Copy-on-Write Snapshot
TrueCopy Remote      TrueCopy Remote Replication
TCE                  TrueCopy Extended Distance
TCMD                 TrueCopy Modular Distributed
Windows Server       Windows Server 2003, Windows Server 2008, and Windows Server 2012
Changes in this release
• In Table 5-2 (page 5-3), added the parameter Remote Copy over iSCSI in the WAN environment.
Document organization
Thumbnail descriptions of the chapters are provided below. Click a chapter
title to go to that chapter. The first page of every chapter or appendix
contains links to its contents.

Chapter 1, Overview
Provides descriptions of TrueCopy Extended Distance components and how they work together.

Chapter 2, Plan and design — sizing data pools and bandwidth
Provides instructions for measuring write-workload, calculating data pool size and bandwidth.

Chapter 3, Plan and design — remote path
Provides supported iSCSI and Fibre Channel configurations, with information on WDM and dark fibre.

Chapter 4, Plan and design—arrays, volumes and operating systems
Discusses the arrays and volumes you can use for TCE.

Chapter 5, Requirements and specifications
Provides TCE system requirements and specifications.

Chapter 6, Installation and setup
Provides procedures for installing and setting up the TCE system and creating the initial copy.

Chapter 7, Pair operations
Provides information and procedures for TCE operations.

Chapter 8, Example scenarios and procedures
Provides backup, data moving, and disaster recovery scenarios and procedures.

Chapter 9, Monitoring and maintenance
Provides monitoring and maintenance information.

Chapter 10, Troubleshooting
Provides troubleshooting information.

Appendix A, Operations using CLI
Provides detailed Command Line Interface instructions for configuring and using TCE.

Appendix B, Operations using CCI
Provides detailed Command Control Interface instructions for configuring and using TCE.

Appendix C, Cascading with SnapShot
Provides supported configurations, operations, etc. for cascading TCE with SnapShot.

Appendix D, Installing TCE when Cache Partition Manager is in use
Provides required information when using Cache Partition Manager.

Appendix E, Wavelength Division Multiplexing (WDM) and dark fibre
Provides a discussion of WDM and dark fibre for channel extender.

Glossary
Provides definitions for terms and acronyms found in this document.

Index
Provides links and locations to specific information in this document.
Document conventions
This document uses the following symbols to draw attention to important
safety and operational information.
Tip – Tips provide helpful information, guidelines, or suggestions for performing tasks more effectively.
Note – Notes emphasize or supplement important points of the main text.
Caution – Cautions indicate that failure to take a specified action could result in damage to the software or hardware.

The following typographic conventions are used in this document.

Bold – Indicates text on a window, other than the window title, including menus, menu options, buttons, fields, and labels. Example: Click OK.
Italic – Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: copy source-file target-file. Angled brackets (< >) are also used to indicate variables.
screen/code – Indicates text that is displayed on screen or entered by the user. Example: # pairdisplay -g oradb
< > angled brackets – Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: # pairdisplay -g <group>. Italic font is also used to indicate variables.
[ ] square brackets – Indicates optional values. Example: [ a | b ] indicates that you can choose a, b, or nothing.
{ } braces – Indicates required or expected values. Example: { a | b } indicates that you must choose either a or b.
| vertical bar – Indicates that you have a choice between two or more options or arguments. Examples: [ a | b ] indicates that you can choose a, b, or nothing; { a | b } indicates that you must choose either a or b.
underline – Indicates the default value. Example: [ a | b ]
Convention for storage capacity values
Physical storage capacity values (e.g., disk drive capacity) are calculated
based on the following values:
Physical capacity unit – Value
1 KB – 1,000 bytes
1 MB – 1,000 KB or 1,000² bytes
1 GB – 1,000 MB or 1,000³ bytes
1 TB – 1,000 GB or 1,000⁴ bytes
1 PB – 1,000 TB or 1,000⁵ bytes
1 EB – 1,000 PB or 1,000⁶ bytes

Logical storage capacity values (e.g., logical device capacity) are calculated
based on the following values:

Logical capacity unit – Value
1 block – 512 bytes
1 KB – 1,024 (2¹⁰) bytes
1 MB – 1,024 KB or 1,024² bytes
1 GB – 1,024 MB or 1,024³ bytes
1 TB – 1,024 GB or 1,024⁴ bytes
1 PB – 1,024 TB or 1,024⁵ bytes
1 EB – 1,024 PB or 1,024⁶ bytes
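To make the distinction concrete, the following minimal Python sketch (illustrative only, not part of any Hitachi tooling) converts a raw byte count under each convention:

def physical_gb(byte_count):
    # Physical convention: 1 GB = 1,000^3 bytes
    return byte_count / 1000**3

def logical_gb(byte_count):
    # Logical convention: 1 GB = 1,024^3 bytes
    return byte_count / 1024**3

nominal = 500 * 1000**3        # a drive sold as "500 GB"
print(physical_gb(nominal))    # 500.0
print(logical_gb(nominal))     # ~465.66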
Related documents
The AMS 2000 Family user documentation is available on the Hitachi Data
Systems Portal: https://portal.hds.com. Please check this site for the most
current documentation, including important updates that may have been
made after the release of the product.
This documentation set consists of the following documents.
Release notes
•Adaptable Modular Storage System Release Notes
•Storage Navigator Modular 2 Release Notes
Please read the release notes before installing and/or using this product.
They may contain requirements and/or restrictions not fully described in
this document, along with updates and/or corrections to this document.
Installation and getting started
The following documents provide instructions for installing an AMS 2000
Family storage system. They include rack information, safety information,
site-preparation instructions, getting-started guides for experienced users,
and host connectivity information. A symbol identifies documents that
contain initial configuration information about Hitachi AMS 2000 Family
storage systems.
AMS2100/2300 Getting Started Guide, MK-98DF8152
Provides quick-start instructions for getting an AMS 2100 or AMS 2300
storage system up and running as quickly as possible.
AMS2500 Getting Started Guide, MK-97DF8032
Provides quick-start instructions for getting an AMS 2500 storage system up and running as quickly as possible.
AMS 2000 Family Site Preparation Guide, MK-98DF8149
Contains site planning and pre-installation information for AMS 2000
Family storage systems, expansion units, and high-density expansion
units. This document also covers safety precautions, rack information,
and product specifications.
AMS 2000 Family Fibre Channel Host Installation Guide,
MK-08DF8189
Describes how to prepare Hitachi AMS 2000 Family Fibre Channel
storage systems for use with host servers running supported operating
systems.
AMS 2000 Family iSCSI Host Installation Guide, MK-08DF8188
Describes how to prepare Hitachi AMS 2000 Family iSCSI storage
systems for use with host servers running supported operating systems.
Storage and replication features
The following documents describe how to use Storage Navigator Modular 2
(Navigator 2) to perform storage and replication activities.
Contains advanced information about launching and using Navigator 2
in various operating systems, IP addresses and port numbers, server
certificates and private keys, boot and restore options, outputting
configuration information to a file, and collecting diagnostic information.
Describes how to use Navigator 2 to configure and manage storage on
an AMS 2000 Family storage system.
AMS 2000 Family Dynamic Provisioning Configuration Guide,
MK-09DF8201
Describes how to use virtual storage capabilities to simplify storage
additions and administration.
Storage Navigator 2 Storage Features Reference Guide for AMS,
MK-97DF8148
Contains concepts, preparation, and specifications for Account
Authentication, Audit Logging, Cache Partition Manager, Cache
Residency Manager, Data Retention Utility, LUN Manager, Performance
Monitor, SNMP Agent, and Modular Volume Migration.
AMS 2000 Family Copy-on-write SnapShot User Guide, MK-97DF8124
Describes how to create point-in-time copies of data volumes in AMS
2100, AMS 2300, and AMS 2500 storage systems, without impacting
host service and performance levels. Snapshot copies are fully read/
write compatible with other hosts and can be used for rapid data
restores, application testing and development, data mining and
warehousing, and nondisruptive backup and maintenance procedures.
AMS 2000 Family ShadowImage In-system Replication User Guide,
MK-97DF8129
Describes how to perform high-speed nondisruptive local mirroring to
create a copy of mission-critical data in AMS 2100, AMS 2300, and AMS
2500 storage systems. ShadowImage keeps data RAID-protected and
fully recoverable, without affecting service or performance levels.
Replicated data volumes can be split from host applications and used for
system backups, application testing, and data mining applications while
business continues to operate at full capacity.
AMS 2000 Family TrueCopy Remote Replication User Guide,
MK-97DF8052
Describes how to create and maintain multiple duplicate copies of user
data across multiple AMS 2000 Family storage systems to enhance your
disaster recovery strategy.
AMS 2000 Family TrueCopy Extended Distance User Guide,
MK-97DF8054 — this document
Describes how to perform bi-directional remote data protection that
copies data over any distance without interrupting applications, and
provides failover and recovery capabilities.
AMS 2000 Data Retention Utility User’s Guide, MK-97DF8019
Describes how to lock disk volumes as read-only for a certain period of
time to ensure authorized-only access and facilitate immutable, tamperproof record retention for storage-compliant environments. After data is
written, it can be retrieved and read only by authorized applications or
users, and cannot be changed or deleted during the specified retention
period.
Storage Navigator Modular 2 online help
Provides topic and context-sensitive help information accessed through
the Navigator 2 software.
Hardware maintenance and operation
The following documents describe how to operate, maintain, and administer
an AMS 2000 Family storage system. They also provide a wide range of
technical information and specifications for the AMS 2000 Family storage
systems. A symbol identifies documents that contain initial configuration
information about Hitachi AMS 2000 Family storage systems.
AMS 2100/2300 Storage System Hardware Guide, MK-97DF8010
Provides detailed information about installing, configuring, and
maintaining an AMS 2100/2300 storage system.
AMS 2500 Storage System Hardware Guide, MK-97DF8007
Provides detailed information about installing, configuring, and
maintaining an AMS 2500 storage system.
AMS 2000 Family Storage System Reference Guide,
MK-97DF8008
Contains specifications and technical information about power cables,
system parameters, interfaces, logical blocks, RAID levels and
configurations, and regulatory information about AMS 2100, AMS 2300,
and AMS 2500 storage systems. This document also contains remote
adapter specifications and regulatory information.
AMS 2000 Family Storage System Service and Upgrade Guide,
MK-97DF8009
Provides information about servicing and upgrading AMS 2100, AMS
2300, and AMS 2500 storage systems.
AMS 2000 Family Power Savings User Guide, MK-97DF8045
Describes how to spin down volumes in selected RAID groups when they
are not being accessed by business applications to decrease energy
consumption and significantly reduce the cost of storing and delivering
information.
Command Control Interface (CCI)
The following documents describe how to install the Hitachi AMS 2000
Family Command Control Interface (CCI) and use it to perform TrueCopy
and ShadowImage operations.
AMS 2000 Family Command Control Interface (CCI) Installation
Guide, MK-97DF8122
Describes how to install CCI software on open-system hosts.
AMS 2000 Family Command Control Interface (CCI) Reference
Guide, MK-97DF8121
Contains reference, troubleshooting, and maintenance information
related to CCI operations on AMS 2100, AMS 2300, and AMS 2500
storage systems.
AMS 2000 Family Command Control Interface (CCI) User’s Guide,
MK-97DF8123
Describes how to use CCI to perform TrueCopy and ShadowImage
operations on AMS 2100, AMS 2300, and AMS 2500 storage systems.
Command Line Interface (CLI)
The following documents describe how to use Hitachi Storage Navigator
Modular 2 to perform management and replication activities from a
command line.
Describes how to interact with all Navigator 2 bundled and optional
software modules by typing commands at a command line.
Storage Navigator 2 Command Line Interface Replication Reference
Guide for AMS, MK-97DF8153
Describes how to interact with Navigator 2 to perform replication
activities by typing commands at a command line.
Dynamic Replicator documentation
The following documents describe how to install, configure, and use Hitachi
Dynamic Replicator to provide AMS Family storage systems with continuous
data protection, remote replication, and application failover in a single,
easy-to-deploy and manage platform.
Getting help

The Hitachi Data Systems customer support staff is available 24 hours a
day, seven days a week. If you need technical support, please log on to the
Hitachi Data Systems Portal for contact information: https://portal.hds.com

If you need to contact the Hitachi Data Systems support center, please
provide as much information about the problem as possible, including:
• The circumstances surrounding the error or failure.
• The exact content of any messages displayed on the host systems.
• The exact content of any messages displayed on Storage Navigator Modular 2.
• The Storage Navigator Modular 2 configuration information. This information is used by service personnel for troubleshooting purposes.

Comments

Please send us your comments on this document: doc.comments@hds.com.
Include the document title, number, and revision, and refer to specific
sections and paragraphs whenever possible.
Thank you! (All comments become the property of Hitachi Data Systems.)
1
Overview
This manual provides instructions for designing, planning,
implementing, using, monitoring, and troubleshooting TrueCopy
Extended Distance (TCE). This chapter consists of:
How TCE works
Typical environment
TCE interfaces
How TCE works
With TrueCopy Extended Distance (TCE), you create a copy of your data at
a remote location. After the initial copy is created, only changed data
transfers to the remote location.
You create a TCE copy when you:
• Select a volume on the production array that you want to replicate
• Create a volume on the remote array that will contain the copy
• Establish a Fibre Channel or iSCSI link between the local and remote arrays
• Make the initial copy across the link on the remote array.
During and after the initial copy, the primary volume on the local side
continues to be updated with data from the host application. When the host
writes data to the P-VOL, the local array immediately returns a response to
the host. This completes the I/O processing. The array performs the
subsequent processing independently from I/O processing.
Updates are periodically sent to the secondary volume on the remote side
at the end of the “update cycle”, a time period established by the user. The
cycle time is based on the recovery point objective (RPO), which is the
amount of data, measured in time (two hours' worth, four hours' worth),
that can be lost after a disaster before the operation is irreparably damaged.
If the RPO is two hours, the business must be able to recover all data up to
two hours before the disaster occurred.
When a disaster occurs, storage operations are transferred to the remote
site and the secondary volume becomes the production volume. All the
original data is available in the S-VOL, from the last completed update. The
update cycle is determined by your RPO and by measuring write-workload
during the TCE planning and design process.
For a detailed discussion of the disaster recovery process using TCE, please
refer to Process for disaster recovery on page 8-11.
Typical environment
A typical configuration consists of the following elements. Many, but not all,
require user setup.
•Two AMS arrays—one on the local side connected to a host, and one on
the remote side connected to the local array. Connections are made via
Fibre Channel or iSCSI.
•A primary volume on the local array that is to be copied to the
secondary volume on the remote side.
• A differential management LU on the local and remote arrays, which holds
TCE information when the array is powered down.
•Interface and command software, used to perform TCE operations.
Command software uses a command device (volume) to communicate
with the arrays.
Figure 1-1 shows a typical TCE environment.
Volume pairs
When the initial TCE copy is completed, the production and backup volumes
are said to be “Paired”. The two paired volumes are referred to as the
primary volume (P-VOL) and secondary volume (S-VOL). Each TCE pair
consists of one P-VOL and one S-VOL. When the pair relationship is
established, data flows from the P-VOL to the S-VOL.
While in the Paired status, new data is written to the P-VOL and then
periodically transferred to the S-VOL, according to the user-defined update
cycle.
When a pair is “split”, the data flow between the volumes stops. At this time,
all the differential data that has accumulated in the local array since the last
update is copied to the S-VOL. This ensures that the S-VOL's data matches
the P-VOL's and is consistent and usable.
During normal TCE operations, the P-VOL remains available for read/write
from the host. When the pair is split, the S-VOL also is available for read/
write operations from a host.
Figure 1-1: Typical TCE Environment
Data pools
Data from the host is continually updated to the P-VOL, as it occurs. The
data pool on the local side stores the changed data that accumulates before
the next update cycle. The local data pool is used to update the S-VOL.
Data that accumulates in the data pool is referred to as differential data
because it is the difference between the P-VOL and S-VOL.

The data in the S-VOL following an update is complete, consistent, and
usable data. When the next update is to begin, this consistent data is copied
to the remote data pool. This data pool is used to maintain previous
point-in-time copies of the S-VOL, which are used in the event of failback.
Guaranteed write order and the update cycle
S-VOL data must preserve the order in which the host updates the P-VOL.
When write order is guaranteed, the S-VOL has data consistency with the
P-VOL.
As explained in the previous section, data is copied from the P-VOL and local
data pool to the S-VOL following the update cycle. When the update is
complete, S-VOL data is identical to P-VOL data as of the end of that cycle.
Because the P-VOL continues to be updated while and after the S-VOL is
being updated, S-VOL data and P-VOL data are not identical at any given
moment.
However, the S-VOL and P-VOL can be made identical when the pair is split.
During this operation, all differential data in the local data pool is
transferred to the S-VOL, as well as all cached data in host memory. This
cached data is flushed to the P-VOL, then transferred to the S-VOL as part
of the split operation, thus ensuring that the two are identical.
If a failure occurs during an update cycle, the data in the update is
inconsistent. Write order in the S-VOL is nevertheless guaranteed — at the
point-in-time of the previous update cycle, which is stored in the remote
data pool.
Figure 1-2 shows how S-VOL data is maintained at one update cycle back
of P-VOL data.
Extended update cycles

If inflow to the P-VOL increases, all of the update data may not be sent
within the cycle time. This causes the cycle to extend beyond the
user-specified cycle time.
As a result, more update data in the P-VOL accumulates to be copied at the
next update. Also, the time difference between the P-VOL data and S-VOL
data increases, which degrades the recovery point value. In Figure 1-2, if a
failure occurs at the primary site immediately before time T3, for example,
data consistency in the S-VOL during takeover is P-VOL data at time T1.
When inflow decreases, updates again complete within the cycle time. Cycle
time should be determined according to a realistic assessment of write
workload, as discussed in Chapter 2, Plan and design — sizing data pools
and bandwidth.
Figure 1-2: Update Cycles and Differential Data
Consistency groups
Application data often spans more than one volume. With TCE, it is possible
to manage operations spanning multiple volumes as a single group. In a
consistency group (CTG), all primary logical volumes are treated as a single
entity.
Managing primary volumes as a consistency group allows TCE operations to
be performed on all volumes in the group concurrently. Write order in
secondary volumes is guaranteed across application logical volumes.
Figure 1-3 shows TCE operations with a consistency group.
Figure 1-3: TCE Operations with Consistency Groups
In this illustration, observe the following:
•The P-VOLs belong to the same consistency group. The host updates
the P-VOLs as required (1).
• The local array identifies the differential data in the P-VOLs atomically
when the cycle is started (2). The differential data for the group of P-VOLs
is determined at time T2.
• The local array transfers the differential data to the corresponding
S-VOLs (3). When all differential data is transferred, each S-VOL is
identical to its P-VOL at time T2 (4).
• If pairs are split or deleted, the local array stops the cycle update for
the consistency group. Differential data between P-VOLs and S-VOLs is
determined at that time. All differential data is sent to the S-VOLs, and
the split or delete operations on the pairs complete. S-VOLs maintain
data consistency across pairs in the consistency group. Pairs that use
different data pools can belong to the same consistency group.
Differential Management LUs (DMLU)
The DMLU is an exclusive volume used for storing TrueCopy information
when the local or remote array is powered down. The DMLU is hidden from
a host. User setup is required on the local and remote arrays.
TCE interfaces
TCE can be set up, used, and monitored using the following interfaces:
• The GUI (Hitachi Storage Navigator Modular 2 graphical user interface),
a browser-based interface from which TCE can be set up, operated, and
monitored. The GUI provides the simplest method for performing
operations, requiring no previous experience. Scripting is not available.
• CLI (Hitachi Storage Navigator Modular 2 Command Line Interface),
from which TCE can be set up and all basic pair operations can be
performed—create, split, resynchronize, restore, swap, and delete. The
GUI also provides these functions. CLI also has scripting capability.
• CCI (Hitachi Command Control Interface), which is used to display
volume information and perform all copying and pair-managing
operations. CCI provides full scripting capability, which can be used to
automate replication operations. CCI requires more experience than
the GUI or CLI. CCI is required for performing failover and failback
operations and, on Windows 2000 Server, mount/unmount operations.
For new users with no CLI or CCI experience, HDS recommends beginning
with the GUI. Users who are new to replication software but have CLI
experience managing arrays may want to continue using CLI, though the
GUI is an option. The same recommendation applies to CCI users.
2
Plan and design — sizing
data pools and bandwidth
This chapter provides instructions for measuring write-workload
and sizing data pools and bandwidth.
Plan and design workflow
Assessing business needs — RPO and the update cycle
Measuring write-workload
Calculating data pool size
Determining bandwidth
Plan and design workflow
You design your TCE system around the write-workload generated by your
host application. Data pools and bandwidth must be sized to accommodate
write-workload. This chapter helps you perform these tasks as follows:
•Assess business requirements regarding how much data your operation
must recover in the event of a disaster.
•Measure write-workload. This metric is used to ensure that data pool
size and bandwidth are sufficient to hold and pass all levels of I/O.
•Calculate data pool size. Instructions are included for matching data
pool capacity to the production environment.
•Calculate remote path bandwidth: This will make certain that you can
copy your data to the remote site within your update cycle.
Assessing business needs — RPO and the update cycle
In a TCE system, the S-VOL will contain nearly all of the data that is in the
P-VOL. The difference between them at any time will be the differential data
that accumulates during the TCE update cycle.
This differential data accumulates in the local data pool until the update
cycle starts, then it is transferred over the remote data path.
Update cycle time is a uniform interval of time during which differential data
copies to the S-VOL. You will define the update cycle time when creating the
TCE pair.
The update cycle time is based on:
•the amount of data written to your P-VOL
•the maximum amount of data loss your operation could survive during
a disaster.
The data loss that your operation can survive and remain viable determines
to what point in the past you must recover.
An hour's worth of data loss means that your recovery point is one hour ago.
If disaster occurs at 10:00 am, upon recovery you will resume operations
with data from 9:00 am. Fifteen minutes' worth of data loss means that your
recovery point is 15 minutes prior to the disaster.
You must determine your recovery point objective (RPO). You can do this by
measuring your host application's write-workload. This shows the amount
of data written to the P-VOL over time. You or your organization's
decision-makers can use this information to decide the number of business
transactions that can be lost, the number of hours required to key in lost
data, and so on. The result is the RPO.
Measuring write-workload
Bandwidth and data pool size are determined by understanding the write-workload placed on the primary volume by the host application.
•After the initial copy, TCE only copies changed data to the S-VOL.
•Data is changed when the host application writes to storage.
•Write-workload is a measure of changed data over a period of time.
When you know how much data is changing, you can plan the size of your
data pools and bandwidth to support your environment.
Collecting write-workload data
Workload data is collected using your operating system’s performance
monitoring feature. Collection should be performed during the busiest time
of month, quarter, and year so you can be sure your TCE implementation
will support your environment when demand is greatest. The following
procedure is provided to help you collect write-workload data.
To collect workload data
1. Using your operating system’s performance monitoring software, collect
the following:
- Disk-write bytes-per-second for every physical volume that will be
replicated.
- Collect this data at 10 minute intervals and over as long a period
as possible. Hitachi recommends a 4-6 week period in order to
accumulate data over all workload conditions including times when
the demands on the system are greatest.
2. At the end of the collection period, convert the data to MB/second and
import into a spreadsheet tool. In Figure 2-1, Write-Workload
Spreadsheet, column C shows an example of collected raw data over 10-
minute segments.
Figure 2-1: Write-Workload Spreadsheet
Fluctuations in write-workload can be seen from interval to interval. To
calculate data pool size, the interval data will first be averaged, then used
in an equation. (Your spreadsheet at this point would have only columns B
and C populated.)
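The collection step can also be scripted. Below is a minimal sketch, assuming a Linux host and the third-party psutil package; the device name is a hypothetical placeholder for a physical volume that will be replicated. It samples cumulative disk-write bytes every 10 minutes and logs the per-interval rate in MB/s, ready for the averaging step in the next section:

import csv
import time
import psutil  # third-party package, assumed installed

INTERVAL = 600     # 10-minute intervals, as recommended above
DEVICE = "sda"     # hypothetical device backing the replicated volume

with open("write_workload.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "write_MB_per_sec"])
    last = psutil.disk_io_counters(perdisk=True)[DEVICE].write_bytes
    while True:
        time.sleep(INTERVAL)
        now = psutil.disk_io_counters(perdisk=True)[DEVICE].write_bytes
        # Convert the per-interval byte delta to MB/s (decimal units)
        writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"),
                         round((now - last) / INTERVAL / 1_000_000, 3)])
        f.flush()
        last = now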
Calculating data pool size
In addition to write-workload data, cycle time must be known. Cycle time is
the frequency at which updates are sent to the remote array. This is a
user-defined value that can range from 30 seconds to 1 hour. The default
cycle time is 5 minutes (300 seconds). If consistency groups are used, the
minimum is 30 seconds for one CTG, increasing by 30 seconds for each
additional CTG, up to 16. Since the data pool stores all updated data that
accumulates during the cycle time, the longer the cycle time, the larger the
data pool must be. For more information on cycle time, see the discussion
in Assessing business needs — RPO and the update cycle on page 2-2, and
also Changing cycle time on page 9-8.
To calculate TCE data pool capacity
1. Using write-workload data imported into a spreadsheet tool and your
cycle time, calculate write rolling-averages, as follows. (Most
spreadsheet tools have an average function.)
- If cycle time is 1 hour, then calculate 60 minute rolling averages.
Do this by arranging the values in six 10-minute intervals.
- If cycle time is 30 minutes, then calculate 30 minute rolling
averages, arranging the values in three 10-minute intervals.
Example rolling-average procedure for cycle time in Microsoft Excel

Cycle time in the following example is 1 hour; rolling averages are
calculated using six 10-minute intervals.
a. After converting workload data into the spreadsheet (Figure 2-1,
Write-Workload Spreadsheet), in cell E4 type =AVERAGE(B2:B7) and
press Enter.
This instructs the tool to calculate the average of cells B2 through B7
(six 10-minute intervals) and populate cell E4 with that data. (The
calculations used here are for example purposes only. Base your
calculations on your cycle time.)
b. Copy the value that displays in E4.
c. Highlight cells E5 to the E cell in the last row of workload data in the
spreadsheet.
d. Right-click the highlighted cells and select the Paste option.
Excel maintains the logic and increments the formula values initially
entered in E4. It then calculates all the 60-minute averages for every
10-minute increment, and populates the E cells, as shown in
Figure 2-2.
Figure 2-2: Rolling Averages Calculated Using 60 Minute Cycle Time
For another perspective, you can graph the data, as shown in
Figure 2-3.
Figure 2-3: 60-Minute Rolling Averages Graphed Over Raw Data
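If you prefer scripting to a spreadsheet, the rolling-average step can be reproduced in a few lines. This is a sketch only, assuming the 10-minute samples collected earlier are in write_workload.csv and that the third-party pandas package is available:

import pandas as pd  # third-party package, assumed installed

CYCLE_SECONDS = 3600              # user-defined cycle time (1 hour here)
WINDOW = CYCLE_SECONDS // 600     # six 10-minute samples per cycle

df = pd.read_csv("write_workload.csv")
# Rolling mean over one cycle's worth of samples, like column E in Figure 2-2
df["rolling_avg"] = df["write_MB_per_sec"].rolling(window=WINDOW).mean()
print(f"Peak Rolling Average (PRA): {df['rolling_avg'].max():.2f} MB/s")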
2. From the spreadsheet or graph, locate the largest value in the E column.
This is your Peak Rolling Average (PRA) value. Use the PRA to calculate
the cumulative peak data change over cycle time. The following formula
calculates the largest expected data change over the cycle time. This will
ensure that you do not overflow your data pool.
(PRA in MB/sec) x (cycle time seconds) = (Cumulative peak data
change)
For example, if the PRA is 3 MB/sec and the cycle time is 3600 seconds
(1 hour), then:
3 MB/sec x 3600 seconds = 10,800 MB
This shows the maximum amount of changed data (pool data) that you
can expect in a 60 minute time period. This is the base data pool size
required for TCE.
3. Hitachi recommends a 20-percent safety factor for data pools. Calculate
it with the following formula, where the combined base size is the sum of
the base pool sizes for all volumes being replicated:
(Combined base data pool size) x 1.2. For example:
529,200 MB x 1.2 = 635,040 MB
4. It is also recommended that annual increases in data transactions be
factored into data pool sizing, to minimize reconfiguration in the future.
Do this by multiplying the safety-factored pool size by the expected
annual growth rate. For example:
635,040 MB x 1.2 (20-percent growth per year) = 762,048 MB
Repeat this step for each year the solution will be in place.
5. Convert to gigabytes, dividing by 1,000. For example:
762,048 MB / 1,000 = 762 GB
This is the size of the example data pool with safety and growth (2nd
year) factored in.
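Steps 2 through 5 condense to a few lines of arithmetic. The following sketch uses the example figures from this section; substitute your own PRA, cycle time, combined base size, and growth assumptions:

# Worked data pool sizing, following steps 2-5 above (example values)
pra_mb_per_sec = 3.0        # Peak Rolling Average from the spreadsheet
cycle_seconds = 3600        # user-defined cycle time (1 hour)
combined_base_mb = 529200   # base pool size combined across replicated volumes
annual_growth = 1.2         # assumed 20% growth per year
years_in_service = 1

per_volume_base_mb = pra_mb_per_sec * cycle_seconds   # step 2: 10,800 MB
pool_mb = combined_base_mb * 1.2                      # step 3: 20% safety factor
pool_mb *= annual_growth ** years_in_service          # step 4: annual growth
print(f"Base pool size for this volume: {per_volume_base_mb:,.0f} MB")
print(f"Pool size with safety and growth: {pool_mb / 1000:,.0f} GB")  # step 5: 762 GB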
Data pool key points
• Data pools must be set up on the local array and the remote array.
• The data pool must be on the same controller as the P-VOL and V-VOL(s).
• Up to 64 LUs can be assigned to a data pool.
• Plan for highest workload and multi-year growth.
• For setup information, see Setting up data pools on page 6-12.
Determining bandwidth
The purpose of this section is to ensure that you have sufficient bandwidth
between the local and remote arrays to copy all your write data in the
timeframe you prescribe. The goal is to size the network so that it is capable
of transferring estimated future write workloads.

TCE requires two remote paths, each with a minimum bandwidth of 1.5 Mb/s.
To determine the bandwidth
1. Graph the data in column “C” in the Write-Workload Spreadsheet on
page 2-4.
2. Locate the highest peak. Based on your write-workload measurements,
this is the greatest amount of data that will need to be transferred to the
remote array. Bandwidth must accommodate the maximum possible
workload to ensure that the system's capacity is not exceeded. Exceeding
it causes further problems, such as new write data backing up in the data
pool, update cycles becoming extended, and so on.
3. Though the highest peak in your workload data should be used for
determining bandwidth, you should also take notice of extremely high
peaks. In some cases a batch job, defragmentation, or other process
could be driving workload to abnormally high levels. It is sometimes
worthwhile to review the processes that are running. After careful
analysis, it may be possible to lower or even eliminate some spikes by
optimizing or streamlining high-workload processes. Changing the
timing of a process may lower workload.
4. Although bandwidth can be increased, Hitachi recommends that
projected growth rate be factored over a 1, 2, or 3 year period.
Table 2-1 shows TCE bandwidth requirements.
Table 2-1: Bandwidth Requirements

Average Inflow          Bandwidth Requirement    WAN Type
0.08 - 0.149 MB/s       1.5 Mb/s or more         T1
0.15 - 0.299 MB/s       3 Mb/s or more           T1 x two lines
0.3 - 0.599 MB/s        6 Mb/s or more           T2
0.6 - 1.199 MB/s        12 Mb/s or more          T2 x two lines
1.2 - 4.499 MB/s        45 Mb/s or more          T3
4.500 - 9.999 MB/s      100 Mb/s or more         Fast Ethernet
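As an illustrative sketch (not part of any Hitachi tooling), Table 2-1 can be expressed as a small lookup that maps a measured average inflow to the minimum bandwidth tier:

# Table 2-1 as a lookup: average inflow (MB/s) -> minimum bandwidth tier
def required_bandwidth(inflow_mb_per_sec):
    tiers = [
        (0.149, "1.5 Mb/s or more (T1)"),
        (0.299, "3 Mb/s or more (T1 x two lines)"),
        (0.599, "6 Mb/s or more (T2)"),
        (1.199, "12 Mb/s or more (T2 x two lines)"),
        (4.499, "45 Mb/s or more (T3)"),
        (9.999, "100 Mb/s or more (Fast Ethernet)"),
    ]
    for upper_bound, tier in tiers:
        if inflow_mb_per_sec <= upper_bound:
            return tier
    return "above 10 MB/s: re-evaluate the WAN design"

print(required_bandwidth(3.0))   # the 3 MB/s PRA example -> "45 Mb/s or more (T3)"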
3
Plan and design — remote
path
A remote path is required for transferring data from the local
array to the remote array. This chapter provides network and
bandwidth requirements, and supported remote path
configurations.
Remote path requirements
Remote path configurations
Using the remote path — best practices
Remote path requirements
The remote path is the connection used to transfer data between the local
array and remote array. TCE supports Fibre Channel and iSCSI port
connectors and connections. The connections you use must be either one or
the other: they cannot be mixed.
The following kinds of networks are used with TCE:
•Local Area Network (LAN), for system management. Fast Ethernet is
required for the LAN.
•Wide Area Network (WAN) for the remote path. For best performance:
- A Fibre Channel extender is required.
- iSCSI connections may require a WAN Optimization Controller
(WOC).
Figure 3-1 shows the basic TCE configuration with a LAN and WAN.
Figure 3-1: Remote Path Configuration
Requirements are provided in the following:
•Management LAN requirements on page 3-3
•Remote data path requirements on page 3-3
•WAN optimization controller (WOC) requirements on page 3-4
•Fibre channel extender connection on page 3-9.
Management LAN requirements
Fast Ethernet is required for an IP LAN.
Remote data path requirements
This section discusses the TCE remote path requirements for a WAN
connection. This includes the following:
•Types of lines
•Bandwidth
•Distance between local and remote sites
•WAN Optimization Controllers (WOC) (optional)
For instructions on assessing your system’s I/O and bandwidth
requirements, see:
•Measuring write-workload on page 2-3
•Determining bandwidth on page 2-8
Table 3-1 provides remote path requirements for TCE. A WOC may also be
required, depending on the distance between the local and remote sites and
other factors listed in Table 3-3.
Table 3-1: Remote Data Path Requirements

Bandwidth
• Bandwidth must be guaranteed.
• Bandwidth must be 1.5 Mb/s or more for each pair; 100 Mb/s is recommended.
• Bandwidth requirements depend on the average inflow from the host into the array.
• See Table 2-1 on page 2-8 for bandwidth requirements.

Remote Path Sharing
• The remote path must be dedicated to TCE pairs.
• When two or more pairs share the same path, a WOC is recommended for each pair.

Table 3-2 shows the types of WAN cabling and protocols supported by TCE
and those not supported.

Table 3-2: Supported and Not-Supported WAN Types

Supported: Dedicated line (T1, T2, T3, etc.)
Not supported: ADSL, CATV, FTTH, ISDN
WAN optimization controller (WOC) requirements
WAN Optimization Controller (WOC) is a network appliance that enhances
WAN performance by accelerating long-distance TCP/IP communications.
TCE copy performance over longer distances is significantly increased when
WOC is used. A WOC guarantees bandwidth for each line.
•Use Table 3-3 to determine whether your TCE system requires the
addition of a WOC.
•Table 3-4 shows the requirements for WOCs.
Table 3-3: Conditions Requiring a WOC

Latency, Distance – If the round-trip time is 5 ms or more, or the distance between the local site and the remote site is 100 miles (160 km) or more, a WOC is highly recommended.
WAN Sharing – If two or more pairs share the same WAN, a WOC is recommended for each pair.

Table 3-4: WOC Requirements

LAN Interface – Gigabit Ethernet or Fast Ethernet must be supported.
Performance – Data transfer capability must be equal to or greater than the bandwidth of the WAN.
Functions – Traffic shaping, bandwidth throttling, or rate limiting must be supported; these functions reduce data transfer rates to a user-specified value. Data compression must be supported. TCP acceleration must be supported.
Remote path configurations
TCE supports both Fibre Channel and iSCSI connections for the remote path.
• Two remote paths must be set up between the arrays, one per controller.
This ensures that an alternate path is available in the event of link failure
during copy operations.
• Paths can be configured from:
- Local controller 0 to remote controller 0 or 1
- Local controller 1 to remote controller 0 or 1
• Paths can connect a port A with a port B, and so on. Hitachi
recommends making connections between the same controller/port,
such as port 0B to 0B and 1B to 1B, for simplicity. Ports can be used
for both host I/O and replication data.
The following sections describe supported Fibre Channel and iSCSI path
configurations. Recommendations and restrictions are included.
Fibre channel
The Fibre Channel remote data path can be set up in the following
configurations:
•Direct connection
•Single Fibre Channel switch and network connection
•Double FC switch and network connection
•Wavelength Division Multiplexing (WDM) and dark fibre extender
The array supports direct or switch connection only. Hub connections are
not supported.
General recommendations
The following is recommended for all supported configurations:
•TCE requires one path between the host and local array. However, two
paths are recommended; the second path can be used in the event of a
path failure.
Direct connection
Figure 3-2 illustrates two remote paths directly connecting the local and
remote arrays. This configuration can be used when distance is very short,
as when creating the initial copy or performing data recovery while both
arrays are installed at the local site.
Figure 3-2: Direct FC Connection
Single FC switch, network connection
Switch connections increase throughput between the arrays. Figure 3-3
illustrates two remote paths routed through one FC switch and one FC
network to make the connection to the remote site.
Figure 3-3: Single FC Switch, Network Connection
Recommendations
•While this configuration may be used, it is not recommended since
failure in an FC switch or the network would halt copy operations.
•Separate switches should be set up for host I/O to the local array and
for data transfer between arrays. Using one switch for both functions
results in deteriorated performance.
Double FC switch, network connection
Figure 3-4 illustrates two remote paths using two FC switches and two FC
networks to make the connection to the remote site.
Figure 3-4: Double FC Switches, Networks Connection
Recommendations
•Separate switches should be set up for host I/O to the local array and
for data transfer between arrays. Using one switch for both functions
results in deteriorated performance.
Fibre channel extender connection
Channel extenders convert Fibre Channel to FCIP or iFCP, which allows you
to use IP networks and significantly improve performance over longer
distances.
Figure 3-5 illustrates two remote paths using two FC switches, Wavelength
Division Multiplexor (WDM) extender, and dark fibre to make the connection
to the remote site.
Figure 3-5: Fibre Channel Switches, WDM, Dark Fibre Connection
Recommendations
•Only qualified components are supported.
For more information on WDM, see Appendix E, Wavelength Division
Multiplexing (WDM) and dark fibre.
Port transfer rate for Fibre channel
The communication speed of the Fibre Channel port on the array must
match the speed specified on the host port. These two ports—Fibre Channel
port on the array and host port—are connected via the Fibre Channel cable.
Each port on the array must be set separately.
Table 3-5: Setting Port Transfer Rates

Manual mode:
• If the host port is set to 1 Gbps, set the remote array port to 1 Gbps.
• If the host port is set to 2 Gbps, set the remote array port to 2 Gbps.
• If the host port is set to 4 Gbps, set the remote array port to 4 Gbps.
• If the host port is set to 8 Gbps, set the remote array port to 8 Gbps.

Auto mode:
• If the host port is set to 2 Gbps, set the remote array port to Auto, with a max of 2 Gbps.
• If the host port is set to 4 Gbps, set the remote array port to Auto, with a max of 4 Gbps.
• If the host port is set to 8 Gbps, set the remote array port to Auto, with a max of 8 Gbps.
Maximum speed is ensured using the manual settings.
You can specify the port transfer rate using the Navigator 2 GUI, on the Edit FC Port screen (Settings/FC Settings/port/Edit Port button).
NOTE: If your remote path is a direct connection, make sure that the
array power is off when modifying the transfer rate to prevent remote path
blockage.
Find details on communication settings in the Hitachi AMS 2100/2300 Storage System Hardware Guide.
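As an illustrative sketch of Table 3-5's matching rule (not a Hitachi utility), the logic looks like this:

# Table 3-5 as logic: pick the remote array port setting for a host port rate
def remote_port_setting(host_gbps, manual=True):
    if host_gbps not in (1, 2, 4, 8):
        raise ValueError("unsupported host port rate")
    if manual:
        return f"{host_gbps} Gbps"   # fixed rate; maximum speed ensured
    if host_gbps == 1:
        return "1 Gbps"              # Table 3-5 lists no Auto row for 1 Gbps
    return f"Auto, with max of {host_gbps} Gbps"

print(remote_port_setting(4))                 # "4 Gbps"
print(remote_port_setting(4, manual=False))   # "Auto, with max of 4 Gbps"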
iSCSI
The iSCSI remote data path can be set up in the following configurations:
•Direct connection
•Local Area Network (LAN) switch connections
•Wide Area Network (WAN) connections
•WAN Optimization Controller (WOC) connections
Recommendations
The following is recommended for all supported configurations:
•Two paths should be configured from the host to the array. This
provides a backup path in the event of path failure.
Direct connection
Figure 3-6 illustrates two remote paths directly connecting the local and
remote arrays. Direct connections are used when the local and remote
arrays are set up at the same site.
Figure 3-6: Direct iSCSI Connection
Recommendations
•When a large amount of data is to be copied to the remote site, the
initial copy between local side and remote systems may be performed
at the same location. In this case, category 5e or 6 copper LAN cable is
recommended.
Single LAN switch, WAN connection
Figure 3-7 illustrates two remote paths using one LAN switch and network
to the remote array.
Figure 3-7: Single-Switch Connection
Recommendations
•This configuration is not recommended because a failure in a LAN
switch or WAN would halt operations.
•Separate LAN switches and paths should be used for host-to-array and
array-to-array, for improved performance.
Multiple LAN switch, WAN connection
Figure 3-8 illustrates two remote paths using multiple LAN switches and
WANs to make the connection to the remote site.
Figure 3-8: Multiple-Switch and WAN Connection
Recommendations
•Separate LAN switches and paths should be used for the host-to-array
and the array-to-array paths for better performance and to provide a
backup.
Single LAN switch, WOC, WAN connection
WOCs may be required for TCE, depending on your system’s bandwidth,
latency, and so on. Use of a WOC improves performance. See WAN
optimization controller (WOC) requirements on page 3-4 for more
information.
Figure 3-9 illustrates two remote paths using a single LAN switch, WOC,
and WAN to make the connection to the remote site.
Figure 3-9: Single Switch, WOC, and WAN Connection
Multiple LAN switch, WOC, WAN connection
Figure 3-10 illustrates two remote connections using multiple LAN
switches, WOCs, and WANs to make the connection to the remote site.
Figure 3-10: Connection Using Multiple Switch, WOC, WAN
Recommendations
•If a Gigabit Ethernet port (1000BASE-T) is provided on the WOC, the
LAN switch to the WOC is not required. Connect array ports 0B and 1B
to the WOC directly. If your WOC does not have 1Gbps ports, the LAN
switch is required.
•Using separate LAN switch, WOC and WAN for each remote path
ensures that data copy automatically continues on the second path in
the event of a path failure.
Multiple array, LAN switch, WOC connection with single WAN
Figure 3-11 shows two local arrays connected to two remote arrays, each
via a LAN switch and WOC.
Figure 3-11: Multiple Array Connection Using Single WAN
Recommendations
•If a Gigabit Ethernet port (1000BASE-T) is provided on the WOC, the
LAN switch to the WOC is not required. Connect array ports 0B and 1B
to the WOC directly. If your WOC does not have 1Gbps ports, the LAN
switch is required.
• You can reduce the number of switches by using a switch with VLAN
capability. If a VLAN switch is used, port 0B of local array 1 and WOC1
should be in one LAN (VLAN1); port 0B of local array 2 and WOC3
should be in another LAN (VLAN2), that is, connected to the switch's
VLAN2 ports.
Multiple array, LAN switch, WOC connection with two WANs
Figure 3-12 shows two local arrays connected to two remote arrays, each
via two LAN switches, WANs, and WOCs.
Figure 3-12: Multiple Array Connection Using Two WANs
Recommendations
•If a Gigabit Ethernet port (1000BASE-T) is provided on the WOC, the
LAN switch to the WOC is not required. Connect array ports 0B and 1B
to the WOC directly. If your WOC does not have 1Gbps ports, the LAN
switch is required.
• You can reduce the number of switches by using a switch with VLAN
capability. If a VLAN switch is used, port 0B of local array 1 and WOC1
should be in one LAN (VLAN1); port 0B of local array 2 and WOC3
should be in another LAN (VLAN2), that is, connected to the switch's
VLAN2 ports.
Supported connections between various types of arrays
Hitachi AMS2100, AMS2300, or AMS2500 can be connected with Hitachi
WMS100, AMS200, AMS500, or AMS1000. The following table shows the
supported connections between various types of arrays.
Table 3-6: Supported Connections between Various Types of Arrays
Notes when Connecting Hitachi AMS2000 Series to other arrays
• The maximum number of pairs that can be created is limited to the
maximum number of pairs supported by either array, whichever is fewer.
• The firmware version of AMS500/1000 must be 0780/A or later when
connecting with an AMS2100, AMS2300, or AMS2500 whose H/W Rev. is 0100.
• The firmware version of AMS500/1000 must be 0786/A or later when
connecting with an AMS2100, AMS2300, or AMS2500 whose H/W Rev. is 0200.
• The firmware version of AMS2100, AMS2300, or AMS2500 must be
08B7/B or later when connecting with HUS100.
• If a Hitachi Unified Storage array as the local array connects to an
AMS2010, AMS2100, AMS2300, or AMS2500 with firmware earlier than
08B7/B as the remote array, the remote path will be blocked, along with
the following message:
- For Fibre Channel connection: The target of remote path cannot be connected(Port-xy) / Path alarm(Remote-X,Path-Y)
- For iSCSI connection: Path Login failed
• The bandwidth of the remote path to AMS500/1000 must be 20 Mbps or more.
• Pair operations on AMS500/1000 cannot be performed from Navigator 2.
• Because AMS500 or AMS1000 has only one data pool per controller, the
user cannot specify which data pool to use. For that reason, when
connecting AMS500 or AMS1000 with AMS2100, AMS2300, or
AMS2500, the data pools are selected as follows (see the sketch after
this list):
- When AMS500 or AMS1000 is the local array, data pool 0 is
selected if the LUN of the S-VOL is even, and data pool 1 is
selected if it is odd. In a configuration where the S-VOL LU numbers
include both odd and even values, both data pool 0 and data pool 1
are required.
- When AMS2100, AMS2300, or AMS2500 is the local array, the data
pool number is ignored even if specified. Data pool 0 is selected if
the owner controller of the S-VOL is 0, and data pool 1 is selected
if it is 1.
• AMS500 or AMS1000 cannot use the functions that are newly
supported by AMS2100, AMS2300, or AMS2500.
• AMS2100, AMS2300, or AMS2500 cannot use the functions that are
newly supported by HUS100.
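The data pool selection rule above reduces to simple parity and controller checks, as this illustrative sketch (not a Hitachi API) shows:

# Data pool selection when AMS500/1000 is paired with an AMS2000 Family array
def pool_when_ams500_is_local(svol_lun):
    # AMS500/1000 as local array: pool follows S-VOL LUN parity
    return 0 if svol_lun % 2 == 0 else 1

def pool_when_ams2000_is_local(svol_owner_controller):
    # AMS2000 as local array: any specified pool number is ignored;
    # the pool follows the S-VOL's owner controller (0 or 1)
    return svol_owner_controller

print(pool_when_ams500_is_local(7))    # odd LUN -> data pool 1
print(pool_when_ams2000_is_local(0))   # controller 0 -> data pool 0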
Using the remote path — best practices
The following best practices are provided to reduce and eliminate path
failure.
•If both arrays are powered off, power-on the remote array first.
•When powering down both arrays, turn off the local array first.
•Before powering off the remote array, change pair status to Split. In
Paired or Synchronizing status, a power-off results in Failure status on
the remote array.
•If the remote array is not available during normal operations, a path
blockage error results, with a notice issued through the SNMP Agent
Support Function trap. In this case, follow the instructions in the notice.
Path blockage recovers automatically after restarting. If the path
blockage is not recovered when the array is READY, contact Hitachi
Customer Support.
•Power off the arrays before performing the following operation:
- Setting or changing the fibre transfer rate
4
Plan and design—arrays,
volumes and operating
systems
This chapter provides the information you need to prepare your
arrays and volumes for TCE operations.
Planning arrays—moving data from earlier AMS models
Planning logical units for TCE volumes
Operating system recommendations and restrictions
Maximum supported capacity
Planning workflow
Planning a TCE system consists of determining business requirements for
recovering data, measuring production write-workload and sizing data pools
and bandwidth, designing the remote path, and planning your arrays and
volumes. This chapter discusses arrays and volumes as follows:
•Requirements and recommendations for using previous versions of AMS
with the AMS 2000 Family.
•Logical unit set up: LUs must be set up on the arrays before TCE is
implemented. Volume requirements and specifications are provided.
•Operating system considerations: Operating systems have specific
restrictions for replication volume pairs. These restrictions, plus
recommendations, are provided.
•Maximum Capacity Calculations: Required to make certain that your
array has enough capacity to support TCE. Instructions are provided for
calculating your volumes’ maximum capacity.
Planning arrays—moving data from earlier AMS models
Logical units on AMS 2100, 2300, and 2500 systems can be paired with
logical units on AMS 500 and AMS 1000 systems. Any combination of these
arrays may be used on the local and remote sides.
TCE pairs with WMS 100 and AMS 200 are not supported with AMS2100,
2300, or 2500.
When using the earlier model arrays, please observe the following:
•The bandwidth of the remote path to AMS 500 or AMS 1000 must be 20
Mbps or more.
•The maximum number of pairs between different model arrays is
limited to the maximum number of pairs supported by the smallest
array.
•The firmware version of AMS 500 or AMS 1000 must be 0780/A or later
when pairing with an AMS 2100, 2300, or 2500 where the hardware
Rev is 0100.
•The firmware version of AMS 500 or AMS 1000 must be 0786/A or later
when pairing with an AMS 2010, 2100, 2300, or 2500 where the
hardware Rev is 0200.
•Pair operations for AMS 500 and AMS 1000 cannot be performed using
the Navigator 2 GUI.
•Because AMS 500 or AMS 1000 can have only one data pool per
controller, you are not able to specify which data pool to use. Because
of this, the data pool that is used is determined as follows:
- When AMS 500 or AMS 1000 is the local array, data pool 0 is used
if the S-VOL LUN is even; data pool 1 is used if the S-VOL LUN is
odd.
- When an AMS 2100, 2300, or 2500 is the local array, the data pool
number is ignored even if specified. Data pool 0 is used if the
S-VOL owner controller is 0, and data pool 1 is used if the S-VOL
owner controller is 1 (see the sketch after this list).
•The AMS 500 or AMS 1000 cannot use the functions that are newly
supported by AMS 2010, 2100, 2300, or 2500.
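The data pool selection rules above can be summarized in a short sketch.
This is illustrative Python, not part of any Hitachi tool; the function names
and arguments are hypothetical:

    def ams500_1000_local_pool(svol_lun: int) -> int:
        """AMS 500/1000 as local array: pool 0 for an even S-VOL LUN, pool 1 for odd."""
        return 0 if svol_lun % 2 == 0 else 1

    def ams2000_local_pool(svol_owner_controller: int) -> int:
        """AMS 2100/2300/2500 as local array: the pool follows the S-VOL owner controller."""
        return svol_owner_controller  # controller 0 -> data pool 0, controller 1 -> data pool 1

    print(ams500_1000_local_pool(14))  # -> 0 (even LUN)
    print(ams500_1000_local_pool(15))  # -> 1 (odd LUN)
    print(ams2000_local_pool(1))       # -> 1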
Planning logical units for TCE volumes
Please review the recommendations in the following sections before setting
up TrueCopy volumes. Also, review Requirements and specifications on
page 5-1.
Volume pair and data pool recommendations
•The P-VOL and S-VOL must be identical in size, with matching block
counts. To check the block count in the Navigator 2 GUI, navigate to
the Groups/RAID Groups/Logical Units tab, click the desired LUN, and
review the Capacity field in the popup window that appears; it shows
the size in blocks.
•The number of volumes within the same RAID group should be limited.
Pair creation or resynchronization for one of the volumes may impact
I/O performance for the others because of contention between drives.
When creating two or more pairs within the same RAID group,
standardize the controllers for the LUs in the RAID group. Also, perform
pair creation and resynchronization when I/O to other volumes in the
RAID group is low.
•Assign primary and secondary volumes and data pools to a RAID group
consisting of SAS drives, SAS7.2K drives, SSD drives, or SAS (SED)
drives to achieve best possible performance. SATA drives can be used,
however.
•When cascading TrueCopy and SnapShot pairs, assign a volume
consisting of SAS, SAS7.2K, SSD, or SAS (SED) drives, and assign four
or more disks to a data pool.
•Assign an LU consisting of four or more data disks; otherwise host and
copy performance may be degraded.
•Limit the I/O load on both local and remote arrays to maximize
performance. Performance on each array also affects performance on
the other array, as well as data pool capacity and the synchronization of
volumes. Therefore, it is best to assign to a data pool a volume of SAS,
SAS7.2K, SSD, or SAS (SED) drives (which have higher performance
than SATA drives) consisting of four or more disks.
Operating system recommendations and restrictions
The following sections provide operating system recommendations and
restrictions.
Host time-out
I/O time-out from the host to the array should be more than 60 seconds.
You can determine the host I/O time-out by multiplying the remote path
time-out value by 6. For example, if the remote path time-out value is 27
seconds, set the host I/O time-out to 162 seconds (27 x 6) or more, as in
the following sketch.
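A minimal sketch of this rule (illustrative Python, assuming time-outs in
seconds; the function name is hypothetical):

    def host_io_timeout(remote_path_timeout_sec: int) -> int:
        """Minimum host I/O time-out: remote path time-out x 6, at least 60 seconds."""
        return max(60, remote_path_timeout_sec * 6)

    print(host_io_timeout(27))  # -> 162 seconds, matching the example above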
P-VOL, S-VOL recognition by same host on VxVM, AIX®, LVM
VxVM, AIX®, and LVM do not operate properly when both the P-VOL and
S-VOL are set up to be recognized by the same host. On these platforms,
the P-VOL should be recognized by one host and the S-VOL by a
different host.
HP server
When MC/Service Guard is used on an HP server, connect the host group
(Fibre Channel) or the iSCSI Target to the HP server as follows:
For Fibre Channel interfaces
1. In the Navigator 2 GUI, access the array and click Host Groups in the
Groups tree view. The Host Groups screen displays.
2. Click the check box for the Host Group that you want to connect to the
HP server.
WARNING! Your host group changes will be applied to multiple ports. This
change will delete existing host group mappings and corresponding Host
Group IDs, corrupting or removing data associated with the host groups. To
keep specified host groups you do not want to remove, please cancel this
operation and make changes to only one host group at a time.
3. Click Edit Host Group. The Edit Host Group screen appears.
4. Select the Options tab.
5. From the Platform drop-down list, select HP-UX. Doing this causes
Enable HP-UX Mode and Enable PSUE Read Reject Mode to be
selected in the Additional Setting box.
6. Click OK. A message appears; click Close.
For iSCSI interfaces
1. In the Navigator 2 GUI, access the array and click iSCSI Targets in the
Groups tree view. The iSCSI Targets screen displays.
2. Click the check box for the iSCSI Targets that you want to connect to the
HP server.
3. Click Edit Target. The Edit iSCSI Target screen appears.
4. Select the Options tab.
5. From the Platform drop-down list, select HP-UX. Doing this causes
“Enable HP-UX Mode” and “Enable PSUE Read Reject Mode” to be
selected in the Additional Setting box.
6. Click OK. A message appears; click Close.
Windows Server 2000
•A P-VOL and S-VOL cannot be made into a dynamic disk on Windows
Server 2000 and Windows Server™ 2008.
•Native OS mount/dismount commands can be used on all platforms
except Windows Server 2000. The native commands in this
environment do not guarantee that all data buffers are completely
flushed to the volume when dismounting. In this case, you must
use CCI to perform volume mount/unmount operations. For more
information on the CCI mount/unmount commands, see the Hitachi
Adaptable Modular Storage Command Control Interface (CCI)
Reference Guide.
Windows Server 2003/2008
•A P-VOL and S-VOL can be made into a dynamic disk on Windows
Server 2003.
•In Windows Server™ 2008, refer to the Hitachi Adaptable Modular
Storage Command Control Interface (CCI) Reference Guide for the
restrictions when the mount/unmount command is used.
•Windows® may write to the un-mounted volume. If a pair is
resynchronized while data for the S-VOL remains in server
memory, a consistent backup cannot be collected. Therefore,
execute the CCI sync command immediately before re-synchronizing
the pair for the un-mounted S-VOL.
•In Windows Server™ 2008, set only the P-VOL of TCE to be recognized
by the host and let another host recognize the S-VOL.
•(CCI only) If a path detachment is caused by controller detachment or
Fibre Channel failure, and the detachment continues for longer than
one minute, the command device may not be recognized when
recovery occurs. In this case, execute the “re-scanning of the disks” in
Windows. If Windows cannot access the command device, though CCI
recognizes the command device, restart CCI.
•Volumes to be recognized by the same host: If the P-VOL and S-VOL
are recognized by the same Windows Server™ 2008 host at the same
time, an error may occur because the P-VOL and S-VOL have the same
disk signature. When the P-VOL and S-VOL have the same data, split
the pair and then rewrite the disk signature so that they retain different
disk signatures. You can use the uniqueid command to rewrite a disk
signature. See the Hitachi Adaptable Modular Storage Command
Control Interface (CCI) User's Guide for details.
Identifying P-VOL and S-VOL LUs on Windows
In Navigator 2, the P-VOL and S-VOL are identified by their LU number
(LUN). In Windows Server 2003, LUs are identified by HLUN. To map a LUN
to an HLUN on Windows, proceed as follows. These instructions provide
procedures for iSCSI and Fibre Channel interfaces.
1. Identify the HLUN of your Windows disk.
a. From the Windows Server 2003 Control Panel, select Computer
Management>Disk Administrator.
b. Right-click the disk whose HLUN you want to know, then select
Properties. The number displayed to the right of “LUN” in the dialog
window is the HLUN.
2. Identify HLUN-to-LUN Mapping for the iSCSI interface as follows. (If
using Fibre Channel, skip to Step 3.)
a. In the Navigator 2 GUI, select the desired array.
b. In the array tree that displays, click the Group icon, then click the
iSCSI Targets icon in the Groups tree.
c. On the iSCSI Target screen, select an iSCSI target.
d. On the target screen, select the Logical Units tab. Find the
identified HLUN. The LUN displays in the next column.
e. If the HLUN is not present on a target screen, on the iSCSI Target
screen, select another iSCSI target and repeat Step 2d.
3. Identify HLUN-to-LUN Mapping for the Fibre Channel interface, as
follows:
a. In Navigator 2, select the desired array.
b. In the array tree that displays, click the Groups icon, then click the
Host Groups icon in the Groups tree.
WARNING! Your host group changes will be applied to multiple ports. This
change will delete existing host group mappings and corresponding Host
Group IDs, corrupting or removing data associated with the host groups. To
keep specified host groups you do not want to remove, please cancel this
operation and make changes to only one host group at a time.
c. On the Host Groups screen, select a Host group.
d. On the host group screen, select the Logical Units tab. Find the
identified HLUN. The LUN displays in the next column.
e. If the HLUN is not present on a host group screen, select another
host group on the Host Groups screen and repeat Step 3d.
Windows 2000 or Windows Server and TCE Configuration
•Volume mount:
To make a consistent backup using storage-based replication such as TCE,
you must have a way to flush the data residing in server memory to the
array, so that the source volume of the replication contains the complete
data.
You can flush the data in server memory by using the CCI umount command
to unmount the volume. When using the CCI umount command for
unmount, use the CCI mount command for mount.
When using Windows® 2000, do not use the standard Windows® 2000
mountvol command; use the CCI mount/umount commands instead, even
if you are using the Navigator 2 GUI or CLI for pair operations.
Windows Server™ 2003 supports mountvol /P to flush data in server
memory when un-mounting the volume. Understand the command's
specification and run sufficient tests before using it in your operation.
In Windows Server™ 2008, refer to the Hitachi Adaptable Modular Storage
Command Control Interface (CCI) Reference Guide for the restrictions when
the mount/unmount command is used.
Windows® may write to the un-mounted volume. If a pair is resynchronized
while data for the S-VOL remains in server memory, a consistent backup
cannot be collected. Therefore, execute the CCI sync command immediately
before re-synchronizing the pair for the un-mounted S-VOL.
For more detail about the CCI commands, see the Hitachi Adaptable
Modular Storage Command Control Interface (CCI) Reference Guide.
•Volumes to be recognized by the same host
If the P-VOL and S-VOL are recognized by the same Windows Server™ 2008
host at the same time, an error may occur because the P-VOL and S-VOL
have the same disk signature. When the P-VOL and S-VOL have the same
data, split the pair and then rewrite the disk signature so that they retain
different disk signatures. You can use the uniqueid command to rewrite a
disk signature. See the Hitachi Adaptable Modular Storage Command
Control Interface (CCI) User's Guide for details.
•Command devices:
When a remote path detachment caused by a controller detachment or
Fibre Channel failure continues for longer than one minute, the command
device may not be recognized when the remote path recovers. In this case,
execute "re-scanning of the disks" in Windows®. If Windows® cannot
access the command device even though CCI recognizes it, restart CCI.
Dynamic Disk in Windows 2000/Windows Server
In a Windows Server 2000/Windows Server environment, you cannot use
TCE pair volumes as dynamic disks. If you restart Windows or run the
Rescan Disks command after creating or re-synchronizing a TCE pair, the
S-VOL may be displayed as Foreign in Disk Management and become
inaccessible.
VMware and TCE Configuration
When creating a backup of a virtual disk in the vmfs format using TCE,
shut down the virtual machine that accesses the virtual disk, and then split
the pair.
If one LU is shared by multiple virtual machines, shut down all the virtual
machines that share the LU when creating a backup. Sharing one LU among
multiple virtual machines is not recommended in a configuration that
creates backups using TCE.
Concurrent Use of Dynamic Provisioning
•When the array firmware version is less than 0893/A, the DP-VOLs
created by Dynamic Provisioning cannot be set for a P-VOL or an S-VOL
of TCE. Moreover, when the array firmware version is less than 0893/A,
the DP-VOLs cannot be added to the data pool used by SnapShot and
TCE.
•Depending on the installed cache memory, Dynamic Provisioning and
TCE may not be unlocked at the same time. To unlock Dynamic
Provisioning and TCE at the same time, add cache memory. For the
supported cache memory capacity, refer to User Data Area of
Cache Memory on page 4-16.
•The data pool used by SnapShot and TCE cannot be used as a DP pool
of Dynamic Provisioning. Moreover, the DP pool used by Dynamic
Provisioning cannot be used as data pools of SnapShot and TCE.
When the array firmware version is 0893/A or more, the DP-VOLs created
by Dynamic Provisioning can be set for a P-VOL, an S-VOL, or a data pool
of TCE. However, a normal LU and a DP-VOL cannot coexist in the same
data pool.
The points to keep in mind when using TCE and Dynamic Provisioning
together are described here. Refer to the Hitachi Adaptable Modular Storage
Dynamic Provisioning User's Guide for detailed information about Dynamic
Provisioning. Hereinafter, the LU created in the RAID group is called a
normal LU and the LU created in the DP pool that is created by Dynamic
Provisioning is called a DP-VOL.
•When using a DP-VOL as a DMLU
Check that the free capacity (formatted) of the DP pool to which the DP-VOL
belongs is 10 GB or more, and then set the DP-VOL as a DMLU. If the free
capacity of the DP pool is less than 10 GB, the DP-VOL cannot be set as a
DMLU.
•LU type that can be set for a P-VOL, an S-VOL, or a data pool of TCE
The DP-VOL created by Dynamic Provisioning can be used for a P-VOL, an
S-VOL, or a data pool of TCE. Table 4-1 and Table 4-2 show a combination
of a DP-VOL and a normal LU that can be used for a P-VOL, an S-VOL, or a
data pool of TCE.
Table 4-1: Combination of a DP-VOL and a Normal LU

TCE P-VOL   TCE S-VOL   Contents
DP-VOL      DP-VOL      Available. The P-VOL and S-VOL capacity can be
                        reduced compared to the normal LU.
DP-VOL      Normal LU   Available.
Normal LU   DP-VOL      Available. When the pair status is Split, the S-VOL
                        capacity can be reduced compared to the normal
                        LU by deleting 0 data.
Table 4-2: Combination of a DP-VOL for Data Pool and a Normal LU

P-VOL Data Pool   S-VOL Data Pool   Contents
DP-VOL            DP-VOL            Available. The data pool consumed capacity can
                                    be reduced compared to the normal LU on both
                                    the local side and the remote side.
DP-VOL            Normal LU         Available. The data pool consumed capacity can
                                    be reduced compared to the normal LU on the
                                    local side.
Normal LU         DP-VOL            Available. The data pool consumed capacity can
                                    be reduced compared to the normal LU on the
                                    remote side.
When creating a TCE pair using DP-VOLs, DP-VOLs with Full Capacity
Mode enabled and DP-VOLs with it disabled cannot be mixed in the
P-VOL, the S-VOL, or the data pool specified at pair creation.
Likewise, when creating a data pool from multiple DP-VOLs, the data
pool cannot be created by combining DP-VOLs whose Full Capacity
Mode settings (enabled/disabled) differ.
•Assigning the controlled processor core of a P-VOL and a data pool that
uses the DP-VOL
On the AMS2500, when the controlled processor core of the DP-VOL
used for a P-VOL (S-VOL) differs from that of the TCE data pool (as with
a normal LU), the P-VOL (S-VOL) controlled processor core assignment
is switched automatically to the data pool's controlled processor core
when the pair is created.
•DP pool designation of a P-VOL (S-VOL) and a data pool which uses the
DP-VOL
When using DP-VOLs created by Dynamic Provisioning for a P-VOL
(S-VOL) and a data pool of TCE, placing the P-VOL (S-VOL) and the
data pool in separate DP pools is recommended for performance.
•Setting the capacity when placing the DP-VOL in the data pool
When the pair status is Split, the old data is copied to the data pool
while writing to the P-VOL. When using a DP-VOL created by Dynamic
Provisioning as the data pool of TCE, the consumed capacity of the
DP-VOL in the data pool increases as the old data is stored. If a DP-VOL
larger than or equal to the DP pool capacity is created and used for the
data pool, this processing may deplete the DP pool capacity. When
using a DP-VOL for the data pool of TCE, it is recommended to set the
capacity so that the over provisioning ratio is 100% or less and the DP
pool capacity does not deplete.
Furthermore, the threshold value of the TCE data pool and the
threshold value of the DP pool differ. Even if the TCE data pool usage
rate shows 10% or less, the DP pool consumed capacity may have
exceeded the Depletion Alert. Check whether the actual usage rate falls
below the respective threshold values of the TCE data pool and the DP
pool. A minimal sketch of the over provisioning check follows.
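This is illustrative Python; it assumes the over provisioning ratio is the total
DP-VOL capacity divided by the DP pool capacity, expressed as a
percentage:

    def over_provisioning_ratio_pct(total_dpvol_gb: float, dp_pool_gb: float) -> float:
        """Ratio of DP-VOL capacity defined against the DP pool capacity."""
        return total_dpvol_gb / dp_pool_gb * 100.0

    ratio = over_provisioning_ratio_pct(total_dpvol_gb=900, dp_pool_gb=1000)
    print(f"{ratio:.0f}%")  # -> 90%, within the recommended 100% or less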
•Pair status at the time of DP pool capacity depletion
When the DP pool is depleted while operating a TCE pair that uses
DP-VOLs created by Dynamic Provisioning, the pair status may become
Failure. Table 4-3 on page 4-13 shows the pair statuses before and
after DP pool capacity depletion. When the pair status becomes Failure
because of DP pool capacity depletion, add capacity to the depleted DP
pool, and execute the pair operation again.
Table 4-3: Pair Statuses before the DP Pool Capacity Depletion and
Pair Statuses after the DP Pool Capacity Depletion

Pair Status before      After Depletion of DP Pool   After Depletion of DP Pool
Depletion               Belonging to P-VOL           Belonging to Data Pool
Simplex                 Simplex                      Simplex
Synchronizing           Failure (See Note)           Failure
Reverse Synchronizing   Failure                      Reverse Synchronizing
Paired                  Failure (See Note)           Paired
Split                   Split                        Split
Failure                 Failure                      Failure

NOTE: When a write is performed to the P-VOL to which the depleted DP
pool belongs, the copy cannot be continued and the pair status becomes
Failure.
•DP pool status and availability of pair operation
When using a DP-VOL created by Dynamic Provisioning for a P-VOL
(S-VOL) or a data pool of the TCE pair, the pair operation may not be
executed depending on the status of the DP pool to which the DP-VOL
belongs. Table 4-4 on page 4-14 and Table 4-5 on page 4-14 show the
DP pool statuses and availability of the TCE pair operation. When the
pair operation fails due to the DP pool status, correct the DP pool status
and execute the pair operation again.
Table 4-4: DP Pool for P-VOL Statuses and Availability of Pair Operation
(DP pool statuses, DP pool capacity statuses, and DP pool optimization)

(1) Refer to the status of the DP pool to which the DP-VOL of the S-VOL
belongs. If the pair operation would exceed the capacity of the DP pool
belonging to the S-VOL, the pair operation cannot be performed.
(2) Refer to the status of the DP pool to which the DP-VOL of the P-VOL
belongs. If the pair operation would exceed the capacity of the DP pool
belonging to the P-VOL, the pair operation cannot be performed.
(3) When the DP pool was created or capacity was added, formatting
operates on the DP pool. If pair creation, pair resynchronization, or
swapping is performed during the formatting, depletion of the usable
capacity may occur. Since the formatting progress is displayed when
checking the DP pool status, check that sufficient usable capacity is
secured according to the formatting progress, and then start the operation.
•Operation of the DP-VOL during TCE use
When using a DP-VOL created by Dynamic Provisioning for a P-VOL,
an S-VOL, or a data pool of TCE, the capacity growing, capacity
shrinking, LU deletion, and Full Capacity Mode change operations
cannot be executed on the DP-VOL in use. To execute such an
operation, delete the TCE pair in which the DP-VOL is used, and then
perform the operation.
•Operation of the DP pool during TCE use
When using a DP-VOL created by Dynamic Provisioning for a P-VOL,
an S-VOL, or a data pool of TCE, the DP pool to which the in-use
DP-VOL belongs cannot be deleted. To delete it, first delete the TCE pair
that uses the DP-VOL belonging to that DP pool. Attribute editing and
capacity addition of the DP pool can be executed as usual regardless of
the TCE pair.
•Availability of TCE pair creation between different firmware versions
When the firmware versions of the local and remote arrays differ, a TCE
pair can be created if the firmware version of the array containing the
DP-VOL is 0893/A or more (Figure 4-1). Table 4-6 shows pair creation
availability when the firmware version of each array is 0893/A or more
or less than 0893/A (including AMS500/1000).
Figure 4-1: Availability of TCE Pair Creation between Different
Firmware Versions
Table 4-6: Availability of TCE Pair Creation between Different Firmware
Versions

                        Local: 0893/A or more    Local: 0893/A or more    Local: Less than 0893/A
P-VOL       S-VOL       Remote: 0893/A or more   Remote: Less than 0893   Remote: 0893/A or more
Normal LU   Normal LU   Available                Available                Available
Normal LU   DP-VOL      Available                Not available            Available
DP-VOL      Normal LU   Available                Available                Not available
DP-VOL      DP-VOL      Available                Not available            Not available
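Table 4-6 reduces to a simple rule: a DP-VOL is allowed on a side only if
that side's firmware is 0893/A or later. The following is a minimal sketch
(illustrative Python; firmware versions are simplified to booleans):

    def tce_pair_creation_available(local_0893_or_later: bool,
                                    remote_0893_or_later: bool,
                                    pvol_is_dpvol: bool,
                                    svol_is_dpvol: bool) -> bool:
        if pvol_is_dpvol and not local_0893_or_later:
            return False  # P-VOL DP-VOL requires local firmware 0893/A or later
        if svol_is_dpvol and not remote_0893_or_later:
            return False  # S-VOL DP-VOL requires remote firmware 0893/A or later
        return True

    # Normal LU P-VOL paired with a DP-VOL S-VOL on a pre-0893/A remote array:
    print(tce_pair_creation_available(True, False, False, True))  # -> False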
•Cascade connection
A cascade can be performed on the same conditions as the normal LU.
However, the firmware version of the array including the DP-VOL needs
to be 0893/A or more.
User Data Area of Cache Memory
When TCE is used, part of the cache memory is secured for its internal
operations, and the user data area of the cache memory decreases. Using
SnapShot and Dynamic Provisioning together may decrease the user data
area further. Table 4-7 through Table 4-14 on page 4-20 show the cache
memory secured capacity and the user data area when using these program
products. For Dynamic Provisioning, the user data area differs depending on
DP Capacity Mode. Refer to the Hitachi Adaptable Modular Storage Dynamic
Provisioning User's Guide for detailed information.
Table 4-7: Supported Capacity of the Regular Capacity Mode
(H/W Rev. is 0100) (1 of 2)

Array Type   Cache Memory   Management Capacity for   Capacity Secured for
                            Dynamic Provisioning      SnapShot or TCE
AMS2100      1 GB/CTL       80 MB                     -
AMS2100      2 GB/CTL       80 MB                     512 MB
AMS2100      4 GB/CTL       80 MB                     2 GB
AMS2300      1 GB/CTL       140 MB                    -
AMS2300      2 GB/CTL       140 MB                    512 MB
AMS2300      4 GB/CTL       140 MB                    2 GB
AMS2300      8 GB/CTL       140 MB                    4 GB
AMS2500      2 GB/CTL       300 MB                    512 MB
AMS2500      4 GB/CTL       300 MB                    1.5 GB
AMS2500      6 GB/CTL       300 MB                    3 GB
AMS2500      8 GB/CTL       300 MB                    4 GB
AMS2500      10 GB/CTL      300 MB                    5 GB
AMS2500      12 GB/CTL      300 MB                    6 GB
AMS2500      16 GB/CTL      300 MB                    8 GB
Table 4-8: Supported Capacity of the Regular Capacity Mode
(H/W Rev. is 0100) (2 of 2)

Array Type   Cache       Capacity Secured for   User Data Area when     User Data Area   User Data Area when
             Memory      Dynamic Provisioning   Dynamic Provisioning,   when Using       Using Dynamic
                         and TCE or SnapShot    TCE, and SnapShot       Dynamic          Provisioning and
                                                are Disabled            Provisioning     TCE or SnapShot
AMS2100      1 GB/CTL    -                      590 MB                  590 MB           N/A
AMS2100      2 GB/CTL    580 MB                 1,520 MB                1,440 MB         940 MB
AMS2100      4 GB/CTL    2,120 MB               3,520 MB                3,460 MB         1,400 MB
AMS2300      1 GB/CTL    -                      500 MB                  500 MB           N/A
AMS2300      2 GB/CTL    660 MB                 1,440 MB                1,300 MB         780 MB
AMS2300      4 GB/CTL    2,200 MB               3,280 MB                3,120 MB         1,080 MB
AMS2300      8 GB/CTL    4,240 MB               7,160 MB                7,020 MB         2,920 MB
AMS2500      2 GB/CTL    800 MB                 1,150 MB                850 MB           N/A
AMS2500      4 GB/CTL    1,830 MB               2,960 MB                2,660 MB         1,130 MB
AMS2500      6 GB/CTL    3,360 MB               4,840 MB                4,560 MB         1,480 MB
AMS2500      8 GB/CTL    4,400 MB               6,740 MB                6,440 MB         2,340 MB
AMS2500      10 GB/CTL   5,420 MB               8,620 MB                8,320 MB         3,200 MB
AMS2500      12 GB/CTL   6,440 MB               10,500 MB               10,200 MB        4,060 MB
AMS2500      16 GB/CTL   8,480 MB               14,420 MB               14,120 MB        5,940 MB
Table 4-9: Supported Capacity of the Maximum Capacity Mode
(H/W Rev. is 0100) (1 of 2)

Array Type   Cache Memory   Management Capacity for   Capacity Secured for
                            Dynamic Provisioning      SnapShot or TCE
AMS2100      1 GB/CTL       210 MB                    -
AMS2100      2 GB/CTL       210 MB                    512 MB
AMS2100      4 GB/CTL       210 MB                    2 GB
AMS2300      1 GB/CTL       310 MB                    -
AMS2300      2 GB/CTL       310 MB                    512 MB
AMS2300      4 GB/CTL       310 MB                    2 GB
AMS2300      8 GB/CTL       310 MB                    4 GB
AMS2500      2 GB/CTL       520 MB                    512 MB
AMS2500      4 GB/CTL       520 MB                    1.5 GB
AMS2500      6 GB/CTL       520 MB                    3 GB
AMS2500      8 GB/CTL       520 MB                    4 GB
AMS2500      10 GB/CTL      520 MB                    5 GB
AMS2500      12 GB/CTL      520 MB                    6 GB
AMS2500      16 GB/CTL      520 MB                    8 GB
Table 4-10: Supported Capacity of the Maximum Capacity Mode
(H/W Rev. is 0100) (2 of 2)

Array Type   Cache       Capacity Secured for   User Data Area when     User Data Area   User Data Area when
             Memory      Dynamic Provisioning   Dynamic Provisioning,   when Using       Using Dynamic
                         and TCE or SnapShot    TCE, and SnapShot       Dynamic          Provisioning and
                                                are Disabled            Provisioning     TCE or SnapShot
AMS2100      1 GB/CTL    -                      590 MB                  N/A              N/A
AMS2100      2 GB/CTL    710 MB                 1,520 MB                1,310 MB         810 MB
AMS2100      4 GB/CTL    2,270 MB               3,520 MB                3,310 MB         1,250 MB
AMS2300      1 GB/CTL    -                      500 MB                  N/A              N/A
AMS2300      2 GB/CTL    830 MB                 1,440 MB                1,130 MB         610 MB
AMS2300      4 GB/CTL    2,350 MB               3,280 MB                2,970 MB         930 MB
AMS2300      8 GB/CTL    4,410 MB               7,160 MB                6,850 MB         2,750 MB
AMS2500      2 GB/CTL    1,022 MB               1,150 MB                N/A              N/A
AMS2500      4 GB/CTL    2,208 MB               2,960 MB                N/A              N/A
AMS2500      6 GB/CTL    3,600 MB               4,840 MB                4,320 MB         1,240 MB
AMS2500      8 GB/CTL    4,620 MB               6,740 MB                6,220 MB         2,120 MB
AMS2500      10 GB/CTL   5,640 MB               8,620 MB                8,100 MB         2,980 MB
AMS2500      12 GB/CTL   6,660 MB               10,500 MB               9,980 MB         3,840 MB
AMS2500      16 GB/CTL   8,700 MB               14,420 MB               13,900 MB        5,720 MB
Table 4-11: Supported Capacity of the Regular Capacity Mode
(H/W Rev. is 0200) (1 of 2)

Array Type   Cache Memory   Management Capacity for   Capacity Secured for
                            Dynamic Provisioning      SnapShot or TCE
AMS2100      1 GB/CTL       80 MB                     -
AMS2100      2 GB/CTL       80 MB                     512 MB
AMS2100      4 GB/CTL       80 MB                     2 GB
AMS2300      1 GB/CTL       140 MB                    -
AMS2300      2 GB/CTL       140 MB                    512 MB
AMS2300      4 GB/CTL       140 MB                    2 GB
AMS2300      8 GB/CTL       140 MB                    4 GB
AMS2500      2 GB/CTL       300 MB                    512 MB
AMS2500      4 GB/CTL       300 MB                    1.5 GB
AMS2500      6 GB/CTL       300 MB                    3 GB
AMS2500      8 GB/CTL       300 MB                    4 GB
AMS2500      10 GB/CTL      300 MB                    5 GB
AMS2500      12 GB/CTL      300 MB                    6 GB
AMS2500      16 GB/CTL      300 MB                    8 GB
Table 4-12: Supported Capacity of the Regular Capacity Mode
(H/W Rev. is 0200) (2 of 2)

Array Type   Cache       Capacity Secured for   User Data Area when     User Data Area   User Data Area when
             Memory      Dynamic Provisioning   Dynamic Provisioning,   when Using       Using Dynamic
                         and TCE or SnapShot    TCE, and SnapShot       Dynamic          Provisioning and
                                                are Disabled            Provisioning     TCE or SnapShot
AMS2100      1 GB/CTL    -                      590 MB                  590 MB           N/A
AMS2100      2 GB/CTL    580 MB                 1,390 MB                1,310 MB         810 MB
AMS2100      4 GB/CTL    2,120 MB               3,360 MB                3,280 MB         1,220 MB
AMS2300      1 GB/CTL    -                      500 MB                  500 MB           N/A
AMS2300      2 GB/CTL    660 MB                 1,340 MB                1,200 MB         680 MB
AMS2300      4 GB/CTL    2,200 MB               3,110 MB                2,970 MB         930 MB
AMS2300      8 GB/CTL    4,240 MB               6,940 MB                6,800 MB         2,700 MB
AMS2500      2 GB/CTL    800 MB                 1,150 MB                850 MB           N/A
AMS2500      4 GB/CTL    1,830 MB               2,780 MB                2,480 MB         950 MB
AMS2500      6 GB/CTL    3,360 MB               4,660 MB                4,360 MB         1,280 MB
AMS2500      8 GB/CTL    4,400 MB               6,440 MB                6,140 MB         2,040 MB
AMS2500      10 GB/CTL   5,420 MB               8,320 MB                8,020 MB         2,900 MB
AMS2500      12 GB/CTL   6,440 MB               9,980 MB                9,680 MB         3,540 MB
AMS2500      16 GB/CTL   8,480 MB               14,060 MB               13,760 MB        5,580 MB
Table 4-13: Supported Capacity of the Maximum Capacity Mode
(H/W Rev. is 0200) (1 of 2)

Array Type   Cache Memory   Management Capacity for   Capacity Secured for
                            Dynamic Provisioning      SnapShot or TCE
AMS2100      1 GB/CTL       210 MB                    -
AMS2100      2 GB/CTL       210 MB                    512 MB
AMS2100      4 GB/CTL       210 MB                    2 GB
AMS2300      1 GB/CTL       310 MB                    -
AMS2300      2 GB/CTL       310 MB                    512 MB
AMS2300      4 GB/CTL       310 MB                    2 GB
AMS2300      8 GB/CTL       310 MB                    4 GB
AMS2500      2 GB/CTL       520 MB                    512 MB
AMS2500      4 GB/CTL       520 MB                    1.5 GB
AMS2500      6 GB/CTL       520 MB                    3 GB
AMS2500      8 GB/CTL       520 MB                    4 GB
AMS2500      10 GB/CTL      520 MB                    5 GB
AMS2500      12 GB/CTL      520 MB                    6 GB
AMS2500      16 GB/CTL      520 MB                    8 GB
Table 4-14: Supported Capacity of the Maximum Capacity Mode
(H/W Rev. is 0200) (2 of 2)

Array Type   Cache       Capacity Secured for   User Data Area when     User Data Area   User Data Area when
             Memory      Dynamic Provisioning   Dynamic Provisioning,   when Using       Using Dynamic
                         and TCE or SnapShot    TCE, and SnapShot       Dynamic          Provisioning and
                                                are Disabled            Provisioning     TCE or SnapShot
AMS2100      1 GB/CTL    -                      590 MB                  N/A              N/A
AMS2100      2 GB/CTL    710 MB                 1,390 MB                1,180 MB         680 MB
AMS2100      4 GB/CTL    2,270 MB               3,360 MB                3,150 MB         1,090 MB
AMS2300      1 GB/CTL    -                      500 MB                  N/A              N/A
AMS2300      2 GB/CTL    830 MB                 1,340 MB                1,030 MB         510 MB
AMS2300      4 GB/CTL    2,350 MB               3,110 MB                2,800 MB         760 MB
AMS2300      8 GB/CTL    4,410 MB               6,940 MB                6,630 MB         2,530 MB
AMS2500      2 GB/CTL    1,022 MB               1,090 MB                N/A              N/A
AMS2500      4 GB/CTL    2,078 MB               2,780 MB                N/A              N/A
AMS2500      6 GB/CTL    3,600 MB               4,660 MB                4,140 MB         1,060 MB
AMS2500      8 GB/CTL    4,620 MB               6,440 MB                5,920 MB         1,820 MB
AMS2500      10 GB/CTL   5,640 MB               8,320 MB                7,800 MB         2,680 MB
AMS2500      12 GB/CTL   6,660 MB               9,980 MB                9,460 MB         3,320 MB
AMS2500      16 GB/CTL   8,700 MB               14,060 MB               13,540 MB        5,360 MB
Formatting the DMLU in the Event of a Drive Failure
When the DMLU is in a RAID group or DP pool with RAID5 or RAID6 and a
drive failure occurs while that RAID group or DP pool has no redundancy,
the data in the DMLU becomes incomplete and unusable.
In that case, for firmware version 08C3/F and later, the DMLU automatically
becomes unformatted; make sure to format the DMLU. For versions earlier
than 08C3/F, the DMLU does not automatically become unformatted, but
you must still format the DMLU.
A DMLU can be formatted without releasing it.
Maximum supported capacity
The capacity you can assign to replication volumes per controller is limited,
for the following reasons:
•The TCE P-VOL and S-VOL, and the SnapShot P-VOL if used, share
common data pool resources. Therefore, data pool capacity is limited.
•The maximum capacity supported by a TCE pair depends on the P-VOL
capacity of SnapShot (if used), data pool capacity, and cache memory
capacity.
•When using other copy systems and TCE together, the maximum
supported capacity of the P-VOL may be restricted further.
In addition, capacity is managed by the AMS array in blocks of 15.75 KB
for data volumes and 3.2 KB for data pools. For example, when a P-VOL
block of data is actually 16 KB, the array manages it as two blocks of
15.75 KB, or 31.5 KB. Data pool capacity is managed the same way, but at
3.2 KB per block, as sketched below.
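A minimal sketch of this block accounting (illustrative Python; capacity is
rounded up to a whole number of managed blocks):

    import math

    def managed_kb(actual_kb: float, block_kb: float) -> float:
        """Round a capacity up to a whole number of managed blocks."""
        return math.ceil(actual_kb / block_kb) * block_kb

    print(managed_kb(16, 15.75))  # -> 31.5 KB: a 16 KB data volume write spans two blocks
    print(managed_kb(16, 3.2))    # -> 16.0 KB: exactly five 3.2 KB data pool blocks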
This section provides formulas for calculating your existing or planned TCE
volume capacity and comparing it to the maximum supported capacity for
your particular controller and its cache memory size.
TCE capacity must be calculated for both of the following:
1. The ratio of TCE and SnapShot (if used) capacity to data pool capacity.
Capacity is calculated using the following volumes:
- TCE P-VOLs and S-VOLs
- SnapShot P-VOLs (if used)
- All data pools
2. Concurrent use of TCE and ShadowImage. If SnapShot is used
concurrently also, it is included in this calculation. Capacity is calculated
using the following volumes:
- TCE P-VOLs
- SnapShot P-VOLs
- ShadowImage S-VOLs
NOTE: When SnapShot is enabled, a portion of cache memory is assigned
to it for internal operations. Hitachi recommends that you review the
appendix on SnapShot and Cache Partition Manager in the Hitachi AMS 2000 Family Copy-on-Write SnapShot User Guide.
TCE and SnapShot capacity
Because capacity is managed by the array in blocks of 15.75 KB for data
volumes and 3.2 KB for data pools, the capacity of your array's TCE and
SnapShot volumes must be specially calculated.
All formulas, tables, graphs, and examples pertain to one controller. On
dual-controller arrays, you must perform the calculations for both
controllers.
Managed capacity is calculated here, per controller, using the following
formula:

(Size of all TCE P-VOLs
 + Size of all TCE S-VOLs
 + Size of all SnapShot P-VOLs (if used)) ÷ 5
 + Size of all data pool volumes
 < Maximum supported capacity
Maximum supported capacity is shown in Table 4-15.
Table 4-15: Maximum Supported Capacity per Controller
(TCE P-VOLs and S-VOLs, SnapShot P-VOLs, Data Pools)

Controller Cache Size   AMS2100         AMS2300         AMS2500
2 GB per CTL            1.4 TB          Not supported   Not supported
4 GB per CTL            6.2 TB          6.2 TB          Not supported
8 GB per CTL            Not supported   12.0 TB         12.0 TB
16 GB per CTL           Not supported   Not supported   24.0 TB

NOTE: In a dual-controller array, the calculations must be performed for
both controllers.
Example:
In this example, the array is an AMS 2300 with 4 GB of cache memory per
controller.
1. List the size of each TCE P-VOL and S-VOL on the array, and of each
SnapShot P-VOL (if present), then calculate the managed capacity of
each volume.
2. Add the total managed capacity of P-VOLs and S-VOLs. For example:
Total TCE P-VOL and S-VOL managed capacity = 221 GB
Total SnapShot P-VOL capacity = 63 GB
221 GB + 63 GB = 284 GB
3. For each P-VOL and S-VOL, list the data pools and their sizes. For
example:
TCE P-VOL1 has 1 data pool whose capacity = 70 GB
TCE S-VOL1 has 1 data pool whose capacity = 70 GB
SnapShot P-VOL1 has 1 data pool whose capacity = 30 GB
4. Calculate the managed capacity of each data pool, then add the totals.
For example:
71 GB + 71 GB + 32 GB = 174 GB
5. Calculate total managed capacity using the following equation:
(Total TCE/SnapShot managed capacity ÷ 5, rounded up) + total data
pool managed capacity < maximum supported capacity
For example, divide the total TCE/SnapShot capacity by 5:
284 GB / 5 = 57 GB (rounded up)
6. Add the quotient to the data pool managed capacity. For example:
57 GB + 174 GB = 231 GB
7. Compare the managed capacity to the maximum supported capacity for
an AMS 2300 with 4 GB cache per controller, which is 6.2 TB. The
managed capacity is well below the maximum supported capacity. A
sketch of this check follows.
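The whole check can be sketched as follows (illustrative Python; the
maximum supported capacities come from Table 4-15, and 1 TB is treated
as 1,024 GB, which is an assumption):

    import math

    MAX_SUPPORTED_GB = {  # (array, cache GB per controller) -> maximum capacity in GB
        ("AMS2100", 2): 1.4 * 1024, ("AMS2100", 4): 6.2 * 1024,
        ("AMS2300", 4): 6.2 * 1024, ("AMS2300", 8): 12.0 * 1024,
        ("AMS2500", 8): 12.0 * 1024, ("AMS2500", 16): 24.0 * 1024,
    }

    def within_limit(tce_and_snapshot_gb, data_pool_gb, array, cache_gb):
        """Managed capacity per controller and whether it is under the supported maximum."""
        managed = math.ceil(tce_and_snapshot_gb / 5) + data_pool_gb
        return managed, managed < MAX_SUPPORTED_GB[(array, cache_gb)]

    # The example above: 284 GB of TCE/SnapShot volumes, 174 GB of managed data pools.
    print(within_limit(284, 174, "AMS2300", 4))  # -> (231, True)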
Table 4-20 on page 4-31 through Table 4-23 on page 4-32 show how
closely the capacity of data volumes and data pool volumes must be
balanced. These tables are provided for your information. Also, Figure 4-4
on page 4-32 graphs how the data volume-to-data pool volume relationship
relates to maximum supported capacity.
TCE, SnapShot, ShadowImage concurrent capacity
If ShadowImage is used on the same controller as TCE, capacity for
concurrent use must also be calculated and compared to maximum
supported capacity. If SnapShot is used also, it is included in concurrent-use
calculations.
Concurrent-use capacity is calculated using the following formula:
Maximum TCE supported capacity of P-VOL and S-VOL (TB)
= TCE maximum single capacity
- (Total ShadowImage S-VOL capacity / 51)
- (Total SnapShot P-VOL capacity / 3)
TCE maximum single capacity is shown in Table 4-16.

Table 4-16: TCE Maximum Single Capacity per Controller

Equipment Type   Mounted Memory Capacity   Single Maximum Capacity (TB)
AMS2100          1 GB per CTL              -
AMS2100          2 GB per CTL              15
AMS2100          4 GB per CTL              18
AMS2300          1 GB per CTL              -
AMS2300          2 GB per CTL              14
AMS2300          4 GB per CTL              38
AMS2300          8 GB per CTL              77
AMS2500          2 GB per CTL              10
AMS2500          4 GB per CTL              38
AMS2500          6 GB per CTL              54
AMS2500          8 GB per CTL              70
AMS2500          10 GB per CTL             93
AMS2500          12 GB per CTL             116
AMS2500          16 GB per CTL             140
Example
In this example, the array is an AMS2100 with 2 GB of cache memory per
controller.

Maximum TCE supported capacity of P-VOL and S-VOL (TB)
= TCE maximum single capacity
- (Total ShadowImage S-VOL capacity / 51)
- (Total SnapShot P-VOL capacity / 3)

1. TCE maximum single capacity = 15 TB.
2. Subtract the ShadowImage S-VOL quotient (total ShadowImage S-VOL
capacity ÷ 51); in this example the result is 14,921 GB.
3. Subtract the SnapShot P-VOL quotient (total SnapShot P-VOL capacity
÷ 3), here 268 GB:
14,921 GB - 268 GB = 14,653 GB, the capacity left for TCE
P-VOLs and S-VOLs on the controller. A sketch of this calculation
follows.
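A minimal sketch of the concurrent-use calculation (illustrative Python; the
single maximum capacities come from Table 4-16, and 1 TB is treated as
1,024 GB, which is an assumption):

    TCE_MAX_SINGLE_TB = {  # (array, cache GB per controller) -> TB, from Table 4-16
        ("AMS2100", 2): 15, ("AMS2100", 4): 18,
        ("AMS2300", 2): 14, ("AMS2300", 4): 38, ("AMS2300", 8): 77,
        ("AMS2500", 2): 10, ("AMS2500", 4): 38, ("AMS2500", 6): 54,
        ("AMS2500", 8): 70, ("AMS2500", 10): 93, ("AMS2500", 12): 116,
        ("AMS2500", 16): 140,
    }

    def tce_capacity_left_gb(array, cache_gb, si_svol_gb, ss_pvol_gb):
        """Capacity remaining for TCE P-VOLs and S-VOLs on one controller."""
        single_gb = TCE_MAX_SINGLE_TB[(array, cache_gb)] * 1024
        return single_gb - si_svol_gb / 51 - ss_pvol_gb / 3

    # Hypothetical ShadowImage/SnapShot totals, not the figures from the example above:
    print(round(tce_capacity_left_gb("AMS2100", 2, si_svol_gb=10_000, ss_pvol_gb=1_500)))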
If your system's managed capacity exceeds the maximum supported
capacity, you can do one or more of the following:
•Change the P-VOL size
•Reduce the number of P-VOLs
•Change the data pool size
•Reduce SnapShot and ShadowImage P-VOL/S-VOL size
Maximum Supported Capacity of P-VOL and Data Pool
Table 4-17 to Table 4-19 show the maximum supported capacities of the
P-VOL and the data pool for each cache memory capacity, and the formula
for calculating them.

Table 4-17: Formula for Calculating Maximum Supported
Capacity Value for P-VOL/Data Pool (AMS2100)

Capacity of Cache    Capacity Spared for the Differential Data
Memory Installed     (Shared by SnapShot and TCE)
1 GB/CTL             Not supported.
2 GB/CTL             Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 1.4 TB
4 GB/CTL             Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 6.2 TB

Table 4-18: Formula for Calculating Maximum Supported
Capacity Value for P-VOL/Data Pool (AMS2300)

Capacity of Cache    Capacity Spared for the Differential Data
Memory Installed     (Shared by SnapShot and TCE)
1 GB/CTL             Not supported.
2 GB/CTL             Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 1.4 TB
4 GB/CTL             Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 6.2 TB
8 GB/CTL             Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 12.0 TB

Table 4-19: Formula for Calculating Maximum Supported
Capacity Value for P-VOL/Data Pool (AMS2500)

Capacity of Cache    Capacity Spared for the Differential Data
Memory Installed     (Shared by SnapShot and TCE)
2 GB/CTL             Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 1.4 TB
4 GB/CTL             Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 4.7 TB
6 GB/CTL             Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 9.4 TB
8 GB/CTL             Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 12.0 TB
10 GB/CTL            Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 15.0 TB
12 GB/CTL            Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 18.0 TB
16 GB/CTL            Total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity ÷ 5
                     + Total data pool capacity < 24.0 TB

No SnapShot-TCE cascade configuration
In a configuration with no SnapShot-TCE cascade, include all SnapShot and
TCE volumes in the formulas in Table 4-17 to Table 4-19. For example,
against the 1.4 TB limit:
1 TB × 4 LU ÷ 5 + less than 0.6 TB < 1.4 TB

Figure 4-2: No SnapShot-TCE cascade configuration
SnapShot-TCE cascade configuration
In a SnapShot-TCE cascade configuration, you do not need to include the
TCE volumes in the formulas in Table 4-17 to Table 4-19; include only the
SnapShot volumes. For example, against the 1.4 TB limit:
1 TB × 2 LU ÷ 5 + less than 1 TB < 1.4 TB

Figure 4-3: SnapShot-TCE cascade configuration
Cache limitations on data and data pool volumes
This section provides comparisons in capacity between the data volumes
and the data pool volumes under the limitations of the AMS controllers’
cache memory. The values in the tables and graph in this section are
calculated from the formulas and maximum supported capacity in TCE and
SnapShot capacity on page 4-24.
NOTE: “Data volumes” in this section consist of TCE P-VOLs and S-VOLs
and SnapShot P-VOLs (if used).
Table 4-20: P-VOL to Data Pool Capacity Ratio on
AMS 2100 when Cache Memory is 2 GB per CTL

Ratio of All P-VOL Capacity   All P-VOL Capacity to All
to All Data Pool Capacity     Data Pool Capacity (TB)
1:0.1                         4.6 : 0.4
1:0.3                         2.8 : 0.8
1:0.5                         2.0 : 1.0

Table 4-21: P-VOL to Data Pool Capacity Ratio on
AMS 2300/2100 when Cache Memory is 4 GB per CTL

Ratio of All P-VOL Capacity   All P-VOL Capacity to All
to All Data Pool Capacity     Data Pool Capacity (TB)
1:0.1                         20.6 : 2.0
1:0.3                         12.4 : 3.7
1:0.5                         8.8 : 4.4

Table 4-22: P-VOL to Data Pool Capacity Ratio on
AMS 2500/2300 when Cache Memory is 8 GB per CTL

Ratio of All P-VOL Capacity   All P-VOL Capacity to All
to All Data Pool Capacity     Data Pool Capacity (TB)
1:0.1                         40.0 : 4.0
1:0.3                         24.0 : 7.2
1:0.5                         17.1 : 8.5

Table 4-23: P-VOL to Data Pool Capacity Ratio on
AMS 2500 when Cache Memory is 16 GB per CTL

Ratio of All P-VOL Capacity   All P-VOL Capacity to All
to All Data Pool Capacity     Data Pool Capacity (TB)
1:0.1                         80.0 : 8.0
1:0.3                         48.0 : 14.4
1:0.5                         34.2 : 17.1
Figure 4-4: Relation of Data Volume, Data Pool Capacities to Cache Size
per Controller
Cautions for Reconfiguring the Cache Memory
The following cautions apply to the cache memory reconfiguration
processing that occurs during installation, uninstallation, or
disabling/enabling operations when the array firmware version is 0897/A
or later.
•I/O processing performance
While part of the user data area in the cache memory is released and
the memory reconfiguration of the management information storage
area for TCE is performed, I/O performance for sequential write
patterns deteriorates approximately 20% to 30%. For other patterns,
I/O performance deteriorates less than 10%.
•Time-out for memory reconfiguration processing
If the I/O inflow is large, saving the cache data to the drives takes time,
and the processing may time out in 10 to 15 minutes (the internal
processing time is 10 minutes). In this case, the processing can be
continued by executing it again when the I/O inflow is small.
•Inhibiting the memory reconfiguration processing while executing other
functions
Under the following conditions, the memory reconfiguration processing
is inhibited because the amount of data in the cache would increase.
Perform the memory reconfiguration processing again after the
operation of the other function completes or the failure is recovered.
- A cache partition other than the master partitions (partition 0 and
partition 1) is in use
- A cache partition is being changed
- A DP pool is being optimized
- A RAID group is being expanded
- LU ownership is being changed
- A Cache Residency LU operation is in progress
- A TCE remote path and/or pair operation is in progress
- A SnapShot logical unit or data pool operation is in progress
- A DMLU operation is in progress
- A logical unit is being formatted
- A logical unit is undergoing parity correction
- The IP address for maintenance or management is being changed
- SSL information is being changed
- The array firmware is being updated
- The array is being powered off
- A spin-down or spin-up by the Power Saving feature is in progress
•Inhibiting the operation of other functions during memory
reconfiguration
The following operations are inhibited while memory reconfiguration is
in progress, and also when the memory reconfiguration processing has
failed partway due to factors other than a time-out:
- RAID group expansion
- Replication pair operations
- Dynamic Provisioning operations
- Cache Residency Manager setting operations
- Logical unit formatting
- Logical unit parity correction
- Cache Partition Manager operations
- Modular Volume Migration operations
- Array firmware updates
- Installing, uninstalling, enabling, or disabling of extra-cost options
- Logical unit operations
- Logical unit unification
•Un-installation/invalidation before the memory reconfiguration
When TCE and SnapShot are uninstalled or disabled before the memory
reconfiguration, only the status of the option operated last is displayed
in the Reconfigure Memory Status.
Table 4-24 shows the memory reconfiguring statuses displayed in
Navigator 2.
Table 4-24: Memory Reconfiguring Statuses

Status: Normal
Meaning: The memory reconfiguration processing completed normally.

Status: Pending
Meaning: Waiting for memory reconfiguration. Even if the memory
reconfiguration instruction is executed and a message indicating an
inoperable status is output, the status changes to Pending because the
instruction has been received.

Status: Reconfiguring (nn%)
Meaning: Memory reconfiguration is operating; (nn%) shows the progress
as a percentage.

Status: N/A
Meaning: Out of the memory reconfiguration target.

Status: Failed (Code-nn: error message)
Meaning: Memory reconfiguration failed because a failure or other problem
occurred inside the array. Recover according to the following
troubleshooting for each error code and error message. If it still fails, call
the Support Center.
- Failed (Code-01: Time out): Code-01 occurs when access from the host
is frequent or the amount of unwritten data in the cache memory is
large. Execute the memory reconfiguration operation again when access
from the host decreases.
- Failed (Code-02: Failure of Reconfigure Memory): Code-02 occurs when
drive restoration processing starts in the background. Execute the
memory reconfiguration operation again after the drive restoration
processing is completed.
- Failed (Code-03: Failure of Reconfigure Memory): Code-03 occurs when
copying the management information in the cache memory fails.
Controller replacement is required. Call the Support Center.
- Failed (Code-04: Failure of Reconfigure Memory): Code-04 occurs when
unwritten data in the cache memory cannot be saved to the drives. A
restart of the array is required.

NOTE: If the firmware version of the array is less than 0897/A, memory
reconfiguration without a restart of the array is unsupported.
5
Requirements and
specifications
This chapter provides TCE system requirements and
specifications. Cautions and restrictions are also provided.
TCE system requirements
TCE system specifications
TCE system requirements
Table 5-1 describes the minimum TCE requirements.
Table 5-1: TCE Requirements

AMS firmware version: Version 0832/B or later is required for AMS 2100 or
AMS 2300 arrays with hardware Rev. 0100. Version 0840/A or later is
required for AMS2500 arrays with hardware Rev. 0100. Version 0890/A or
later is required for AMS2100, 2300, or 2500 arrays with hardware
Rev. 0200. Firmware version 0890/A or later is required on both the local
and remote arrays when connecting Rev. 0200 hardware.
Navigator 2 version: Version 3.21 or higher is required on the management
PC for AMS 2100 or 2300 arrays where the hardware Rev. is 0100.
Version 4.00 or higher is required for an AMS2500 array where the
hardware Rev. is 0100. Version 9.00 or higher is required for AMS 2100,
2300, or 2500 arrays where the hardware Rev. is 0200.
CCI version: 01-21-03/06 or later is required for a Windows host, only when
CCI is used for the operation of TCE.
Number of AMS arrays: 2
Supported array AMS models: AMS2100/2300/2500
TCE license keys: One per array.
Number of controllers: 2 (dual configuration)
Volume size: S-VOL block count = P-VOL block count.
Command devices per array (CCI only): Max. 128. The command device is
required only when CCI is used. The command device volume size must be
greater than or equal to 33 MB.
Displaying the hardware revision number
The hardware revision (Rev.) can be displayed when an individual array is
selected from the Arrays list using Navigator 2, version 9.00 or later.
TCE system specifications
Table 5-2 describes the TCE specifications.
Table 5-2: TCE Specifications

User interface: Navigator 2 GUI, Navigator 2 CLI, or CCI.
Controller configuration: A dual controller configuration is required.
Cache memory: AMS2100: 2 GB/controller. AMS2300: 2 or 4 GB/controller.
AMS2500: 2, 4, 6, or 8 GB/controller.
Host interface: AMS 2100, 2300, and 2500: Fibre Channel or iSCSI (cannot
mix).
Remote path: One remote path per controller is required (two in total for a
pair). The interface type of multiple remote paths between local and remote
arrays must be the same.
Number of hosts when remote path is iSCSI: Maximum number of
connectable hosts per port: 239.
Data pool: Recommended minimum size: 20 GB. Maximum number of data
pools per array: 64. Maximum number of LUs that can be assigned to one
data pool: 64. Maximum number of LUs that can be used as data pools:
128. When the array firmware version is less than 0852/A, a unified LU
cannot be assigned to a data pool; if 0852/A or higher, a unified LU can be
assigned to a data pool. Data pools must be set up for both the P-VOL and
S-VOL.
Port modes: Initiator and target intermix mode. One port may be used for
host I/O and TCE at the same time.
Bandwidth: Minimum: 1.5 Mbps. Recommended: 100 Mbps or more. When
low bandwidth is used, the time limit for execution of CCI commands and
host I/O must be extended, and response time for CCI commands may take
several seconds.
License: Key is required.
Command device (CCI only): Required only when CCI is used (see
Table 5-1).
DMLU: Required. If setting up two DMLUs on an array, they should belong
to different RAID groups.
Maximum number of LUs that can be used for TCE pairs: AMS2100: 1,022.
AMS2300: 2,046. AMS2500: 2,046. The maximum when different types of
arrays are used for TCE (for example, AMS500 and AMS2100) is that of the
array with the smallest maximum.
Pair structure: One S-VOL per P-VOL.
Supported RAID level: RAID 1 (1D+1D), RAID 5 (2D+1P to 15D+1P),
RAID 1+0 (2D+2D to 8D+8D), RAID 6 (2D+2P to 28D+2P).
Combination of RAID levels: The local RAID level can be different from the
remote level. The number of data disks does not have to be the same.
Size of LU: The LU size must always be P-VOL = S-VOL. The maximum LU
size is 128 TB.
Types of drive for P-VOL, S-VOL, and data pool: If the drive types are
supported by the array, they can be set for a P-VOL, an S-VOL, and data
pools. SAS, SAS7.2K, SSD, or SAS (SED) drives are recommended. Set all
configured LUs using the same drive type.
Supported capacity value of P-VOL and S-VOL: Capacity is limited. See
Maximum supported capacity on page 4-23.
Copy pace: User-adjustable rate at which data is copied to the remote
array. See the copy pace step on page 7-6 for more information.
Table 5-2: TCE Specifications (Continued)

Consistency Group (CTG):
• Maximum allowed: 16
• Maximum # of pairs allowed per consistency group:
- AMS2100: 1,022
- AMS2300: 2,046
- AMS2500: 2,046

Management of LUs while using TCE: A TCE pair must be deleted before the following operations:
• Deleting the pair’s RAID group, LU, or data pool
• Formatting an LU in the pair
• Growing or shrinking an LU in the pair

Pair creation using unified LUs:
• A TCE pair can be created using a unified LU.
- When the array firmware is less than 0852/A, the size of each LU making up the unified LU must be 1 GB or larger.
- When the array firmware is 0852/A or later, there are no restrictions on the LUs making up the unified LU.
• LUs that are already in a P-VOL or S-VOL cannot be unified.
• Unified LUs that are in a P-VOL or S-VOL cannot be released.

Restriction during RAID group expansion: A RAID group in which a TCE P-VOL or data pool exists can be expanded only when pair status is Simplex or Split. If the TCE data pool is shared with SnapShot, the SnapShot pairs must be in Simplex or Paired status.

Unified LU for data pool: Not allowed.

Differential data: When pair status is Split, data sent to the P-VOL and S-VOL is managed as differential data.

Host access to a data pool: A data pool LU is hidden from a host.

Expansion of data pool capacity:
• Data pools can be expanded by adding an LU.
• Mixing SAS/SSD and SATA drives in a data pool is not supported. Set all configured LUs using the same drive type.

Reduction of data pool capacity: Yes. The pairs associated with a data pool must be deleted before the data pool can be reduced.

Failures:
• When the copy operation from P-VOL to S-VOL fails, TCE suspends the pair (Failure). Because TCE copies data to the remote S-VOL regularly, data is restored to the S-VOL from the update immediately before the occurrence of the failure.
• A drive failure does not affect TCE pair status because of the RAID architecture.

Data pool usage at 100%: When data pool usage is 100%, the status of any pair using the pool becomes Pool Full. P-VOL data cannot be updated to the S-VOL.

Array restart at TCE installation: The array is restarted after installation to set up the data pool, unless the data pool is also used by SnapShot; in that case there is no restart.

TCE use with TrueCopy: Not allowed.

TCE use with SnapShot: SnapShot can be cascaded with TCE or used separately. Only a SnapShot P-VOL can be cascaded with TCE.

TCE use with ShadowImage: Although TCE can be used at the same time as a ShadowImage system, it cannot be cascaded with ShadowImage.

TCE use with LUN Expansion: When the firmware version is less than 0852/A, a TCE pair cannot be created from unified LUs that include an LU of 1 GB or less capacity.
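Because differential data accumulates in the data pool while a pair is Split, and a pair whose pool reaches 100% usage goes to Pool Full, it helps to estimate pool headroom before a planned split. The model below is a rough, hypothetical sketch (the function and the 50% unique-write assumption are ours, not a product formula); real consumption depends heavily on workload locality:

    # Hypothetical data-pool headroom estimate (not a product formula).

    def hours_until_pool_full(pool_free_gib: float,
                              write_mib_per_s: float,
                              unique_fraction: float = 0.5) -> float:
        """Hours of Split time before the data pool reaches 100% usage,
        assuming differential data accrues at the unique-write rate."""
        unique_mib_per_s = write_mib_per_s * unique_fraction
        seconds = (pool_free_gib * 1024) / unique_mib_per_s
        return seconds / 3600

    # 20 GiB free (the recommended minimum pool size), 5 MiB/s of host
    # writes, half of them to blocks not yet copied into the pool:
    print(f"{hours_until_pool_full(20, 5):.1f} h")   # ~2.3 h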
Table 5-2: TCE Specifications (Continued)

TCE use with Data Retention Utility: Allowed.
• When S-VOL Disable is set for an LU, a pair cannot be created using the LU as the S-VOL.
• S-VOL Disable can be set for an LU that is currently an S-VOL, if pair status is Split.

TCE use with Cache Residency Manager: Allowed. However, an LU specified by Cache Residency Manager cannot be used as a P-VOL, S-VOL, or data pool.

TCE use with Cache Partition Manager:
• TCE can be used together with Cache Partition Manager.
• Make the segment size of LUs to be used as a TCE data pool no larger than the default (16 KB).
• See Appendix D, Installing TCE when Cache Partition Manager is in use, for details on initialization.

TCE use with SNMP Agent: Allowed. A trap is transmitted for the following:
• Remote path failure.
• Threshold value of the data pool is exceeded.
• Actual cycle time exceeds the default or user-specified value.
• Pair status changes to:
- Pool Full
- Failure
- Inconsistent, because the data pool is full or because of a failure.

TCE use with Volume Migration: Allowed. However, a Volume Migration P-VOL, S-VOL, or reserved LU cannot be used as a TCE P-VOL or S-VOL.

TCE use with Power Saving: Allowed; however, pair operations are limited to split and delete.

Reduction of memory: The memory cannot be reduced while the ShadowImage, SnapShot, TCE, or Volume Migration function is validated. Make the reduction after invalidating the function.

Load balancing: Not supported.

LU assigned to data pool: An LU consisting of SAS or SSD drives and an LU with a SATA drive cannot coexist in a data pool. Set all configured LUs using the same drive type.

Extent of influence of TCE installation: When the firmware version of the array is less than 0897/A, you must restart the array to ensure the data pool resource. However, the restart is not required when the data pool is already used by SnapShot, because TCE and SnapShot share the data pool.

Remote Copy over iSCSI in a WAN environment: We recommend using TCE in a WAN environment with an MTU of 1500 or more. However, if TCE must be used in a WAN environment with an MTU of less than 1500, change the maximum segment size (MSS) of the WAN router to a value less than 1500, and then create the remote path. The data length transmitted by TCE then changes to the specified value of less than 1500. If you create the remote path without changing the MSS value, or if you change the MSS value back without re-creating the remote path, a data transfer error occurs because TCE transmits data at MTU 1500. To change the MSS value, make a request to the customer or the WAN router provider.
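The MSS value to request follows from simple arithmetic: the TCP maximum segment size is the path MTU minus the IPv4 and TCP headers (20 bytes each, without options). A small illustrative sketch (the helper is ours, not a product tool):

    # MSS follows from the path MTU: MSS = MTU - IP header - TCP header.
    # 20-byte headers assume IPv4 and TCP without options.

    IP_HEADER = 20
    TCP_HEADER = 20

    def mss_for_mtu(path_mtu: int) -> int:
        """TCP maximum segment size that fits in a given path MTU."""
        return path_mtu - IP_HEADER - TCP_HEADER

    print(mss_for_mtu(1500))   # 1460 -- standard Ethernet path
    print(mss_for_mtu(1400))   # 1360 -- e.g. a WAN tunnel that costs 100 bytes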
6
Installation and setup
This chapter provides TCE installation and setup procedures using
the Navigator 2 GUI. Instructions for CLI and CCI can be found in
the appendixes.
Installation procedures
Setup procedures
Installation procedures
The following sections provide instructions for installing, enabling/disabling,
and uninstalling TCE. Please note the following:
•TCE must be installed on the local and remote arrays.
•Before proceeding, verify that the array is operating in a normal state. Installation and uninstallation cannot be performed if a failure has occurred.
In cases where the DKN-200-NGW1 (NAS unit) is connected to the disk array, check the following items in advance.
1. Before this operation, execute item 2 (Correspondence when connecting the NAS unit) if all three of the following apply to the disk array:
- A NAS unit is connected to the disk array. Ask the disk array administrator to confirm whether the NAS unit is connected.
- The NAS unit is in operation. Ask the NAS unit administrator to confirm whether the NAS service is operating.
- No failure has occurred on the NAS unit. Ask the NAS unit administrator to check whether a failure has occurred, using the NAS administration software, the NAS Manager GUI, the List of RAS Information, and so on. In case of failure, execute the maintenance operation together with the NAS maintenance personnel.
2. Correspondence when connecting the NAS unit:
- If the NAS unit is connected, ask the NAS unit administrator to terminate the NAS OS and perform a planned shutdown of the NAS unit.
3. Points to check after completing this operation:
- Ask the NAS unit administrator to reboot the NAS unit. After rebooting, ask the NAS unit administrator to refer to “Recovering from FC path errors” in the Hitachi NAS Manager User's Guide, check the status of the Fibre Channel path (FC path for short), and recover the FC path if it is in a failure status.
- In addition, if there are maintenance personnel for the NAS unit, ask them to reboot the NAS unit.
Installing TCE
Prerequisites
•A key code or key file is required to install or uninstall TCE. If you do
not have the key file or code, you can obtain it from the download page
on the HDS Support Portal, http://support.hds.com.
•The array may require a restart at the end of the installation procedure.
If SnapShot is enabled at the time, no restart is necessary.
•If restart is required, it can be done either when prompted or at a later
time.
•TCE cannot be installed if more than 239 hosts are connected to a port
on the array.
To install TCE without rebooting
1. In the Navigator 2 GUI, click the check box for the array where you want
to install TCE, then click Show & Configure Array.
2. Under Common Array Tasks, click Install License. The Install License
screen displays.
3. Select the Key File or Key Code radio button, then enter the file name
or key code. You may browse for the Key File.
4. Click OK.
5. Click Confirm on the screen requesting a confirmation to install the TCE
option.
6. Click Reconfigure Memory to install the TCE option.
7. Click Close. The Licenses list appears.
8. Confirm that TC-EXTENDED appears in the Name column of the Installed Storage Features list, and that the Reconfigure Memory Status column shows Pending.
9. Check the check box of TC-EXTENDED, and click Reconfigure Memory.
10. Click Confirm in the Reconfigure Memory menu, and then click Close. The Licenses list appears.
11. Confirm that the Reconfigure Memory Status is Reconfiguring (nn%) or Normal.
12. When the Reconfigure Memory Status is Reconfiguring (nn%), wait for a while, click Refresh Information, and confirm that the Reconfigure Memory Status changes to Normal.
13. When the Reconfigure Memory Status is Failed (Code-01: Timeout), click Install License and re-execute steps 6 to 13. Code-01 occurs when access from the host is frequent or the amount of unwritten data in the cache memory is large.
14. When the Reconfigure Memory Status is Failed (Code-02: Failure of Reconfigure Memory), perform steps 9 to 13. Code-02 occurs when drive restoration processing starts in the background.
15. When the Reconfigure Memory Status is Failed (Code-04: Failure of Reconfigure Memory), click Resource in the Explorer menu, then Arrays, to return to the Arrays screen. Code-04 occurs when the unwritten data in the cache memory cannot be saved to the drive.
16. Select the array in which you will install TCE, and click Reboot Array.
17. When the Reconfigure Memory Status is Failed (Code-03: Failure of Reconfigure Memory), ask the Support Center to solve the problem. Code-03 occurs when the copy of the management information in the cache memory fails.
18. Installation of TCE is now complete.
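The appendixes describe the CLI path for the same operation. As a rough sketch only: assuming the Navigator 2 CLI is installed, its auopt command is on the PATH, and the array is already registered as a unit, the license key code could be applied as below. The unit name and key code are placeholders, and the CLI appendix remains the authoritative reference for syntax.

    # Rough sketch: unlocking the TCE license from the Navigator 2 CLI.
    # Assumptions: the SNM2 CLI (auopt) is installed and on PATH, and the
    # array is registered under the placeholder unit name used here.
    import subprocess

    UNIT = "local_ams"              # placeholder unit name for the array
    KEY_CODE = "XXXX-XXXX-XXXX"     # placeholder license key code

    result = subprocess.run(
        ["auopt", "-unit", UNIT, "-lock", "off", "-keycode", KEY_CODE],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)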
To install TCE with rebooting
1. In the Navigator 2 GUI, click the check box for the array where you want
to install TCE, then click Show & Configure Array.
2. Under Common Array Tasks, click Install License. The Install
License screen displays.
3. Select the Key File or Key Code radio button, then enter the file name
or key code. You may browse for the Key File.
4. Click OK.
5. Click Confirm on the screen that appears, requesting confirmation to install the TCE option.
6. Click Reboot Array to install the TCE option.
7. A message appears confirming that this optional feature is installed.
Mark the check box and click Reboot Array.
The restart is not required at this time if it will be performed later, when the function is validated. However, if the restart needed to ensure the resource for the data pool in the cache memory was already performed before TCE was installed (for example, because SnapShot was installed first), the dialog box asking whether to restart is not displayed. When no restart is needed, the installation of the TCE function is complete.
If a Power Saving spin-down instruction is received immediately after the array restarts, the spin-down may fail. When the spin-down fails, perform the spin-down again.