User Capacity (Logical Capacity)............................................................................................................................19
Hot Spares.............................................................................................................................................................24
Data Protection............................................................................................................................. 26
Data Block Guard ..................................................................................................................................................26
Disk Drive Patrol....................................................................................................................................................28
Fast Recovery ........................................................................................................................................................31
User Authentication ..............................................................................................................................................52
Power Consumption Visualization .........................................................................................................................61
Device Time Synchronization.................................................................................................................................68
Power Control ............................................................................................................................... 69
Power Synchronized Unit.......................................................................................................................................69
Remote Power Operation (Wake On LAN) .............................................................................................................70
Server Linkage Functions .............................................................................................................. 87
Oracle VM Linkage ................................................................................................................................................87
Microsoft Linkage..................................................................................................................................................92
LAN Connection .......................................................................................................................... 107
LAN for Operation Management (MNT Port) .......................................................................................................107
LAN for Remote Support (RMT Port)....................................................................................................................109
LAN Control (Master CM/Slave CM)......................................................................................................................112
Network Communication Protocols .....................................................................................................................114
Power Supply Connection
Input Power Supply Lines ....................................................................................................................................116
List of Supported Protocols.......................................................................................................... 143
Target Pool for Each Function/Volume List
Target RAID Groups/Pools of Each Function.........................................................................................................144
Target Volumes of Each Function ........................................................................................................................144
Figure 8 Example of a RAID Group .........................................................................................................................21
Figure 10 Hot Spares................................................................................................................................................24
Figure 11 Data Block Guard......................................................................................................................................26
Figure 12 Disk Drive Patrol.......................................................................................................................................28
Figure 13 Redundant Copy Function ........................................................................................................................29
Figure 15 Fast Recovery ...........................................................................................................................................31
Figure 39 Device Time Synchronization....................................................................................................................68
Figure 40 Power Synchronized Unit..........................................................................................................................69
Figure 41 Wake On LAN ...........................................................................................................................................70
Figure 42 Example of Advanced Copy ......................................................................................................................71
Figure 45 Targets for the Multi-Copy Function .........................................................................................................75
Figure 56 Microsoft Linkage.....................................................................................................................................92
Figure 58 RAID Configuration Example (When 12 SSDs Are Installed) .....................................................................99
Figure 59 RAID Configuration Example (When 15 SAS Disks Are Installed) ............................................................101
Figure 60 Single Path Connection (When a SAN Connection Is Used — Direct Connection) .....................................105
Figure 61 Single Path Connection (When a SAN Connection Is Used — Switch Connection) ....................................105
Figure 62 Multipath Connection (When a SAN Connection Is Used — Basic Connection Configuration)...................106
Figure 63 Multipath Connection (When a SAN Connection Is Used — Switch Connection).......................................106
Figure 64 Connection Example without a Dedicated Remote Support Port ............................................................108
Figure 65 Connection Example When the IP Address of the Slave CM Is Set (and a Dedicated Remote Support Port Is
Not Used)...............................................................................................................................................108
Figure 66 Overview of the AIS Connect Function ....................................................................................................109
Table 3 Formula for Calculating User Capacity for Each RAID Level .......................................................................19
Table 4 User Capacity per Drive.............................................................................................................................20
Table 5 RAID Group Types and Usage....................................................................................................................21
Table 6 Recommended Number of Drives per RAID Group ....................................................................................22
Table 7 Volumes That Can Be Created...................................................................................................................23
Table 8 Hot Spare Selection Criteria .....................................................................................................................25
Table 9 TPP Maximum Number and Capacity........................................................................................................37
Table 10 Chunk Size According to the Configured TPP Capacity...............................................................................37
Table 11 Levels and Configurations for a RAID Group That Can Be Registered in a TPP...........................................37
Table 14 Optimization of Volume Configurations....................................................................................................42
Table 15 Available Functions for Default Roles .......................................................................................................51
Table 16 Client Public Key (SSH Authentication).....................................................................................................52
Table 21 Control Software (Advanced Copy) ...........................................................................................................71
Table 22 List of Functions (Copy Methods) .............................................................................................................72
Table 23 Characteristics of SnapOPC/SnapOPC+ Operations with Each Type of Copy Destination Logical Volume .......
Table 24 Available Cascade Copy Combinations (When a Cascade Copy Performs Session 1 Followed by Session 2) ..
Table 25 Available Cascade Copy Combinations (When a Cascade Copy Performs Session 2 Followed by Session 1) ..
Table 26 Available Stripe Depth..............................................................................................................................80
Table 27 Specifications for Paths and Volumes between the Local Storage System and the External Storage System
Table 28 Volume Types That Can Be Used with Veeam Storage Integration............................................................91
Table 29 Guideline for the Number of Drives and User Capacities (When 1.92TB SSDs Are Installed) .....................96
Table 30 Guideline for the Number of Drives and User Capacities (When 1.2TB SAS Disks Are Installed)................99
Table 32 LAN Port Availability...............................................................................................................................114
Table 34 Number of Installable Drives..................................................................................................................133
Table 35 Hot Swap and Hot Expansion Availability for Components (ETERNUS DX60 S4) .....................................140
Table 36 Hot Swap and Hot Expansion Availability for Components (ETERNUS DX60 S3) .....................................141
Table 37 List of Supported Protocols.....................................................................................................................143
Table 38 Combinations of Functions That Can Be Executed Simultaneously (1/2) ................................................146
Table 39 Combinations of Functions That Can Be Executed Simultaneously (2/2) ................................................146
Preface

Warning Signs
Warning signs are shown throughout this manual in order to prevent injury to the user and/or material damage.
These signs are composed of a symbol and a message describing the recommended level of caution. The following explains the symbol, its level of caution, and its meaning as used in this manual.

WARNING: This symbol indicates the possibility of serious or fatal injury if the ETERNUS DX is not used properly.
CAUTION: This symbol indicates the possibility of minor or moderate personal injury, as well as damage to the ETERNUS DX and/or to other users and their property, if the ETERNUS DX is not used properly.
IMPORTANT: This symbol indicates IMPORTANT information for the user to note when using the ETERNUS DX.

The following symbols are used to indicate the type of warnings or cautions being described.
- The triangle emphasizes the urgency of the WARNING and CAUTION contents. Inside the triangle and above it are details concerning the symbol (e.g. Electrical Shock).
- The barred "Do Not..." circle warns against certain actions. The action which must be avoided is both illustrated inside the barred circle and written above it (e.g. No Disassembly).
- The black "Must Do..." circle indicates actions that must be taken. The required action is both illustrated inside the black disk and written above it (e.g. Unplug).

How Warnings are Presented in This Manual
A message is written beside the symbol indicating the caution level. This message is marked with a vertical ribbon in the left margin, to distinguish this warning from ordinary descriptions.

Example warning:

CAUTION
To avoid damaging the ETERNUS storage system, pay attention to the following points when cleaning the ETERNUS storage system:
- Make sure to disconnect the power when cleaning.
- Be careful that no liquid seeps into the ETERNUS storage system when using cleaners, etc.
- Do not use alcohol or other solvents to clean the ETERNUS storage system.
A function that migrates data between ETERNUS storage systems.
Non-disruptive data relocation
A function that migrates data between ETERNUS storage systems without stopping the business server.
Information linkage (function linkage with servers)
Functions that cooperate with a server to improve performance
in a virtualized environment. Beneficial effects such as centralized management of the entire storage system and a reduction
of the load on servers can be realized.
Simple configuration
A wizard that simplifies the configuration of Thin Provisioning.
(Concept figure labels: parity P(A,B,C,D) is generated for data A to D, P(E,F,G,H) for data E to H, P(I,J,K,L) for data I to L, and P(M,N,O,P) for data M to P.)
RAID Functions
● RAID1+0 (Striping of Pairs of Drives for Mirroring)
RAID1+0 combines the high I/O performance of RAID0 (striping) with the reliability of RAID1 (mirroring).
Figure 3 RAID1+0 Concept
● RAID5 (Striping with Distributed Parity)
Data is divided into blocks and allocated across multiple drives together with parity information created from
the data in order to ensure the redundancy of the data.
● RAID5+0 (Double Striping with Distributed Parity)
Multiple RAID5 volumes are RAID0 striped. For large capacity configurations, RAID5+0 provides better performance, better reliability, and shorter rebuilding times than RAID5.
(Concept figure labels: parity blocks P1(A,B,C,D) and P2(A,B,C,D) are generated for data A to D, P1(E,F,G,H) and P2(E,F,G,H) for data E to H, P1(I,J,K,L) and P2(I,J,K,L) for data I to L, and P1(M,N,O,P) and P2(M,N,O,P) for data M to P.)
● RAID6 (Striping with Double Distributed Parity)
Allocating two different parities on different drives (double parity) makes it possible to recover from up to two
drive failures.
(Concept figure labels: parity blocks P1(A,B,C) and P2(A,B,C) are generated for data A, B, C; P1(D,E,F) and P2(D,E,F) for data D, E, F; and so on through P1(V,W,X) and P2(V,W,X) for data V, W, X. Legend: Fast Recovery Hot Spare: FHS.)
● RAID6-FR (Striping with Double Distributed Parity and a High Speed Rebuild Function)
Distributing multiple data groups and reserved space equivalent to hot spares across the configuration drives makes it possible to recover from up to two drive failures. RAID6-FR requires less rebuild time than RAID6.
Figure 7 RAID6-FR Concept
■ Reliability, Performance, and Capacity for Each RAID Level
Table 2 compares the reliability, performance, and capacity of each RAID level.
Table 2 RAID Level Comparison
RAID level | Reliability | Performance (*1) | Capacity
RAID0 | × | ○ | ◎
RAID1 | ◎ | ○ | △
RAID1+0 | ◎ | ○ | △
RAID5 | ○ | ○ | ○
RAID5+0 | ○ | ○ | ○
RAID6 | ◎ | ○ | ○
RAID6-FR | ◎ | ○ | ○
◎: Very good, ○: Good, △: Reasonable, ×: Poor
*1: Performance may differ according to the number of drives and the processing method from the host.
Select the appropriate RAID level according to the usage.
Recommended RAID levels are RAID1, RAID1+0, RAID5, RAID5+0, RAID6, and RAID6-FR.
• When importance is placed upon read and write performance, a RAID1+0 configuration is recommended.
• For read only file servers and backup servers, RAID5, RAID5+0, RAID6, or RAID6-FR can also be used for higher efficiency. However, if the drive fails, note that data restoration from parities and the rebuilding process may result in a loss in performance.
• For SSDs, a RAID5 configuration or a fault tolerant enhanced RAID6 configuration is recommended because SSDs operate much faster than other types of drive. For large capacity SSDs, using a RAID6-FR configuration, which provides excellent performance for the rebuild process, is recommended.
• Using a RAID6 or RAID6-FR configuration is recommended when Nearline SAS disks that have 6TB or more are used. For details on the RAID levels that can be configured with Nearline SAS disks that have 6TB or more, refer to "Supported RAID" (page 13).
User Capacity (Logical Capacity)
User Capacity for Each RAID Level
The user capacity depends on the capacity of drives that configure a RAID group and the RAID level.
Table 3 shows the formula for calculating the user capacity for each RAID level.
Table 3 Formula for Calculating User Capacity for Each RAID Level
RAID level | Formula for user capacity computation
RAID0 | Drive capacity × Number of drives
RAID1 | Drive capacity × Number of drives ÷ 2
RAID1+0 | Drive capacity × Number of drives ÷ 2
RAID5 | Drive capacity × (Number of drives - 1)
RAID5+0 | Drive capacity × (Number of drives - 2)
RAID6 | Drive capacity × (Number of drives - 2)
RAID6-FR | Drive capacity × (Number of drives - (2 × N) - Number of hot spares) (*1)
*1: "N" is the number of RAID6 configuration sets. For example, if a RAID6 group is configured with "(3D+2P)×2+1HS", N is "2".
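As a worked illustration of the Table 3 formulas, the following sketch (a hypothetical helper written for this guide, not an ETERNUS tool) computes the user capacity of a RAID group from a per-drive user capacity and a drive count. The drive values come from Table 4, and the RAID6-FR example uses the "(3D+2P)×2+1HS" configuration from the footnote above.

```python
# Hypothetical helper that applies the Table 3 formulas; not part of any ETERNUS software.

def user_capacity_mb(raid_level, drive_capacity_mb, num_drives, n_sets=1, hot_spares=0):
    """Return the user capacity (MB) of one RAID group, per Table 3."""
    if raid_level == "RAID0":
        return drive_capacity_mb * num_drives
    if raid_level in ("RAID1", "RAID1+0"):
        return drive_capacity_mb * num_drives // 2
    if raid_level == "RAID5":
        return drive_capacity_mb * (num_drives - 1)
    if raid_level in ("RAID5+0", "RAID6"):
        return drive_capacity_mb * (num_drives - 2)
    if raid_level == "RAID6-FR":
        # "N" is the number of RAID6 configuration sets, e.g. 2 for (3D+2P)x2+1HS.
        return drive_capacity_mb * (num_drives - 2 * n_sets - hot_spares)
    raise ValueError(f"unknown RAID level: {raid_level}")

# RAID5(3D+1P) with 1.2TB SAS disks (1,119,232MB each, Table 4):
print(user_capacity_mb("RAID5", 1_119_232, 4))                               # 3,357,696MB

# RAID6-FR (3D+2P)x2+1HS uses 11 drives in total (N = 2, one hot spare equivalent):
print(user_capacity_mb("RAID6-FR", 1_119_232, 11, n_sets=2, hot_spares=1))   # 6,715,392MB

# RAID6-FR [13D+2P]x2+1HS with 14TB Nearline SAS disks (13,079,296MB each) gives
# 26 x 13,079,296MB = 340,061,696MB, which matches the approximately 324TB maximum
# RAID group capacity noted later for Table 5.
print(user_capacity_mb("RAID6-FR", 13_079_296, 31, n_sets=2, hot_spares=1))  # 340,061,696MB
```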
The supported drives vary between the ETERNUS DX60 S4 and the ETERNUS DX60 S3. For details about drives,
refer to "Overview" of the currently used storage systems.
Table 4 User Capacity per Drive
Product name (*1) | User capacity
200GB SSD | 186,624MB
400GB SSD | 374,528MB
800GB SSD | 750,080MB
960GB SSD | 914,432MB
1.6TB SSD | 1,501,440MB
1.92TB SSD | 1,830,144MB
3.84TB SSD | 3,661,568MB
300GB SAS disk | 279,040MB
600GB SAS disk | 559,104MB
900GB SAS disk | 839,168MB
1.2TB SAS disk | 1,119,232MB
1.8TB SAS disk | 1,679,360MB
2.4TB SAS disk | 2,239,744MB
1TB Nearline SAS disk | 937,728MB
2TB Nearline SAS disk | 1,866,240MB
4TB Nearline SAS disk | 3,733,504MB
6TB Nearline SAS disk (*2) | 5,601,024MB
8TB Nearline SAS disk (*2) | 7,468,288MB
10TB Nearline SAS disk (*2) | 9,341,696MB
12TB Nearline SAS disk (*2) | 11,210,496MB
14TB Nearline SAS disk (*2) | 13,079,296MB
*1: The capacity in the product names for the drives is based on the assumption that 1MB = 1,000² bytes, while the user capacity for each drive is based on the assumption that 1MB = 1,024² bytes. Furthermore, OS file management overhead will reduce the actual usable capacity.
The user capacity is constant regardless of the drive size (2.5"/3.5") or the SSD type (Value SSD and MLC SSD).
*2: For details on the RAID levels that can be configured with Nearline SAS disks that have 6TB or more, refer to "Supported RAID" (page 13).
A RAID group is a group of drives. It is a unit that configures RAID. Multiple RAID groups with the same RAID level or multiple RAID groups with different RAID levels can be set together in the ETERNUS DX. After a RAID group is created, RAID levels can be changed and drives can be added.

Table 5 RAID Group Types and Usage
Type | Usage | Maximum capacity
RAID group | Areas to store normal data. Volumes (Standard, WSV, SDV, SDPV) for work and Advanced Copy can be created in a RAID group. | Approximately 324TB (*1)
Thin Provisioning Pool (TPP) (*2) | RAID groups that are used for Thin Provisioning in which the areas are managed as a Thin Provisioning Pool (TPP). Thin Provisioning Volumes (TPVs) can be created in a TPP. | 1,024TB
*1: This value is for a 14TB Nearline SAS disk RAID6-FR([13D+2P]×2+1HS) configuration.
*2: For details on the number of configuration drives for each RAID level and recommended configurations, refer to Table 11.

For details on the number of configuration drives for each RAID level and recommended configurations, refer to Table 6.
SAS disks and Nearline SAS disks can exist together in the same RAID group. However, from a performance perspective, use the same type of disk (SAS disks or Nearline SAS disks) to configure RAID groups.

Figure 8 Example of a RAID Group
• SAS disks and Nearline SAS disks can be installed together in the same RAID group. Note that SAS disks and Nearline SAS disks cannot be installed with SSDs.
• Use drives that have the same size, capacity, rotational speed, and Advanced Format support to configure RAID groups.
  - If a RAID group is configured with drives that have different capacities, all the drives in the RAID group are recognized as having the same capacity as the drive with the smallest capacity in the RAID group, and the rest of the capacity in the drives that have a larger capacity cannot be used.
  - If a RAID group is configured with drives that have different rotational speeds, the performance of all of the drives in the RAID group is reduced to that of the drive with the lowest rotational speed.
• For details on the RAID levels that can be configured with Nearline SAS disks that have 6TB or more, refer to "Supported RAID" (page 13).
The maximum number of volumes that can be created in the ETERNUS DX is 1,024. Volumes can be created until
the combined total for each volume type reaches the maximum number of volumes.
A volume can be expanded or moved if required. Multiple volumes can be concatenated and treated as a single volume. For availability of expansion, displacement, and concatenation for each volume, refer to "Target Volumes of Each Function" (page 144).
The types of volumes that are listed in the table below can be created in the ETERNUS DX.
Table 7 Volumes That Can Be Created
• Standard (Open)
  A standard volume is used for normal usage, such as file systems and databases. The server recognizes it as a single logical unit. "Standard" is displayed as the type for this volume in ETERNUS Web GUI/ETERNUS CLI and "Open" is displayed in ETERNUS SF software. Maximum capacity: 128TB (*1).
• Snap Data Volume (SDV)
  This area is used as the copy destination for SnapOPC/SnapOPC+. There is an SDV for each copy destination. (*2)
• Snap Data Pool Volume (SDPV)
  This volume is used to configure the Snap Data Pool (SDP) area. The SDP capacity equals the total capacity of the SDPVs. A volume is supplied from an SDP when the amount of updates exceeds the capacity of the copy destination SDV.
• Thin Provisioning Volume (TPV)
  This virtual volume is created in a Thin Provisioning Pool area. Maximum capacity: 128TB.
• Wide Striping Volume (WSV)
  This volume is created by concatenating distributed areas in from 2 to 12 RAID groups. Processing speed is fast because data access is distributed.
• ODX Buffer volume
  An ODX Buffer volume is a dedicated volume that is required to use the Offloaded Data Transfer (ODX) function of Windows Server 2012 or later. It is used to save the source data when data is updated while a copy is being processed. One ODX Buffer volume can be created per ETERNUS DX. Its volume type is Standard or TPV.
*1: When multiple volumes are concatenated using the LUN Concatenation function, the maximum capacity is also 128TB.
*2: The capacity differs depending on the copy source volume capacity.
After a volume is created, formatting automatically starts. A server can access the volume while it is being formatted. Wait for the format to complete if high performance access is required for the volume.
• In the ETERNUS DX, volumes have different stripe sizes that depend on the RAID level and the stripe depth parameter. For details about the stripe sizes for each RAID level and the stripe depth parameter values, refer to "ETERNUS Web GUI User's Guide".
  Note that the available user capacity can be fully utilized if an exact multiple of the stripe size is set for the volume size. If an exact multiple of the stripe size is not set for the volume size, the capacity is not fully utilized and some areas remain unused.
• When a Thin Provisioning Pool (TPP) is created, a control volume is created for each RAID group that configures the relevant TPP. Therefore, the maximum number of volumes that can be created in the ETERNUS DX decreases by the number of RAID groups that configure a TPP.
Hot spares are used as spare drives for when drives in a RAID group fail, or when drives are in error status.
Figure 10 Hot Spares
When the RAID level is RAID6-FR, data in a failed drive can be restored to a reserved space in a RAID group
even when a drive error occurs because a RAID6-FR RAID group retains a reserved space for a whole drive in
the RAID group. If the reserved area is in use and an error occurs in another drive (2nd) in the RAID group,
then the hot spare is used as a spare.
■ Types of Hot Spares
The following two types of hot spare are available:
• Global Hot Spare
  This is available for any RAID group. When multiple hot spares are installed, the most appropriate drive is automatically selected and incorporated into a RAID group.
• Dedicated Hot Spare
  This is only available to the specified RAID group (one RAID group). The Dedicated Hot Spare cannot be registered in a RAID group that is registered in TPPs.
  Assign "Dedicated Hot Spares" to RAID groups that contain important data, in order to preferentially improve their access to hot spares.
If a combination of SAS disks, Nearline SAS disks, and SSDs is installed in the ETERNUS DX, each different type of
drive requires a corresponding hot spare.
There are two types of rotational speeds for SAS disks; 10,000rpm and 15,000rpm. If a drive error occurs and a
hot spare is configured in a RAID group with different rotational speed drives, the performance of all the drives
in the RAID group is determined by the drive with the slowest rotational speed. When using SAS disks with different rotational speeds, prepare hot spares that correspond to the different rotational speed drives if required.
Even if a RAID group is configured with SAS disks that have different interface speeds, performance is not affected.
The capacity of each hot spare must be equal to the largest capacity of the same-type drives.
■ Selection Criteria
When multiple Global Hot Spares are installed, the following criteria are used to select which hot spare will replace a failed drive:
Table 8 Hot Spare Selection Criteria
Selection order | Selection criteria
1 | A hot spare with the same type, same capacity, and same rotational speed as the failed drive
2 | A hot spare with the same type and same rotational speed as the failed drive but with a larger capacity (*1)
3 | A hot spare with the same type and same capacity as the failed drive but with a different rotational speed
4 | A hot spare with the same type as the failed drive but with a larger capacity and a different rotational speed (*1)
*1: When there are multiple hot spares with a larger capacity than the failed drive, the hot spare with the smallest capacity among them is selected.
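To make the selection order concrete, here is a minimal sketch of the Table 8 logic. The Drive record and its field names are invented for illustration and are not an ETERNUS data structure; the tie-break on the smallest sufficient capacity follows footnote *1.

```python
# Illustrative sketch of the Table 8 selection order; the data model is hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Drive:
    drive_type: str        # e.g. "SAS", "Nearline SAS", "SSD"
    capacity_mb: int
    rpm: Optional[int]     # rotational speed; None for SSDs

def select_hot_spare(failed: Drive, spares: list[Drive]) -> Optional[Drive]:
    """Pick the Global Hot Spare that replaces a failed drive, following Table 8."""
    # Only spares of the same drive type with at least the failed drive's capacity qualify.
    candidates = [s for s in spares
                  if s.drive_type == failed.drive_type
                  and s.capacity_mb >= failed.capacity_mb]

    def order(s: Drive):
        same_capacity = s.capacity_mb == failed.capacity_mb
        same_rpm = s.rpm == failed.rpm
        if same_capacity and same_rpm:
            rank = 1   # 1: same type, capacity, and rotational speed
        elif same_rpm:
            rank = 2   # 2: same type and speed, larger capacity
        elif same_capacity:
            rank = 3   # 3: same type and capacity, different speed
        else:
            rank = 4   # 4: same type, larger capacity, different speed
        return (rank, s.capacity_mb)   # prefer the smallest sufficient capacity (*1)

    return min(candidates, key=order, default=None)
```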
When a write request is issued by a server, the data block guard function adds check codes to all of the data that
is to be stored. The data is verified at multiple checkpoints on the transmission paths to ensure data integrity.
When data is written from the server, the Data Block Guard function adds eight bytes check codes to each block
(every 512 bytes) of the data and verifies the data at multiple checkpoints to ensure data consistency. This function can detect a data error when data is destroyed or data corruption occurs. When data is read from the server,
the check codes are confirmed and then removed, ensuring that data consistency is verified in the whole storage
system.
If an error is detected while data is being written to a drive, the data is read again from the data that is duplicated in the cache memory. This data is checked for consistency and then written.
If an error is detected while data is being read from a drive, the data is restored using RAID redundancy.
Figure 11 Data Block Guard
1. The check codes are added
2. The check codes are confirmed
3. The check codes are confirmed and removed
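The following sketch illustrates the idea described above: an 8-byte check code is attached to every 512-byte block when data is written and is verified and removed when data is read. The CRC-based check code used here is an assumption made only for this illustration; the manual does not describe the actual check code format.

```python
# Simplified illustration of per-block check codes (8 bytes for every 512-byte block).
# Using CRC-32 plus the block length as the check code is an assumption for this sketch.
import struct
import zlib

BLOCK = 512

def protect(data: bytes) -> bytes:
    """Append an 8-byte check code to each 512-byte block (write path)."""
    assert len(data) % BLOCK == 0
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        out += block + struct.pack(">II", zlib.crc32(block), len(block))
    return bytes(out)

def verify_and_strip(protected: bytes) -> bytes:
    """Verify every block's check code and return the original data (read path)."""
    out = bytearray()
    step = BLOCK + 8
    for i in range(0, len(protected), step):
        block, code = protected[i:i + BLOCK], protected[i + BLOCK:i + step]
        crc, length = struct.unpack(">II", code)
        if crc != zlib.crc32(block) or length != len(block):
            raise ValueError("check code mismatch: data corruption detected")
        out += block
    return bytes(out)

payload = bytes(1024)                                  # two 512-byte blocks
assert verify_and_strip(protect(payload)) == payload   # round trip keeps data intact
```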
Also, the T10-Data Integrity Field (T10-DIF) function is supported. T10-DIF is a function that adds a check code to
data that is to be transferred between the Oracle Linux server and the ETERNUS DX, and ensures data integrity
at the SCSI level.
The server generates a check code for the user data in the host bus adapter (HBA), and verifies the check code
when reading data in order to ensure data integrity.
The ETERNUS DX double-checks data by using the data block guard function and by using the supported T10-DIF
to improve reliability.
Data is protected at the SCSI level on the path to the server. Therefore, data integrity can be ensured even if data
is corrupted during a check code reassignment.
By linking the Data Integrity Extensions (DIX) function of Oracle DB, data integrity can be ensured in the entire
system including the server.
The T10-DIF function can be used when connecting with HBAs that support T10-DIF with an FC interface.
The T10-DIF function can be enabled or disabled for each volume when the volumes are created. This function
cannot be enabled or disabled after a volume has been created.
• The T10-DIF function can be enabled only in the Standard volume.
• LUN concatenation cannot be performed for volumes where the T10-DIF function is enabled.
In the ETERNUS DX, all of the drives are checked in order to detect drive errors early and to restore drives from errors or disconnect them.
The Disk Drive Patrol function regularly diagnoses and monitors the operational status of all drives that are installed in the ETERNUS DX. Drives are checked (read check) regularly as a background process.
For drive checking, a read check is performed sequentially for a part of the data in all the drives. If an error is detected, data is restored using drives in the RAID group and the data is written back to another block of the drive in which the error occurred.
Figure 12 Disk Drive Patrol
(Figure note: read checking is performed during the diagnosis.)
These checks are performed in blocks (default 2MB) for each drive sequentially and are repeated until all the blocks for all the drives have been checked. Patrol checks are performed every second, 24 hours a day (default). Drives that are stopped by Eco-mode are checked when the drives start running again.
The Maintenance Operation privilege is required to set detailed parameters.
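As a rough illustration of the patrol behavior described above (sequential read checks in fixed-size blocks, with data rebuilt from the rest of the RAID group and written back when an error is found), here is a toy sketch. The FakeDrive and FakeRaidGroup classes are stand-ins invented for this example and do not reflect the controller firmware.

```python
# Toy illustration of a patrol-style read check; not the ETERNUS DX firmware logic.
PATROL_BLOCK_MB = 2                      # default check unit described above

class FakeDrive:
    def __init__(self, capacity_mb, bad_offsets=()):
        self.capacity_mb = capacity_mb
        self.bad = set(bad_offsets)      # offsets (MB) that fail the read check
    def read(self, offset_mb, size_mb):
        if offset_mb in self.bad:
            raise IOError(f"medium error at {offset_mb}MB")
    def reassign_and_write(self, offset_mb, data):
        self.bad.discard(offset_mb)      # data rewritten to a healthy block

class FakeRaidGroup:
    def rebuild_block(self, drive, offset_mb, size_mb):
        return b"reconstructed"          # data restored from the remaining drives

def patrol(drive, raid_group, block_mb=PATROL_BLOCK_MB):
    """Sequentially read-check every block of one drive and repair on error."""
    for offset_mb in range(0, drive.capacity_mb, block_mb):
        try:
            drive.read(offset_mb, block_mb)
        except IOError:
            data = raid_group.rebuild_block(drive, offset_mb, block_mb)
            drive.reassign_and_write(offset_mb, data)

drive = FakeDrive(capacity_mb=10, bad_offsets={4})
patrol(drive, FakeRaidGroup())
assert not drive.bad                     # the failing block was repaired during the patrol
```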
(Figure 13 labels: when a drive shows a sign of failure, data is created from the drives other than the maintenance target drive and written to the hot spare; the maintenance target drive is then disconnected and switched to the hot spare; the RAID5 group remains redundant throughout.)
Redundant Copy
Redundant Copy is a function that copies the data of a drive that shows a possible sign of failure to a hot spare.
When the Disk Patrol function decides that preventative maintenance is required for a drive, the data of the
maintenance target drive is re-created by the remaining drives and written to the hot spare. The Redundant
Copy function enables data to be restored while maintaining data redundancy.
Figure 13 Redundant Copy Function
If a bad sector is detected when a drive is checked, an alternate track is automatically assigned. This drive is
not recognized as having a sign of drive failure during this process. However, the drive will be disconnected
by the Redundant Copy function if the spare sector is insufficient and the problem cannot be solved by assigning an alternate track.
• Redundant Copy speed
  Giving priority to Redundant Copy over host access can be specified. By setting a higher Rebuild priority, the performance of Redundant Copy operations may improve. However, it should be noted that when the priority is high and a Redundant Copy operation is performed for a RAID group, the performance (throughput) of this RAID group may be reduced.
(Figure 14 labels: when a drive fails, the failed drive is disconnected from the ETERNUS storage system, data is created from the drives other than the failed drive and written to the hot spare (rebuild), and the hot spare is configured into the RAID group; the RAID5 group changes from having no redundancy back to being redundant.)
Rebuild
Rebuild processes recover data in failed drives by using other drives. If a free hot spare is available when one of
the RAID group drives has a problem, data of this drive is automatically replicated in the hot spare. This ensures
data redundancy.
Figure 14 Rebuild
When no hot spares are registered, rebuilding processes are only performed when a failed drive is replaced or
when a hot spare is registered.
• Rebuild speed
  Giving priority to rebuilding over host access can be specified. By setting a higher rebuild priority, the performance of rebuild operations may improve. However, it should be noted that when the priority is high and a rebuild operation is performed for a RAID group, the performance (throughput) of this RAID group may be reduced.
(Figure 15 labels: a failed drive is disconnected from the ETERNUS storage system; data is created from the redundant data in the normal drives and written to the reserved space (FHS) in RAID6-FR, creating the data and writing to the FHS area simultaneously; the RAID6-FR group remains redundant.)
Fast Recovery
This function recovers data quickly by relocating data in the failed drive to the other remaining drives when a drive error is detected.
For a RAID group that is configured with RAID6-FR, Fast Recovery is performed for the reserved area that is equivalent to hot spares in the RAID group when a drive error occurs.
If a second drive fails when the reserved area is already used by the first failed drive, a normal rebuild (hot spare rebuild in the ETERNUS DX) is performed.
For data in a failed drive, redundant data and reserved space are allocated in different drives according to the area. A fast rebuild can be performed because multiple rebuild processes are performed for different areas simultaneously.
Figure 15 Fast Recovery
For the Fast Recovery function that is performed when the first drive fails, a copyback is performed after the failed drive is replaced even if the Copybackless function is enabled.
For a normal rebuild process that is performed when the reserved space is already being used and the second drive fails, a copyback is performed according to the settings of the Copybackless function.
(Figure 16 labels: after rebuilding has been completed, the failed drive is replaced with the new drive; after the replacement has been completed, the data is copied back from the hot spare to the new drive; the RAID5 group remains redundant.)
Copyback/Copybackless
A Copyback process copies data in a hot spare to the new drive that is used to replace the failed drive.
Figure 16 Copyback
• Copyback speed
  Giving priority to Copyback over host access can be specified. By setting a higher Rebuild priority, the performance of Copyback operations may improve. However, it should be noted that when the priority is high and a Copyback operation is performed for a RAID group, the performance (throughput) of this RAID group may be reduced.
If copybackless is enabled, the drives that are registered in the hot spare become part of the RAID group configuration drives after a rebuild or a redundant copy is completed for the hot spare.
The failed drive is disconnected from the RAID group configuration drives and then registered as a hot spare.
Copyback is not performed for the data even if the failed drive is replaced by a new drive because the failed drive
is used as a hot spare.
(Figure 17 labels: after rebuilding is complete, the hot spare replaces the failed drive as a RAID group configuration drive; the failed drive (hot spare) is replaced by the new drive, and the replaced drive becomes a hot spare in the storage system; the RAID5 group remains redundant.)
A copybackless operation is performed when the following conditions for the copybackless target drive (or hot spare) and the failed drive are the same:
• Drive type (SAS disks, Nearline SAS disks, and SSDs)
• Size (2.5" and 3.5")
• Capacity
• Rotational speed (15,000rpm, 10,000rpm, and 7,200rpm) (*1)
*1: For SAS disks or Nearline SAS disks only.
If different types of drives have been selected as the hot spare, copyback is performed after replacing the drives
even when the Copybackless function is enabled.
The Copybackless function can be enabled or disabled. This function is enabled by default.
Figure 17 Copybackless
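A small sketch of the decision described above, assuming a hypothetical drive record (the field names and the helper are invented for illustration): a copyback after drive replacement is skipped only when the hot spare matches the failed drive in type, size, capacity, and, for disks, rotational speed.

```python
# Hypothetical sketch of the Copybackless condition check; not an ETERNUS interface.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Drive:
    drive_type: str            # "SAS", "Nearline SAS", or "SSD"
    size_inch: float           # 2.5 or 3.5
    capacity_mb: int
    rpm: Optional[int] = None  # rotational speed; not applicable to SSDs

def copyback_needed(failed: Drive, hot_spare: Drive, copybackless_enabled: bool = True) -> bool:
    """Return True if a copyback is performed after the failed drive is replaced."""
    if not copybackless_enabled:
        return True
    same = (failed.drive_type == hot_spare.drive_type
            and failed.size_inch == hot_spare.size_inch
            and failed.capacity_mb == hot_spare.capacity_mb
            and (failed.drive_type == "SSD" or failed.rpm == hot_spare.rpm))
    return not same

failed = Drive("SAS", 2.5, 1_119_232, 10_000)
print(copyback_needed(failed, Drive("SAS", 2.5, 1_119_232, 10_000)))  # False: stays copybackless
print(copyback_needed(failed, Drive("SAS", 2.5, 2_239_744, 10_000)))  # True: capacity differs
```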
• To set the Copybackless function for each storage system, use the subsystem parameter settings. These settings can be performed with the system management/maintenance operation privilege. After the settings are changed, the ETERNUS DX does not need to be turned off and on again.
• If the Copybackless function is enabled, the drive that is replaced with the failed drive cannot be installed in the prior RAID group configuration. This should be taken into consideration when enabling or disabling the Copybackless function.
(Figure 18 labels: when a particular error message is detected, data is created from the drives that are not the target drives for the Protection (Shield) function and written to the hot spare; the target drive for the Protection (Shield) function is disconnected temporarily and diagnosed while access is suspended; if the drive is determined to be normal after the diagnosis is performed, the drive is reconnected to the storage system (*1); the RAID5 group remains redundant.)
Protection (Shield)
The Protection (Shield) function diagnoses temporary drive errors. A drive can continue to be used if it is determined to be normal. The target drive temporarily changes to diagnosis status when drive errors are detected by
the Disk Drive Patrol function or error notifications.
For a drive that configures a RAID group, data is moved to a hot spare by a rebuild or redundant copy before the
drive is diagnosed. For a drive that is disconnected from a RAID group, whether the drive has a permanent error
or a temporary error is determined. The drive can be used again if it is determined that the drive has only a
temporary error.
The target drives of the Protection (Shield) function are all the drives that are registered in RAID groups or registered as hot spares. Note that the Protection (Shield) function is not available for unused drives.
The Protection (Shield) function can be enabled or disabled. This function is enabled by default.
Figure 18 Protection (Shield)
*1: If copybackless is enabled, the drive is used as a hot spare disk. If copybackless is disabled, the drive is
used as a RAID group configuration drive and copyback starts. The copybackless setting can be enabled or
disabled until the drive is replaced.
• The target drives are deactivated and then reactivated during temporary drive protection. Even though a system status error may be displayed during this period, this phenomenon is only temporary. The status returns to normal after the diagnosis is complete.
  The following phenomena may occur during temporary drive protection:
  - The Fault LEDs (amber) on the operation panel and the drive turn on
  - An error status is displayed by the ETERNUS Web GUI and the ETERNUS CLI
    • Error or Warning is displayed as the system status
    • Error, Warning, or Maintenance is displayed as the system status
• Target drives of the Protection (Shield) function only need to be replaced when drive reactivation fails. If drive reactivation fails, a drive failure error is notified as an event notification message (such as SNMP/REMCS). When drive reactivation is successful, an error message is not notified. To notify this message, use the event notification settings.
• To set the Protection (Shield) function for each storage system, use the subsystem parameter settings. The maintenance operation privilege is required to perform this setting. After the settings are changed, the ETERNUS DX does not need to be turned off and on again.
Reverse Cabling
Because the ETERNUS DX uses reverse cabling connections for data transfer paths between controllers and
drives, continued access is ensured even if a failure occurs in a drive enclosure.
If a drive enclosure fails for any reason, access to drives that are connected after the failed drive can be maintained because normal access paths are secured by using reverse cabling.
Operations Optimization (Virtualization)
A single controller configuration differs from a dual controller configuration in the following ways:
• The Thin Provisioning function cannot be used.
Thin Provisioning
The Thin Provisioning function has the following features:
• Storage Capacity Virtualization
  The physical storage capacity can be reduced by allocating virtual drives to a server, which allows efficient use of the storage capacity. Volumes that exceed the total capacity of all the installed drives can be allocated by setting the capacity that will be required for the virtual volumes in the future.
• TPV Balancing
  I/O access to the virtual volume can be distributed among the RAID groups in a pool by relocating and balancing the physical allocation status of the virtual volume.
• TPV Capacity Optimization (Zero Reclamation)
  Data in physically allocated areas is checked in blocks, and unnecessary areas (areas where 0 is allocated to all of the data in each block) are released to unallocated areas.
Storage Capacity Virtualization
Thin Provisioning improves the usability of the drives by managing the physical drives in a pool, and sharing the
unused capacity among the virtual volumes in the pool. The volume capacity that is seen from the server is virtualized to allow the server to recognize a larger capacity than the physical volume capacity. Because a large
capacity virtual volume can be defined, the drives can be used in a more efficient and flexible manner.
Initial cost can be reduced because less drive capacity is required even if the capacity requirements cannot be estimated. Power consumption can also be reduced because fewer drives are installed.
In the Thin Provisioning function, the RAID group, which is configured with multiple drives, is managed as a Thin Provisioning Pool (TPP). When a Write request is issued, a physical area is allocated to the virtual volume. The free space in the TPP is shared among the virtual volumes which belong to the TPP, and a virtual volume, which is larger than the drive capacity in the ETERNUS DX, can be created. A virtual volume to be created in a TPP is referred to as a Thin Provisioning Volume (TPV).
• Thin Provisioning Pool (TPP)
  A TPP is a physical drive pool which is configured with one or more RAID groups. TPP capacity can be expanded in units of RAID groups. Add RAID groups with the same specifications (RAID level, drive type, and number of member drives) as those of the existing RAID groups.
  The following table shows the maximum number and the maximum capacity of TPPs that can be registered in the ETERNUS DX.
Table 9 TPP Maximum Number and Capacity
Item | ETERNUS DX60 S4/DX60 S3
Number of pools (max.) | 48
Pool capacity (max.) | 1,024TB
The following table shows the TPP chunk size that is applied when TPPs are created.
Table 10 Chunk Size According to the Configured TPP Capacity
Setting value of the maximum pool capacity | Chunk size (*1)
Up to 128TB | 21MB
Up to 256TB | 42MB
Up to 512TB | 84MB
Up to 1,024TB | 168MB
*1: The chunk size is the unit for delimiting data. The chunk size is automatically set according to the maximum pool capacity.
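Expressed as a lookup, Table 10 maps the configured maximum pool capacity to the chunk size. The helper below is a hypothetical illustration only.

```python
# Table 10 as a lookup: the chunk size follows from the configured maximum pool capacity.
CHUNK_SIZE_BY_MAX_POOL_TB = [(128, 21), (256, 42), (512, 84), (1024, 168)]

def chunk_size_mb(max_pool_capacity_tb: int) -> int:
    for limit_tb, chunk_mb in CHUNK_SIZE_BY_MAX_POOL_TB:
        if max_pool_capacity_tb <= limit_tb:
            return chunk_mb
    raise ValueError("the maximum pool capacity cannot exceed 1,024TB")

print(chunk_size_mb(200))   # 42, because 200TB falls in the "Up to 256TB" range
```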
The following table shows the RAID configurations that can be registered in a TPP.
Table 11 Levels and Configurations for a RAID Group That Can Be Registered in a TPP
The maximum capacity of a TPV is 128TB. Note that the total TPV capacity must be smaller than the maximum capacity of the TPP.
When creating a TPV, the Allocation method can be selected.
- Thin
  When data is written from the host to a TPV, a physical area is allocated to the created virtual volume. The capacity size (chunk size) that is applied is the same value as the chunk size of the TPP where the TPV is created. The physical storage capacity can be reduced by allocating a virtualized storage capacity.
- Thick
  When creating a volume, the physical area is allocated to the entire volume area. This can be used for volumes in the system area to prevent a system stoppage due to a pool capacity shortage during operations.
In general, selecting "Thin" is recommended. The Allocation method can be changed after a TPV is created. Perform a TPV capacity optimization if "Thick" has been changed to "Thin". By optimizing the capacity, the area that was allocated to the TPV is released and becomes usable again. If the capacity optimization is not performed, the usage of the TPV does not change even after the Allocation method is changed.
The capacity of a TPV can be expanded after it is created.
For details on the number of TPVs that can be created, refer to "Volume" (page 22).
● Threshold Monitoring of Used Capacity
When the used capacity of a TPP reaches a threshold, a notification is sent to the notification destination (SNMP Trap, e-mail, or Syslog) specified using the [Setup Event Notification] function. There are two types of thresholds: "Attention" and "Warning". A different value can be specified for each threshold type.
Also, ETERNUS SF Storage Cruiser can be used to monitor the used capacity.
• TPP Thresholds
  There are two TPP usage thresholds: Attention and Warning.
  Attention threshold ≤ Warning threshold
  The "Attention" threshold can be omitted.
• TPV Thresholds
  There is only one TPV usage threshold: Attention. When the physically allocated capacity of a TPV reaches the threshold, a response is sent to a host via a sense. The threshold is determined by the ratio of free space in the TPP and the unallocated TPV capacity.
Table 13 TPV Thresholds
Threshold | Selectable range | Default
Attention | 1 (%) to 100 (%) | 80 (%)
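As an illustration of the threshold behavior, the sketch below classifies pool usage against Attention and Warning thresholds. The 75% and 90% defaults are placeholders chosen for the example; this excerpt does not list the default TPP threshold values.

```python
# Illustrative TPP usage check; the default threshold values here are placeholders.
def tpp_threshold_state(used_tb: float, pool_capacity_tb: float,
                        attention_pct: int = 75, warning_pct: int = 90) -> str:
    """Return "Normal", "Attention", or "Warning" for the current pool usage."""
    assert attention_pct <= warning_pct        # Attention threshold <= Warning threshold
    used_pct = 100 * used_tb / pool_capacity_tb
    if used_pct >= warning_pct:
        return "Warning"                       # e.g. notify via SNMP Trap, e-mail, or Syslog
    if used_pct >= attention_pct:
        return "Attention"
    return "Normal"

print(tpp_threshold_state(80, 100))   # "Attention" with the example thresholds 75/90
```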
• Use of TPVs is also not recommended when the OS writes meta information to the whole LUN during file system creation.
• TPVs should be backed up as sets of their component files. While backing up a whole TPV is not difficult, unallocated areas will also be backed up as dummy data. If the TPV then needs to be restored from the backup, the dummy data is also "restored". This requires allocation of the physical drive area for the entire TPV capacity, which negates the effects of thin provisioning.
• For advanced performance tuning, use standard RAID groups.
• Refer to the applicable OS and file system documentation before dynamically expanding the volume capacity because expanded volumes may not be recognized by some types and versions of server-side platforms (OSs).
• If a TPP includes one or more RAID groups that are configured with Advanced Format drives, all TPVs created in the relevant TPP are treated as Advanced Format volumes. In this case, the write performance may be reduced when accessing the relevant TPV from an OS or an application that does not support Advanced Format.
(TPV balancing figure labels: a TPP containing RAID group #0 to RAID group #2 plus added RAID groups is shown before and after balancing; after balancing, RAID group #0, RAID group #1, and RAID group #2 are accessed evenly when I/O access is performed to the allocated area in TPV#0.)
TPV Balancing
A drive is allocated when a write is issued to a virtual volume (TPV). Depending on the order and the frequency
of writes, more drives in a specific RAID group may be allocated disproportionately. Also, the physical capacity is
unevenly allocated among the newly added RAID group and the existing RAID groups when physical drives are
added to expand the capacity.
Balancing of TPVs can disperse the I/O access to virtual volumes among the RAID groups in the Thin Provisioning
Pool (TPP).
When allocating disproportionate TPV physical capacity evenly
Balance Thin Provisioning Volume is a function that evenly relocates the physically allocated capacity of TPVs
among the RAID groups that configure the TPP.
Balancing TPV allocation can be performed for TPVs in the same TPP. TPV balancing cannot be performed at the
same time as RAID Migration to a different TPP for which the target TPV does not belong.
When a write is issued to a virtual volume, a drive is allocated. When data is written to multiple TPVs in the TPP,
physical areas are allocated by rotating the RAID groups that configure the TPP in the order that the TPVs were
accessed. When using this method, depending on the write order or frequency, TPVs may be allocated unevenly
to a specific RAID group. In addition, when the capacity of a TPP is expanded, the physical capacity is unevenly
allocated among the newly added RAID group and the existing RAID groups.
● Balancing Level
The TPV balance status is displayed using three levels: "High", "Middle", and "Low". "High" indicates that the physical capacity of the TPV is allocated evenly among the RAID groups registered in the TPP. "Low" indicates that the physical capacity is allocated unequally to a specific RAID group in the TPP.
TPV balancing may not be available when other functions are being used in the device or the target volume.
Refer to "Combinations of Functions That Are Available for Simultaneous Executions" (page 146) for details on the functions that can be executed simultaneously, the number of processes that can be processed simultaneously, and the capacity that can be processed concurrently.
• When a TPP has RAID groups that are unavailable for the balancing due to lack of free space, etc., the physical allocation capacity is balanced among the remaining RAID groups within the TPP. In this case, the balancing level after the balancing is completed may not be "High".
• By performing the TPV balancing, areas for working volumes (the migration destination TPVs with the same capacity as the migration source) are secured for the TPP to which the TPVs belong. If this causes the total logical capacity of the TPVs in all the TPPs that include these working volumes to exceed the maximum pool capacity, a TPV balancing cannot be performed.
  In addition, this may cause a temporary alarm state ("Caution" or "Warning", which indicates that the threshold has been exceeded) in the TPP during a balancing execution. This alarm state is removed once balancing completes successfully.
• While TPV balancing is being performed, the balancing level may become lower than before balancing was performed if the capacity of the TPP to which the TPVs belong is expanded.
(Figure 23 legend: physically allocated area (data other than ALL0 data), physically allocated area (ALL0 data), and unallocated area; "Check" marks the data check step.)
TPV Capacity Optimization
TPV capacity optimization can increase the unallocated areas in a pool (TPP) by changing the physical areas
where 0 is allocated for all of the data to unallocated areas. This improves functional efficiency.
Once an area is physically allocated to a TPV, the area is never automatically released.
If operations are performed when all of the areas are physically allocated, the used areas that are recognized by
a server and the areas that are actually allocated might have different sizes.
The following operations are examples of operations that create allocated physical areas with sequential data to which only 0 is allocated:
• Restoration of data for RAW image backup
• RAID Migration from Standard volumes to TPVs
• Creation of a file system in which writing is performed to the entire area
The TPV capacity optimization function belongs to Thin Provisioning. This function can be started after a target
TPV is selected via ETERNUS Web GUI or ETERNUS CLI. This function is also available when the RAID Migration
destination is a TPP.
TPV capacity optimization reads and checks the data in each allocated area for the Thin Provisioning function.
This function releases the allocated physical areas to unallocated areas if data that contains all zeros is detected.
Figure 23 TPV Capacity Optimization
TPV capacity optimization may not be available when other functions are being used in the device or the target
volume.
For details on the functions that can be executed simultaneously, refer to "Combinations of Functions That Are Available for Simultaneous Executions" (page 146).
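The following sketch shows the essence of the optimization just described: allocated chunks whose data is entirely zero are released back to the pool. The chunk representation is invented for illustration and is far smaller than a real 21MB chunk.

```python
# Simplified illustration of zero reclamation; the data layout is invented for this sketch.
def optimize_capacity(allocated_chunks: dict[int, bytes]) -> list[int]:
    """Release allocated chunks that contain only zeros and return their numbers."""
    released = [no for no, data in allocated_chunks.items() if not any(data)]
    for no in released:
        del allocated_chunks[no]      # the area becomes an unallocated area again
    return released

chunks = {0: bytes(8), 1: b"database page", 2: bytes(8)}   # toy-sized "chunks"
print(optimize_capacity(chunks))      # [0, 2]
print(sorted(chunks))                 # [1] -- only the chunk with real data stays allocated
```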
The ETERNUS DX allows for the expansion of volumes and RAID group capacities, migration among RAID groups,
and changing of RAID levels according to changes in the operation load and performance requirements. There
are several expansion functions.
Table 14 Optimization of Volume Configurations
Function/usage | Volume expansion | RAID group expansion | Migration among RAID groups | Changing the RAID level | Striping for RAID groups
RAID Migration | ○ (Adding capacity during migration) (*1) | × | ○ | ○ | ×
Logical Device Expansion | × | ○ | × | ○ (Adding drives to existing RAID groups) | ×
LUN Concatenation | ○ (Concatenating free spaces) | × | × | × | ×
Wide Striping | × | × | × | × | ○
○: Possible, ×: Not possible
*1: For TPVs, the capacity cannot be expanded during a migration.
● Expansion of Volume Capacity
• RAID Migration
  When volume capacity is insufficient, a volume can be moved to a RAID group that has enough free space. This function is recommended for use when the desired free space is available in the destination.
• LUN Concatenation
  Adds areas of free space to an existing volume to expand its capacity. This uses free space from a RAID group to efficiently expand the volume.
● Expansion of RAID Group Capacity
• Logical Device Expansion
  Adds new drives to an existing RAID group to expand the RAID group capacity. This is used to expand the existing RAID group capacity instead of adding a new RAID group to add the volumes.
● Migration among RAID Groups
• RAID Migration
  The performance of the current RAID groups may not be satisfactory due to conflicting volumes after performance requirements have been changed. Use RAID Migration to improve the performance by redistributing the volumes amongst multiple RAID groups.
● Changing the RAID Level
• RAID Migration (to a RAID group with a different RAID level)
  Migrating to a RAID group with a different RAID level changes the RAID level of volumes. This is used to convert a given volume to a different RAID level.
• Logical Device Expansion (and changing RAID levels when adding the new drives)
  The RAID level for RAID groups can be changed. Adding drives while changing is also available. This is used to convert the RAID level of all the volumes belonging to a given RAID group.
RAID Migration is a function that moves a volume to a different RAID group with the data integrity being guaranteed. This allows easy redistribution of volumes among RAID groups in response to customer needs. RAID Migration can be carried out while the system is running, and may also be used to switch data to a different RAID level, changing from RAID5 to RAID1+0, for example.
• Volumes moved from a 300GB drive configuration to a 600GB drive configuration
Figure 24 RAID Migration (When Data Is Migrated to a High Capacity Drive)
• Volumes moved to a different RAID level (RAID5 → RAID1+0)
Figure 25 RAID Migration (When a Volume Is Moved to a Different RAID Level)
The volume number (LUN) does not change before and after the migration. The host can access the volume without being affected by the volume number.
The following changes can be performed by RAID Migration.
• Changing the volume type
  A volume is changed to the appropriate type for the migration destination RAID groups or pools (TPP).
• Changing the number of concatenations and the Wide Stripe Size (for WSV)
When migration between RAID groups is performed, capacity expansion can also be performed at the same time. However, the capacity cannot be expanded for TPVs.
• TPV capacity optimization
  When the migration destination is a pool (TPP), TPV capacity optimization after the migration can be set. For details on the features of the TPV capacity optimization, refer to "TPV Capacity Optimization" (page 41).
Specify unused areas in the migration destination (RAID group or pool) with a capacity larger than the migration source volume.
RAID Migration may not be available when other functions are being used in the ETERNUS DX or the target volume. Refer to "Combinations of Functions That Are Available for Simultaneous Executions" (page 146) for details on the functions that can be executed simultaneously, the number of processes that can be processed simultaneously, and the capacity that can be processed concurrently.
During RAID Migration, the access performance for the RAID groups that are specified as the RAID Migration
source and RAID Migration destination may be reduced.
Logical Device Expansion
Logical Device Expansion (LDE) allows the capacity of an existing RAID group to be dynamically expanded by changing the RAID level or the drive configuration of the RAID group. When this function is performed, drives can also be added at the same time. By using the LDE function to expand the capacity of an existing RAID group, a new volume can be added without having to add new RAID groups.
• Expand the RAID group capacity (from RAID5(3D+1P) → RAID5(5D+1P))
Figure 26 Logical Device Expansion (When Expanding the RAID Group Capacity)
• Change the RAID level (from RAID5(3D+1P) → RAID1+0(4D+4M))
Figure 27 Logical Device Expansion (When Changing the RAID Level)
LDE works in terms of RAID group units. If a target RAID group contains multiple volumes, all of the data in the
volumes is automatically redistributed when LDE is performed. Note that LDE cannot be performed if it causes
the number of data drives to be reduced in the RAID group.
In addition, LDE cannot be performed for RAID groups to which the following conditions apply.
• RAID groups that belong to TPPs
• RAID groups in which WSVs are registered
• RAID groups that are configured with RAID5+0 or RAID6-FR
LDE may not be available when other functions are being used in the ETERNUS DX or the target RAID group. For details on the functions that can be executed simultaneously and the number of processes that can be processed simultaneously, refer to "Combinations of Functions That Are Available for Simultaneous Executions" (page 146).
• If drives of different capacities exist in a RAID group that is to be expanded while adding drives, the smallest capacity becomes the standard for the RAID group after expansion, and all other drives are regarded as having the same capacity as the smallest drive. In this case, the remaining drive space is not used.
• If drives of different rotational speeds exist in a RAID group, the access performance of the RAID group is reduced by the slower drives.
• Since the data cannot be recovered after a failure of LDE, back up all the data of the volumes in the target RAID group to another area before performing LDE.
• If RAID groups are configured with Advanced Format drives, the write performance may be reduced when accessing volumes created in the relevant RAID group from an OS or an application that does not support Advanced Format.
LUN Concatenation
LUN Concatenation is a function that adds new areas to a volume to expand the volume capacity that is available to the server. This function enables the reuse of leftover free areas in a RAID group and can be used to solve capacity shortages.
Unused areas, which may be either part or all of a RAID group, are used to create new volumes that are then
added together (concatenated) to form a single large volume.
The capacity can be expanded during an operation.
Figure 28 LUN Concatenation
LUN Concatenation is a function to expand a volume capacity by concatenating volumes.
Up to 16 volumes with a minimum capacity of 1GB can be concatenated.
When the concatenation source volumes are on SAS disks or Nearline SAS disks, they can be concatenated with volumes on SAS disks or Nearline SAS disks.
For SSDs, the drives for the concatenation source and destination volumes must be the same type (SSD).
From a performance perspective, using RAID groups with the same RAID level and the same drives (type, size,
capacity, and rotational speed) is recommended as the concatenation source.
A concatenated volume can be used as an OPC, EC, or QuickOPC copy source or copy destination. It can also be
used as a SnapOPC/SnapOPC+ copy source.
The LUN number stays the same before and after the concatenation. Because the server-side LUNs are not
changed, an OS reboot is not required. Data can be accessed from the host in the same way regardless of the
concatenation status (before, during, or after concatenation). However, the recognition methods of the volume
capacity expansion vary depending on the OS types.
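The capacity rules mentioned above (up to 16 concatenated volumes, each at least 1GB, with the resulting capacity being the total of the concatenated volumes) can be illustrated with a small sketch. This is only a conceptual illustration under those stated limits, not a configuration interface of the ETERNUS DX.

```python
# Conceptual sketch of the LUN Concatenation limits described above:
# up to 16 volumes, each at least 1GB, are joined into one larger volume
# whose capacity is the sum of the concatenated volumes.

MAX_CONCATENATED_VOLUMES = 16
MIN_VOLUME_CAPACITY_GB = 1

def concatenated_capacity_gb(volume_capacities_gb):
    """Validate a concatenation plan and return the resulting capacity in GB."""
    if not 1 <= len(volume_capacities_gb) <= MAX_CONCATENATED_VOLUMES:
        raise ValueError("a concatenated volume uses 1 to 16 volumes")
    if any(c < MIN_VOLUME_CAPACITY_GB for c in volume_capacities_gb):
        raise ValueError("each concatenated volume must be at least 1GB")
    return sum(volume_capacities_gb)

if __name__ == "__main__":
    print(concatenated_capacity_gb([500, 250, 250]))  # a 1000GB concatenated volume
```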
• When the concatenation source is a new volume
A new volume can be created by selecting a RAID group with unused capacity.
Figure 29 LUN Concatenation (When the Concatenation Source Is a New Volume)
• When the existing volume capacity is expanded
A volume can be created from unused capacity and concatenated to an existing volume.
Figure 30 LUN Concatenation (When the Existing Volume Capacity Is Expanded)
Only Standard type volumes can be used for LUN Concatenation.
LUN Concatenation may not be available when other functions are being used in the device or the target volume. For details on the functions that can be executed simultaneously, refer to "Combinations of Functions That Are Available for Simultaneous Executions" (page 146).
• It is recommended that the data on the volumes that are to be concatenated be backed up first.
• Refer to the applicable OS and file system documentation before dynamically expanding the volume capacity because expanded volumes may not be recognized by some types and versions of server-side platforms (OSs).
• When a volume that uses ETERNUS SF AdvancedCopy Manager to run backups is expanded via LUN Concatenation, the volume needs to be registered with ETERNUS SF AdvancedCopy Manager again.
• When specifying a volume in a RAID group configured with Advanced Format drives as a concatenation source or a concatenation destination to expand the capacity, the write performance may be reduced when accessing the expanded volumes from an OS or an application that does not support Advanced Format.
Wide Striping
Wide Striping is a function that concatenates multiple RAID groups by striping and uses many drives simultaneously to improve performance. This function is effective when high random write performance is required.
I/O accesses from the server are distributed to multiple drives by increasing the number of drives that configure a LUN, which improves the processing performance.
Figure 31 Wide Striping
Wide Striping creates a WSV that can be concatenated across 2 to 48 RAID groups.
The number of RAID groups that are to be concatenated is defined when creating a WSV. The number of concatenated RAID groups cannot be changed after a WSV is created. To change the number of concatenated groups
or expand the group capacity, perform RAID Migration.
Other volumes (Standard, SDVs, SDPVs, or WSVs) can be created in the free area of a RAID group that is concatenated by Wide Striping.
WSVs cannot be created in RAID groups with the following conditions.
• RAID groups that belong to TPPs
• RAID groups with different stripe size values
• RAID groups that are configured with different types of drives
• RAID groups that are configured with RAID6-FR
If one or more RAID groups that are configured with Advanced Format drives exist in the RAID group that is to
be concatenated by striping to create a WSV, the write performance may be reduced when accessing the created WSVs from an OS or an application that does not support Advanced Format.
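To illustrate why striping across concatenated RAID groups spreads I/O over many drives, the following is a minimal conceptual sketch of how consecutive logical blocks could be distributed round-robin across the groups. The parameter names and the exact layout are assumptions for illustration only; they are not the documented internal format of a WSV.

```python
# Conceptual sketch: distributing logical blocks of a wide-striped volume
# across concatenated RAID groups. Illustrative only.

def locate_block(logical_block: int, wide_stripe_blocks: int, raid_group_count: int):
    """Return (raid_group_index, block_offset_in_group) for a logical block.

    wide_stripe_blocks: blocks written to one RAID group before moving to the next
                        (a stand-in for the Wide Stripe Size).
    raid_group_count:   number of concatenated RAID groups (2 to 48 for a WSV).
    """
    stripe_index = logical_block // wide_stripe_blocks      # which stripe row
    offset_in_stripe = logical_block % wide_stripe_blocks   # position inside it
    raid_group_index = stripe_index % raid_group_count      # round-robin group choice
    row_in_group = stripe_index // raid_group_count         # complete rows already placed
    return raid_group_index, row_in_group * wide_stripe_blocks + offset_in_stripe

if __name__ == "__main__":
    # With 4 RAID groups, consecutive stripes land on different groups,
    # so both sequential and random I/O touch many drives at once.
    for lba in range(0, 64, 8):
        print(lba, locate_block(lba, wide_stripe_blocks=8, raid_group_count=4))
```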
User Access Management
Account Management
The ETERNUS DX allocates roles and access authority when a user account is created, and sets which functions
can be used depending on the user privileges.
Since the authorized functions of the storage administrator are classified according to the usage and only minimum privileges are given to the administrator, security is improved and operational mistakes and management
hours can be reduced.
Figure 32 Account Management
Up to 60 user accounts can be set in the ETERNUS DX.
Up to 16 users can be logged in at the same time using ETERNUS Web GUI or ETERNUS CLI.
The menu that is displayed after logging on varies depending on the role that is added to a user account.
Internal Authentication and External Authentication are available as logon authentication methods. RADIUS authentication can be used for External Authentication.
The user authentication functions described in this section can be used when performing storage management and operation management, and when accessing the ETERNUS DX via the operation management LAN.
● Internal Authentication
Internal Authentication is performed using the authentication function of the ETERNUS DX.
The following authentication functions are available when the ETERNUS DX is connected via a LAN using operation management software.
• User account authentication
User account authentication uses the user account information that is registered in the ETERNUS DX to verify
user logins. Up to 60 user accounts can be set to access the ETERNUS DX.
• SSL authentication
ETERNUS Web GUI and SMI-S support HTTPS connections using SSL/TLS. Since data on the network is encrypted,
security can be ensured. Server certifications that are required for connection are automatically created in the
ETERNUS DX.
• SSH authentication
Since ETERNUS CLI supports SSH connections, data that is sent or received on the network can be encrypted.
The server key for SSH varies depending on the ETERNUS DX. When the server certification is updated, the server key is updated as well.
Password authentication and client public key authentication are available as authentication methods for SSH
connections.
The supported client public keys are shown below.
Table 16 Client Public Key (SSH Authentication)
Type of public key          Complexity (bits)
IETF style DSA for SSH v2   1024, 2048, and 4096
IETF style RSA for SSH v2   1024, 2048, and 4096
● External Authentication
External Authentication uses the user account information (user name, password, and role name) that is registered on an external authentication server. RADIUS authentication supports ETERNUS Web GUI and ETERNUS CLI login authentication for the ETERNUS DX, and authentication for connections to the ETERNUS DX through a LAN using operation management software.
• RADIUS authentication
RADIUS authentication uses the Remote Authentication Dial-In User Service (RADIUS) protocol to consolidate authentication information for remote access.
An authentication request is sent to a RADIUS authentication server that is outside the ETERNUS system network. The authentication method can be selected from CHAP and PAP. Two RADIUS authentication servers (the primary server and the secondary server) can be connected to balance user account information and to create a redundant configuration. When the primary RADIUS server fails to authenticate, the secondary RADIUS server attempts to authenticate.
User roles are specified in the Vendor Specific Attribute (VSA) of the Access-Accept response from the server.
The following table shows the syntax of the VSA-based account role on the RADIUS server.
Item                 Size (octets)   Value              Description
Type                 1               26                 Attribute number for the Vendor Specific Attribute
Length               1               7 or more          Attribute size (calculated by server)
Vendor length        1               2 or more          Attribute size described after Vendor type (calculated by server)
Attribute-Specific   1 or more       ASCII characters   One or more assignable role names for successfully authenticated users (*1)
*1: The server-side role names must be identical to the role names of the ETERNUS DX. Match the letter case when entering the role names.
[Example] RoleName0
• If RADIUS authentication fails when "Do not use Internal Authentication" has been selected for "Authentication Error Recovery" on ETERNUS Web GUI, ETERNUS CLI, or SMI-S, logging on to ETERNUS Web GUI or ETERNUS CLI will not be available.
When the setting to use Internal Authentication for errors caused by network problems is configured, Internal Authentication is performed if RADIUS authentication fails on both the primary and secondary RADIUS servers, or if at least one of these failures is due to a network error.
• So long as there is no RADIUS authentication response, the ETERNUS DX keeps retrying to authenticate the user for the entire "Timeout" period set on the "Set RADIUS Authentication (Initial)" menu. If authentication does not succeed before the "Timeout" period expires, RADIUS Authentication is considered to be a failure.
• When using RADIUS authentication, if the role that is received from the server is unknown (not set) for the
Audit Log
The ETERNUS DX can send information such as access records by the administrator and setting changes as audit logs to Syslog servers. The information that is sent includes the storage system name, the user/role, the process time, the process details, and the process results.
Audit logs are audit trail information that records the operations that are executed for the ETERNUS DX and the responses from the system. This information is required for auditing.
The audit log function enables monitoring of all operations and any unauthorized access that may affect the
system.
Syslog protocols (RFC3164 and RFC5424) are supported for audit logs.
Information that is to be sent is not saved in the ETERNUS DX. Two Syslog servers can be set as the destination servers in addition to the Syslog server that is used for event notification.
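Because audit logs are sent to the Syslog servers rather than retained in the storage system, it is worth confirming during setup that the destination server actually receives them. The following is a minimal sketch of a throwaway test listener for that check; it is an illustrative stand-in for a real Syslog server such as rsyslog, and the port number is an assumption (standard syslog uses UDP 514, which normally requires administrator privileges to bind).

```python
# Minimal sketch of a test listener for verifying that syslog messages
# (RFC3164/RFC5424 format) arrive at the destination host. Illustrative only.

import socket

LISTEN_ADDR = ("0.0.0.0", 5514)  # hypothetical test port

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(LISTEN_ADDR)
        print(f"listening for syslog datagrams on {LISTEN_ADDR} ...")
        while True:
            data, peer = sock.recvfrom(65535)
            # Each datagram is one syslog message; print it with its sender address.
            print(peer[0], data.decode("utf-8", errors="replace").rstrip())

if __name__ == "__main__":
    main()
```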
Improving Host Connectivity
Host Affinity
The host affinity function prevents data from being damaged due to inadvertent storage access. By defining a
server that can access the volume, security can be ensured when multiple servers are connected.
The host affinity can be set by associating "Host Groups", "CA Port Groups", and "LUN Groups".
Figure 35 Associating Host Groups, CA Port Groups, and LUN Groups
● Host Group
A host group is a group of hosts that have the same host interface type and that access the same LUN group. HBAs in multiple hosts can be configured in a single host group.
● CA Port Group
A CA port group is a group of the same CA type ports that are connected to a specific host group. A CA port group is configured with ports that access the same LUN group, such as ports that are used for a multipath connection to the server or for connecting to servers in a cluster configuration. A single CA port group can be connected to multiple host groups.
● LUN Group
A LUN group is a group of LUNs that can be recognized by the host, and the LUN group can be accessed from the same host group and CA port groups. A LUN group is mapping information for LUNs and volumes.
The host affinity can also be set by directly specifying the host and the CA port without creating host groups and CA port groups.
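The association of host groups, CA port groups, and LUN groups can be pictured with a small data-model sketch. The class names and sample identifiers below are illustrative assumptions, not the ETERNUS DX configuration interface; the sketch only shows how an access request is resolved to a volume when, and only when, a matching affinity exists.

```python
# Conceptual sketch of host affinity resolution. Illustrative only.

from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class LunGroup:
    # Mapping information for LUNs and volumes (LUN number -> volume name).
    lun_to_volume: Dict[int, str]

@dataclass
class Affinity:
    host_group: List[str]      # host identifiers (e.g. HBA WWNs or iSCSI names)
    ca_port_group: List[str]   # CA ports that this host group may use
    lun_group: LunGroup

def resolve(affinities: List[Affinity], host: str, ca_port: str, lun: int) -> Optional[str]:
    """Return the volume a host reaches through a CA port and LUN, or None if access is not permitted."""
    for aff in affinities:
        if host in aff.host_group and ca_port in aff.ca_port_group:
            return aff.lun_group.lun_to_volume.get(lun)
    return None  # no matching host affinity: the access is rejected

if __name__ == "__main__":
    affinities = [
        Affinity(["server-A-hba0"], ["CM#0 CA#0 Port#0"], LunGroup({0: "Volume#0", 1: "Volume#1"})),
        Affinity(["server-B-hba0"], ["CM#1 CA#0 Port#0"], LunGroup({0: "Volume#128"})),
    ]
    print(resolve(affinities, "server-A-hba0", "CM#0 CA#0 Port#0", 0))  # Volume#0
    print(resolve(affinities, "server-B-hba0", "CM#0 CA#0 Port#0", 0))  # None (denied)
```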
• Host access must be prevented when changing or deleting host affinity settings that have already been set. When adding a new LUN to the host affinity settings, it is not necessary to stop host access.
• When servers are duplicated and connected in a cluster configuration to share a single ETERNUS DX among multiple servers, cluster control software is required.
For an iSCSI interface, the iSCSI authentication function can be used when the initiator accesses the target. The
iSCSI authentication function is available for host connections and remote copying.
The Challenge Handshake Authentication Protocol (CHAP) is supported for iSCSI authentication. For CHAP Authentication, unidirectional CHAP or bidirectional CHAP can be selected. When unidirectional CHAP is used, the
target authenticates the initiator to prevent fraudulent access. When bidirectional CHAP is used, the target authenticates the initiator to prevent fraudulent access and the initiator authenticates the target to prevent impersonation.
Note that the Internet Storage Name Service (iSNS) is also supported as an iSCSI name resolution.
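The CHAP exchange mentioned above can be summarized with a short sketch of the response computation defined in RFC 1994 (MD5 over the identifier, the shared secret, and the challenge). This is a generic protocol illustration, not ETERNUS DX code, and the secret and challenge values are made up.

```python
# Illustrative sketch of the CHAP response computation (RFC 1994), which is
# the mechanism behind iSCSI CHAP authentication. Values below are made up.

import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Response = MD5(Identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

if __name__ == "__main__":
    challenge = os.urandom(16)          # sent by the authenticator (the target)
    secret = b"example-chap-secret"     # shared secret configured on both sides
    identifier = 1
    # The initiator computes the response; the target recomputes it with the same
    # secret and compares. With bidirectional CHAP the exchange is also performed
    # in the opposite direction so that the initiator verifies the target as well.
    print(chap_response(identifier, secret, challenge).hex())
```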
Environmental Burden Reduction
Eco-mode
Eco-mode is a function that reduces power consumption for disks with limited access by stopping disk rotation during specified periods or by powering off the disks.
Disk spin-up and spin-down schedules can be set for each RAID group or TPP. These schedules can also be set to
allow backup operations.
Figure 36 Eco-mode
The Eco-mode of the ETERNUS DX is a power saving function that is based on Massive Arrays of Idle Disks (MAID) technology. The operational state for a stopped disk can be selected from two modes: "stop motor" or "turn off drive power".
The disks to be controlled are SAS disks and Nearline SAS disks.
Eco-mode cannot be used for the following drives:
• Global Hot Spares (Dedicated Hot Spares are possible)
• SSDs
• Unused drives (that are not used by RAID groups)
The Eco-mode schedule cannot be specified for the following RAID groups or pools:
• RAID groups or pools in which no volumes are registered
• RAID groups or pools that are configured with SSDs
• RAID groups to which a volume with a Storage Migration path belongs
For RAID groups with the following conditions, the Eco-mode schedule can be set but the disk motors cannot be stopped or the power supply cannot be turned off:
• RAID groups in which SDPVs are registered
• RAID groups in which ODX Buffer volumes are registered
If disk access occurs while the disk motor is stopped, the disk is immediately spun up and can be accessed within
one to five minutes.
The Eco-mode function can be used with the following methods:
• Schedule control
Controls the disk motors by configuring the Eco-mode schedule on ETERNUS Web GUI or ETERNUS CLI. The operation time schedule is set and managed for each RAID group and TPP.
• External application control (software interaction control)
The disk motors are controlled for each RAID group by ETERNUS SF software. The disk motors are controlled by interacting with applications installed on the server side and responding to instructions from the applications. Applications that can be interacted with are as follows:
- ETERNUS SF Storage Cruiser
- ETERNUS SF AdvancedCopy Manager
The following hierarchical storage management software can also be linked with Eco-mode. When using the Eco-mode function with these products, an Eco-mode disk operating schedule does not need to be set. A drive in a stopped condition starts running when it is accessed.
• IBM Tivoli Storage Manager for Space Management
• IBM Tivoli Storage Manager HSM for Windows
• Symantec Veritas Storage Foundation Dynamic Storage Tiering (DST) function
The following table shows the specifications of Eco-mode.
Table 17 Eco-mode Specifications
Item                                         Description                     Remarks
Number of registrable schedules              64                              Up to 8 events (during disk operation) can be set for each schedule.
Host I/O Monitoring Interval (*1)            30 minutes (default)            The monitoring time can be set from 10 to 60 minutes. The monitoring interval setting can be changed by users with the maintenance operation privilege.
Disk Motor Spin-down Limit Count (per day)   25 (default)                    The number of times the disk is stopped can be set from 1 to 25. When it exceeds the upper limit, Eco-mode becomes unavailable and the disks keep running.
Target drive                                 SAS disks, Nearline SAS disks   SSDs are not supported.
*1: The monitoring time period that is used to check whether there has been no access to a disk for a given length of time before the disk motor is stopped.
• To set the Eco-mode schedule, use ETERNUS Web GUI, ETERNUS CLI, ETERNUS SF Storage Cruiser, or ETERNUS SF AdvancedCopy Manager. Note that schedules that are created by ETERNUS Web GUI or ETERNUS CLI and schedules that are created by ETERNUS SF Storage Cruiser or ETERNUS SF AdvancedCopy Manager cannot be shared. Make sure to use only one type of software to manage a RAID group.
• Use ETERNUS Web GUI or ETERNUS CLI to set Eco-mode for TPPs. ETERNUS SF Storage Cruiser or ETERNUS SF AdvancedCopy Manager cannot be used to set Eco-mode for TPPs and FTRPs.
• Specify the same Eco-mode schedule for the RAID groups that configure a WSV. If different Eco-mode schedules are specified, stopped disks are activated when host access is performed and the response time may increase.
• The operation time of disks varies depending on the Eco-mode schedule and the disk access.
- Access to a stopped disk outside of the scheduled operation time period causes the motor of the stopped disk to be spun up, allowing normal access in about one to five minutes. When a set time elapses since the last access to a disk, the motor of the disk is stopped.
- If a disk is activated from the stopped state more than a set number of times in a day, the Eco-mode schedule is not applied and disk motors are not stopped by Eco-mode.
(Example 1) Setting the Eco-mode schedule via ETERNUS Web GUI: the operation schedule is set as 9:00 to 21:00 and there are no accesses outside of the scheduled period. The motor starts rotating 10 minutes before the scheduled operation starts, and the disk stops 10 minutes after the scheduled operation ends.
(Example 2) Setting the Eco-mode schedule via ETERNUS Web GUI: the operation schedule is set as 9:00 to 21:00 and there are accesses outside of the scheduled period. A stopped disk becomes accessible within 1 to 5 minutes of being accessed, and stops again 10 minutes after the scheduled operation ends.
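The behavior in these examples (a scheduled operation window, spin-up on access outside the window, spin-down after the monitoring interval, and the daily spin-down limit from Table 17) can be summarized with a small sketch. The class and field names below are illustrative assumptions for explanation, not a product interface.

```python
# Conceptual sketch of the Eco-mode decision logic described above. Illustrative only.

from datetime import datetime, time, timedelta

class EcoModeState:
    def __init__(self, start: time, end: time,
                 monitoring_interval=timedelta(minutes=30),  # Table 17 default
                 spin_down_limit=25):                         # Table 17 default
        self.start, self.end = start, end
        self.monitoring_interval = monitoring_interval
        self.spin_down_limit = spin_down_limit
        self.spin_downs_today = 0   # would be incremented each time a stop occurs
        self.last_access = None

    def in_schedule(self, now: datetime) -> bool:
        t = now.time()
        if self.start <= self.end:
            return self.start <= t < self.end
        return t >= self.start or t < self.end   # window crossing midnight

    def motor_should_run(self, now: datetime) -> bool:
        if self.in_schedule(now):
            return True
        if self.spin_downs_today >= self.spin_down_limit:
            return True   # limit exceeded: Eco-mode no longer stops the disks
        if self.last_access and now - self.last_access < self.monitoring_interval:
            return True   # recently accessed outside the window: keep spinning
        return False

    def record_access(self, now: datetime) -> None:
        self.last_access = now

if __name__ == "__main__":
    eco = EcoModeState(start=time(9, 0), end=time(21, 0))
    print(eco.motor_should_run(datetime(2019, 1, 1, 12, 0)))  # True: inside the schedule
    print(eco.motor_should_run(datetime(2019, 1, 1, 23, 0)))  # False: outside, no recent access
```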
• Eco-mode schedules are executed according to the date and time that are set in the ETERNUS DX. To turn the disk motors on and off according to the schedule that is set, use the Network Time Protocol (NTP) server in the date and time settings of ETERNUS Web GUI to set automatic adjustment of the date and time.
• If the number of drives that are activated in a single drive enclosure is increased, system activation may take longer (about 1 to 5 minutes) because not all of the disks can be activated at the same time.
• Even if the disk motor is turned on and off repeatedly according to the Eco-mode schedule, the failure rate is not affected compared to the case where the motor is always on.
Power Consumption Visualization
The power consumption and the temperature of the ETERNUS DX can be visualized with a graph by using the ETERNUS SF Storage Cruiser integrated management software in a storage system environment. The ETERNUS DX collects information on power consumption and the ambient temperature in the storage system. The collected information is notified using SNMP and graphically displayed on the screens of ETERNUS SF Storage Cruiser.
Cooling efficiency can be improved by understanding local temperature rises in the data center and reviewing
the location of air-conditioning.
Understanding which drives are used at specific times, based on the access frequency of RAID groups, enables the Eco-mode schedule to be adjusted accordingly.
Operation Management/Device Monitoring
Operation Management Interface
Operation management software can be selected in the ETERNUS DX according to the environment of the user.
ETERNUS Web GUI and ETERNUS CLI are embedded in the ETERNUS DX controllers.
The setting and display functions can also be used with ETERNUS SF Web Console.
■ ETERNUS Web GUI
ETERNUS Web GUI is a program for settings and operation management that is embedded in the ETERNUS DX and accessed by using a web browser via http or https.
ETERNUS Web GUI has an easy-to-use design that makes intuitive operation possible.
The settings that are required for the ETERNUS DX initial installation can be easily performed by following the
wizard and inputting the parameters for the displayed setting items.
SSL v3 and TLS are supported for https connections. However, when using https connections, it is required to
register a server certification in advance or self-generate a server certification. Self-generated server certifications are not already certified with an official certification authority registered in web browsers. Therefore, some
web browsers will display warnings. Once a server certification is installed in a web browser, the warning will not
be displayed again.
When using ETERNUS Web GUI to manage operations, prepare a Web browser in the administration terminal.
The following table shows the supported Web browsers.
Table 18 ETERNUS Web GUI Operating Environment
Software      Guaranteed operating environment
Web browser   Microsoft Internet Explorer 9.0, 10.0 (desktop version), 11.0 (desktop version)
              Mozilla Firefox ESR 60
When using ETERNUS Web GUI to connect the ETERNUS DX, the default port number is 80 for http.
■ ETERNUS CLI
ETERNUS CLI supports Telnet or SSH connections. The ETERNUS DX can be configured and monitored using commands and command scripts.
With ETERNUS CLI, SSH v2 encrypted connections can be used. SSH server keys differ for each storage system, and must be generated by the SSH server before using SSH.
Password authentication and client public key authentication are supported as authentication methods for SSH. For details on supported client public key types, refer to "User Authentication".
■ ETERNUS SF
ETERNUS SF can manage a storage environment centered on Fujitsu storage products. An easy-to-use interface enables complicated storage environment design and setting operations, which allows easy installation of a storage system without needing to have high level skills.
ETERNUS SF ensures stable operation by managing the entire storage environment.
■ SMI-S
Storage systems can be managed collectively using a general storage management application that supports Version 1.6 of the Storage Management Initiative Specification (SMI-S). SMI-S is a storage management interface standard of the Storage Networking Industry Association (SNIA). SMI-S can monitor the ETERNUS DX status and change configurations such as RAID groups, volumes, and Advanced Copy (EC/OPC/SnapOPC/SnapOPC+).
Performance Information Management
The ETERNUS DX supports a function that collects and displays the performance data of the storage system via ETERNUS Web GUI or ETERNUS CLI. The collected performance information shows the operation status and load status of the ETERNUS DX and can be used to optimize the system configuration.
ETERNUS SF Storage Cruiser can be used to easily understand the operation status and load status of the ETERNUS DX by graphically displaying the collected information on the GUI. ETERNUS SF Storage Cruiser can also monitor performance thresholds and retain performance information for a duration that the user specifies.
When performance monitoring is operated from ETERNUS SF Storage Cruiser, ETERNUS Web GUI, or ETERNUS CLI, performance information of each type is obtained at specified intervals (30 to 300 seconds) in the ETERNUS DX.
The performance information can be displayed, and stored and exported in text file format, from ETERNUS Web GUI. The performance information that can be obtained is indicated as follows.
● Volume Performance Information for Host I/O
• Read IOPS (the read count per second)
• Write IOPS (the write count per second)
• Read Throughput (the amount of transferred data that is read per second)
• Write Throughput (the amount of transferred data that is written per second)
• Read Response Time (the average response time per host I/O during a read)
• Write Response Time (the average response time per host I/O during a write)
• Read Process Time (the average process time in the storage system per host I/O during a read)
• Write Process Time (the average process time in the storage system per host I/O during a write)
• Read Cache Hit Rate (cache hit rate for reads)
• Write Cache Hit Rate (cache hit rate for writes)
• Prefetch Cache Hit Rate (cache hit rate for prefetch)
● Volume Performance Information for the Advanced Copy Function
• Read IOPS (the read count per second)
• Write IOPS (the write count per second)
• Read Throughput (the amount of transferred data that is read per second)
• Write Throughput (the amount of transferred data that is written per second)
• Read Cache Hit Rate (cache hit rate for reads)
• Write Cache Hit Rate (cache hit rate for writes)
• Prefetch Cache Hit Rate (cache hit rate for prefetch)
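As a rough illustration of how counters like these can be used for threshold monitoring, the following sketch flags sampling intervals whose response times exceed a limit. The sample data structure and threshold value are assumptions for illustration; the actual export format of ETERNUS Web GUI and the threshold handling of ETERNUS SF Storage Cruiser are not described by this sketch.

```python
# Illustrative sketch of threshold checking over performance samples. Illustrative only.

from dataclasses import dataclass
from typing import List

@dataclass
class VolumeSample:
    timestamp: str
    read_iops: float
    write_iops: float
    read_response_ms: float   # Read Response Time
    write_response_ms: float  # Write Response Time

def over_threshold(samples: List[VolumeSample], limit_ms: float) -> List[VolumeSample]:
    """Return the samples whose read or write response time exceeds the limit."""
    return [s for s in samples
            if s.read_response_ms > limit_ms or s.write_response_ms > limit_ms]

if __name__ == "__main__":
    samples = [
        VolumeSample("10:00:00", 1200, 300, 4.1, 2.0),
        VolumeSample("10:00:30", 1500, 800, 12.7, 9.8),  # a busy interval
    ]
    for s in over_threshold(samples, limit_ms=10.0):
        print(s.timestamp, s.read_response_ms, s.write_response_ms)
```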
Event Notification
When an error occurs in the ETERNUS DX, the event notification function notifies the administrator of the event information. The administrator can be informed that an error has occurred without monitoring the screen all the time.
The methods to notify an event are e-mail, SNMP Trap, syslog, remote support, and host sense.
Figure 38 Event Notification
The notification methods and levels can be set as required.
The following events are notified.
Table 19 Levels and Contents of Events That Are Notified
Level                        Level of importance                 Event contents
Error                        Maintenance is necessary            Component failure, temperature error, end of battery life (*1), rebuild/copyback, etc.
Warning                      Preventive maintenance is necessary Module warning, battery life warning (*1), etc.
Notification (information)   Device information                  Component restoration notification, user login/logout, RAID creation/deletion, storage system power on/off, firmware update, etc.
*1: Battery related events are notified only for the ETERNUS DX60 S4.
● E-Mail
When an event occurs, an e-mail is sent to the specified e-mail address.
The ETERNUS DX
supports "SMTP AUTH" and "SMTP over SSL" as user authentication. A method can be selected
from CRAM-MD5, PLAIN, LOGIN, or AUTO which automatically selects one of these methods.
● Simple Network Management Protocol (SNMP)
Using the SNMP agent function, management information is sent to the SNMP manager (network management/
monitoring server).
The ETERNUS DX supports the following SNMP specifications.
Table 20 SNMP Specifications
Item           Specification           Remarks
SNMP version   SNMP v1, v2c, v3        —
MIB            MIB II                  Only the information managed by the ETERNUS DX can be sent with the GET command. The SET command send operation is not supported.
               FibreAlliance MIB 2.2   This is a MIB that is defined for the purpose of FC-based SAN management. Only the information managed by the ETERNUS DX can be sent with the GET command. The SET command send operation is not supported.
               Unique MIB              This is a MIB in regard to the hardware configuration of the ETERNUS DX.
Trap           Unique Trap             A trap number is defined for each category (such as a component disconnection or a sensor error), and a message with a brief description of the event is provided as additional information.
● Syslog
By registering the syslog destination server in the ETERNUS DX, various events that are detected by the ETERNUS
DX are sent to the syslog server as event logs.
The ETERNUS DX supports the syslog protocol which conforms to RFC3164 and RFC5424.
● Remote Support
The errors that occur in the ETERNUS DX are notified to the remote support center. The ETERNUS DX sends additional information (logs and system configuration information) for checking the error. This shortens the time to
collect information.
Remote support has the following maintenance functions.
• Failure notice
This function reports various failures that occur in the ETERNUS DX to the remote support center. The maintenance engineer is notified of a failure immediately.
• Information transfer
This function sends information such as logs and configuration information to be used when checking a failure. This shortens the time to collect the information that is necessary to check errors.
• Firmware download
The latest firmware in the remote support center is automatically registered in the ETERNUS DX. This function ensures that the latest firmware is registered in the ETERNUS DX, and prevents known errors from occurring. Firmware can also be registered manually.
● Host Sense
The ETERNUS DX
returns host senses (sense codes) to notify specific status to the server. Detailed information
such as error contents can be obtained from the sense code.
• Note that the ETERNUS DX cannot check whether the event log is successfully sent to the syslog server. Even if a communication error occurs between the ETERNUS DX and the syslog server, event logs are not sent again. When using the syslog function (enabling the syslog function) for the first time, confirm that the syslog server has successfully received the event log of the relevant operation.
• Using the ETERNUS Multipath Driver to monitor the storage system by host senses is recommended. Sense codes that cannot be detected in a single configuration can also be reported.
Device Time Synchronization
The ETERNUS DX treats the time that is specified in the Master CM as the system standard time and distributes that time to other modules to synchronize the storage system time. The ETERNUS DX also supports a time correction function that uses the Network Time Protocol (NTP). The ETERNUS DX corrects the system time by obtaining time information from the NTP server during regular time correction.
The ETERNUS DX has a clock function and manages time information of date/time and the time zone (the region
in which the ETERNUS DX is installed). This time information is used for internal logs and for functions such as
Eco-mode and remote support.
The automatic time correction by NTP is recommended to synchronize time in the whole system.
When using the NTP, specify the NTP server or the SNTP server. The ETERNUS DX supports NTP protocol v4. The
time correction mode is Step mode (immediate correction). The time is regularly corrected every three hours
once the NTP is set.
• If an error occurs in a system that has a different date and time for each device, analyzing the cause of the error may be difficult.
• Make sure to set the date and time correctly when using Eco-mode. The stop and start processes of the disk motors do not operate according to the Eco-mode schedule if the date and time in the ETERNUS DX are not correct.
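For reference, the SNTP exchange that the NTP-based correction relies on is very compact: a 48-byte request and a 48-byte response carrying the server's transmit timestamp. The sketch below is a generic client-side illustration, not the ETERNUS DX implementation, and the server address is a placeholder.

```python
# Minimal sketch of an SNTP (simplified NTP) time query. Illustrative only.

import socket
import struct
from datetime import datetime, timezone

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "pool.ntp.org", port: int = 123, timeout: float = 5.0) -> datetime:
    # 48-byte request: LI=0, VN=4, Mode=3 (client) -> first byte 0x23
    request = b"\x23" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, port))
        response, _ = sock.recvfrom(48)
    # Transmit Timestamp (seconds part) is the unsigned 32-bit word at offset 40.
    transmit_seconds = struct.unpack("!I", response[40:44])[0]
    return datetime.fromtimestamp(transmit_seconds - NTP_EPOCH_OFFSET, tz=timezone.utc)

if __name__ == "__main__":
    print("server time:", sntp_time())
```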
A power synchronized unit detects changes in the AC power output of the Uninterruptible Power Supply (UPS) unit that is connected to the server and automatically turns the ETERNUS DX on and off.
Wake On LAN is a function that turns on the ETERNUS DX via a network. When "magic packet" data is sent from an administration terminal, the ETERNUS DX detects the packet and the power is turned on.
To perform Wake On LAN, utility software for Wake On LAN such as Systemwalker Runbook Automation is required, and settings for Wake On LAN must be performed. The MAC address of the ETERNUS DX can be checked with ETERNUS CLI.
ETERNUS Web GUI or ETERNUS CLI can be used to turn off the power of an ETERNUS DX remotely.
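For background, the "magic packet" mentioned above has a well-known format: 6 bytes of 0xFF followed by the target MAC address repeated 16 times, sent as a UDP broadcast. The sketch below is only a generic illustration of that format; the MAC address and port are placeholders, and in practice the dedicated Wake On LAN utility software described above is used.

```python
# Illustrative sketch of the Wake On LAN "magic packet" format. Illustrative only.

import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    send_magic_packet("00:11:22:33:44:55")  # hypothetical MAC address
```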
Backup (Advanced Copy)
The Advanced Copy function (high-speed copying function) enables data backup (data replication) at any point in time without stopping the operations of the ETERNUS DX.
For an ETERNUS DX backup operation, data can be replicated without placing a load on the business server. The replication process for large amounts of data can be performed by controlling the timing and business access so that data protection can be considered separately from operation processes.
An example of an Advanced Copy operation using ETERNUS SF AdvancedCopy Manager is shown below.
Figure 42 Example of Advanced Copy
Advanced Copy functions include One Point Copy (OPC), QuickOPC, SnapOPC, SnapOPC+, and Equivalent Copy
(EC).
The following table shows ETERNUS related software for controlling the Advanced Copy function.
Table 21 Control Software (Advanced Copy)
Control software                  Feature
ETERNUS Web GUI / ETERNUS CLI     The copy functions can be used without optional software.
ETERNUS SF AdvancedCopy Manager   ETERNUS SF AdvancedCopy Manager supports various OSs and ISV applications, and enables the use of all the Advanced Copy functions. This software can also be used for backups that interoperate with Oracle, SQL Server, Exchange Server, or Symfoware Server without stopping operations.
ETERNUS SF Express                ETERNUS SF Express allows easy management and backup of systems with a single product.
Table 22 List of Functions (Copy Methods)
Usage    Control software                  Type of Copy                           Number of available sessions
Backup   ETERNUS Web GUI / ETERNUS CLI     SnapOPC+                               1,024
         ETERNUS SF AdvancedCopy Manager   OPC, QuickOPC, SnapOPC, SnapOPC+, EC
         ETERNUS SF Express                SnapOPC+
A copy is executed for each LUN. With ETERNUS SF AdvancedCopy Manager, a copy can also be executed for each logical disk (which is called a partition or a volume depending on the OS).
A copy cannot be executed if another function is running in the storage system or the target volume. For details on the functions that can be executed simultaneously, refer to "Combinations of Functions That Are Available for Simultaneous Executions" (page 146).
Type of Copy
The Advanced Copy functions offer the following copy methods: "Mirror Suspend", "Background Copy", and "Copy-on-Write". The function names that are given to each method are as follows: "EC" for the "Mirror Suspend" method, "OPC" for the "Background Copy" method, and "SnapOPC" for the "Copy-on-Write" method.
When a physical copy is performed for the same area after the initial copy, OPC offers "QuickOPC", which only
performs a physical copy of the data that has been updated from the previous version. The SnapOPC+ function
only copies data that is to be updated and performs generation management of the copy source volume.
● OPC
All of the data in a volume at a specific point in time is copied to another volume in the ETERNUS DX. OPC is suitable for the following usages:
• Performing a backup
• Performing system test data replication
• Restoring backup data (restoration after replacing a drive when the copy source drive has failed)
● QuickOPC
QuickOPC copies all data as initial copy in the same way as OPC. After all of the data is copied, only updated data
(differential data) is copied. QuickOPC is suitable for the following usages:
• Creating a backup of the data that is updated in small amounts
● SnapOPC/SnapOPC+
As updates occur in the source data, SnapOPC/SnapOPC+ saves the data prior to the change to the copy destination (SDV/TPV). The data, prior to changes in the updated area, is saved to an SDP/TPP. Create an SDPV for the SDP when performing SnapOPC/SnapOPC+ with an SDV specified as the copy destination.
SnapOPC/SnapOPC+ is suitable for the following usages:
• Performing a temporary backup for tape backup
• Performing a backup of the data that is updated in small amounts (generation management is available for SnapOPC+)
• SnapOPC/SnapOPC+ operations that use an SDV/TPV as the copy destination logical volume have the following characteristics. Check the characteristics of each volume type before selecting the volume type.
Table 23 Characteristics of SnapOPC/SnapOPC+ Operations with Each Type of Copy Destination Logical Volume
Item to compare                SDV                                                                                                          TPV
Ease of operation settings     The operation setting is complex because a dedicated SDV and SDP must be set.                               The operation setting is easy because a dedicated SDV and SDP are not required.
Usage efficiency of the pool   The usage efficiency of the pool is higher because the allocated size of the physical area is small (8KB).  The usage efficiency of the pool is lower because the allocated size of the physical area is large, with a chunk size of 21MB / 42MB / 84MB / 168MB.
*1: The difference between SnapOPC and SnapOPC+ is that SnapOPC+ manages the history of updated data as
opposed to SnapOPC, which manages updated data for a single generation only. While SnapOPC manages
updated data in units per session thus saving the same data redundantly, SnapOPC+ has updated data as
history information which can provide multiple backups for multiple generations.
● EC
An EC creates data that is mirrored from the copy source to the copy destination beforehand, and then suspends the copy and handles each set of data independently.
When copying is resumed, only updated data in the copy source is copied to the copy destination. If the copy destination data has been modified, the copy source data is copied again in order to maintain equivalence between the copy source data and the copy destination data. EC is suitable for the following usages:
• Performing a backup
• Performing system test data replication
• If the SDP capacity is insufficient, a copy cannot be performed. In order to avoid this situation, an operation that notifies the operation administrator of event information according to the remaining SDP capacity is recommended. For more details on event notification, refer to "Event Notification".
• For EC, the data in the copy destination cannot be referenced or updated until the copy session is suspended. If the monitoring software (ServerView Agents) performs I/O access to the data in the copy destination, an I/O access error message is output to the server log message and other destinations. To prevent error messages from being output, consider using other monitoring methods.
For SnapOPC+, the maximum number of SnapOPC+ copy session generations can be set for a single copy source area when seven or fewer multi-copy sessions are already set.
Figure 47 Multi-Copy (Including SnapOPC+)
● Cascade Copy
A copy destination with a copy session that is set can be used as the copy source of another copy session.
A Cascade Copy is performed by combining two copy sessions.
In Figure 48, "Copy session 1" refers to a copy session in which the copy destination area is also used as the copy
source area of another copy session and "Copy session 2" refers to a copy session in which the copy source area is
also used as the copy destination area of another copy session.
For a Cascade Copy, the copy destination area for copy session 1 and the copy source area for copy session 2
must be identical or the entire copy source area for copy session 2 must be included in the copy destination area
for copy session 1.
A Cascade Copy can be performed when all of the target volumes are the same size or when the copy destination volume for copy session 2 is larger than the other volumes.
Figure 48 Cascade Copy (copy session 1 from the copy source to an intermediate volume: OPC/QuickOPC/EC; copy session 2 from the intermediate volume to the copy destination: OPC/QuickOPC/SnapOPC/SnapOPC+/EC)
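The area condition stated above (the copy source area of copy session 2 must be identical to, or entirely contained in, the copy destination area of copy session 1) can be expressed as a simple containment check. The sketch below models areas as plain (start, length) pairs purely for illustration.

```python
# Conceptual sketch of the Cascade Copy area condition. Illustrative only.

def cascade_areas_ok(dest_area_1, source_area_2) -> bool:
    """dest_area_1 / source_area_2: (start_block, block_count) tuples."""
    d_start, d_count = dest_area_1
    s_start, s_count = source_area_2
    return d_start <= s_start and s_start + s_count <= d_start + d_count

if __name__ == "__main__":
    print(cascade_areas_ok((0, 1000), (0, 1000)))   # identical areas -> True
    print(cascade_areas_ok((0, 1000), (100, 200)))  # fully contained  -> True
    print(cascade_areas_ok((0, 1000), (900, 200)))  # overlaps the end -> False
```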
Table 24 shows the supported combinations when adding a copy session to a copy destination volume where a
copy session has already been configured.
Table 24 Available Cascade Copy Combinations (When a Cascade Copy Performs Session 1 Followed by Session 2)
                     Copy session 1
Copy session 2       OPC        QuickOPC       SnapOPC   SnapOPC+   EC
OPC                  ○ (*1)     ○ (*1)         ×         ×
QuickOPC             ○ (*1)     ○ (*1) (*2)    ×         ×
SnapOPC              ○ (*1)     ○ (*1)         ×         ×
SnapOPC+             ○ (*1)     ○ (*1)         ×         ×
EC                   ○          ○              ×         ×
○: Possible, ×: Not possible
*1: When copy session 2 is an OPC, QuickOPC, SnapOPC, or SnapOPC+ session, data in the copy destination of copy session 1 is backed up. Data is not backed up in the copy source of copy session 1.
*2: This combination is supported only if the copy size in both the copy source volume and the copy destination volume is less than 2TB.
If the copy size is 2TB or larger, perform the following operations instead.
• When performing a temporary recovery
Use a Cascade Copy of QuickOPC (copy session 1) and OPC (copy session 2).
• When backing up two generations
Use a multi-copy that is configured with QuickOPC and QuickOPC.
• To suspend a Cascade Copy where session 1 is performed before session 2 and session 2 is an EC session, perform the Suspend command after the physical copy for copy session 1 is complete.
• A Cascade Copy can be performed when the copy type for copy session 1 is XCOPY or ODX. The copy destination area for XCOPY or ODX and the copy source area for copy session 2 do not have to be completely identical. For example, a Cascade Copy can be performed when the copy source area for copy session 2 is only part of the copy destination area for copy session 1.
XCOPY or ODX cannot be set as the copy type for copy session 2 in a Cascade Copy.
For more details on XCOPY and ODX, refer to "Server Linkage Functions".
• To acquire valid backup data in the copy destination for copy session 2, a physical copy must be completed or suspended in all of the copy sessions that configure the Cascade Copy. Check the copy status for copy sessions 1 and 2 when using the backup data.
However, if a Cascade Copy performs session 1 before session 2, and copy session 1 is an OPC or QuickOPC session and copy session 2 is an OPC, QuickOPC, SnapOPC, or SnapOPC+ session, the data in the copy destination for copy session 2 is available even during a physical copy.
• If copy session 1 is an EC session and copy session 2 is an OPC, QuickOPC, SnapOPC, or SnapOPC+ session, setting copy session 2 after setting copy session 1 to an equivalent or suspended state is recommended.
• When stopping an OPC or QuickOPC session for copy session 1 during a physical copy, stop copy session 2 in advance if copy session 2 is an OPC, QuickOPC, SnapOPC, or SnapOPC+ session.
• If copy session 2 is an EC session, copy session 2 does not transition to an equivalent state until the physical copy for copy session 1 is complete. For an EC session, a copy session cannot be suspended until the session transitions to an equivalent state.
• If a Cascade Copy performs session 1 before session 2, and copy session 1 is an OPC or QuickOPC session, the logical data in the intermediate volume when copy session 2 is started (the copy destination volume for copy session 1) is copied to the copy destination volume for copy session 2. A logical data copy is shown below.
Striping Size Expansion
Striping Size Expansion is a function that expands the stripe depth value by specifying the stripe depth when creating a RAID group.
Expansion of the stripe size enables advanced performance tuning. For normal operations, the default value
does not need to be changed.
An expanded stripe depth reduces the number of drives that are accessed. A reduced number of commands to
drives improves the access performance of the corresponding RAID1+0 RAID groups. However, it should be noted
that an expanded stripe depth may reduce the sequential write performance for RAID5.
The stripe depth values that are available for each RAID type are shown below.
Assigned CM
A controller that controls access is assigned to each RAID group and manages the load balance in the ETERNUS DX. The controller that controls a RAID group is called an assigned CM.
Figure 49 Assigned CMs
When the load is unbalanced between the controllers, change the assigned CM.
If an assigned controller is disconnected for any reason, the assigned CM is replaced by another controller. After
the disconnected controller is installed again and returns to normal status, this controller becomes the assigned
CM again.
Host Response
The response from the ETERNUS DX can be optimized by switching the setup information of the host response for each connected server.
The server requirements of the supported functions, LUN addressing, and the method for command responses vary depending on the connection environment such as the server OS and the driver that will be used. A function that handles differences in server requirements is supported. This function can specify the appropriate operation mode for the connection environment and convert host responses that respond to the server in the ETERNUS DX.
The host response settings can be specified for the server or the port to which the server connects. For details on the settings, refer to "Configuration Guide -Server Connection-".
Figure 50 Host Response
• If the host response settings are not set correctly, a volume may not be recognized or the desired performance may not be possible. Make sure to select appropriate host response settings.
The maximum number of LUNs that can be mapped to the LUN group varies depending on the connection environment.
Storage Migration
Storage Migration is a function that migrates volume data from an old storage system to volumes in a new storage system without using a host, in cases such as when replacing a storage system.
The migration source storage system and the migration destination ETERNUS DX are connected using FC cables. Data read from the target volume in the migration source is written to the migration destination volume in the ETERNUS DX.
Since Storage Migration is controlled by the ETERNUS DX controllers, no additional software is required.
The connection interface is FC. In addition, the direct connection and switch connection topologies are supported.
Online Storage Migration and offline Storage Migration are supported.
• Offline method
Stop the server during the data migration. Host access becomes available after the data migration to the migration destination volume is complete. Therefore, this method prevents host access from affecting the ETERNUS DX and can shorten the time of the migration. This method is suitable for cases requiring quick data migration.
• Online method
Host access becomes available after the data migration to the migration destination volume starts. Operations can be performed during the data migration. Therefore, this method can shorten the time for which operations are stopped. This method is suitable for cases requiring continued host access during the data migration.
Figure 51 Storage Migration
The Storage Migration function migrates whole volumes at the block level. A data migration can be started by
specifying a text file with migration information that is described in a dedicated format from ETERNUS Web GUI.
The path between the migration source and the migration destination is called a migration path. The maximum
number of migration volumes for each migration path is 512.
Up to 16 migration source devices can be specified and up to eight migration paths can be created for each migration source device.
The capacity of a volume that is to be specified as the migration destination area must be larger than the migration source volume capacity.
• For online Storage Migration, the capacity of the migration destination volume must be the same as the migration source volume.
• For offline Storage Migration, stop server access to both the migration source volume and the migration destination volume during a migration.
For online Storage Migration, stop server access to the migration source volume and the migration destination volume before starting a migration. In addition, do not access the migration source volume from the server during a migration.
• Online Storage Migration can be manually resumed on the following volumes after the process (of deleting a copy session) is complete.
- TPV capacity optimization is running
- An Advanced Copy session exists
• For the migration destination device, the FC port mode needs to be switched to "Initiator" and the port parameters also need to be set.
• Make sure to delete the migration path after Storage Migration is complete.
Non-disruptive Storage Migration
Non-disruptive Storage Migration is a function that migrates the volume data from an old storage system to volumes in a new storage system without stopping a business server in cases such as when replacing a storage
system.
The connection interface between the migration source storage system (external storage system) and the migration destination storage system (local storage system) is only the FC cable. In addition, the direct connection
and switch connection topologies are supported.
Figure 52 Non-disruptive Storage Migration
Table 27 Specifications for Paths and Volumes between the Local Storage System and the External Storage System
ItemQuantity
The maximum number of multipath connections between the local storage
system and the external storage system (per external storage system)
The maximum number of ports in the external storage system that can be
connected from the local storage system (per FC-Initiator port)
The maximum number of migration target volumes that can be imported to
the local storage system (*1)
The maximum number of migration target volumes in the external storage
system that can be imported simultaneously to the local storage system
*1: The number of migration target volumes that are imported to the local storage system is added to the
Connect the external storage system to the local storage system ETERNUS DX using FC cables. After the connection is established, add multipath connections between the local storage system and the business server to prepare for the data migration.
After disconnecting the multipath connection between the external storage system and the business server, use
RAID Migration to read data from the migration target volume in the external storage system and write data to
the migration destination volume in the local storage system.
"Oracle VM Manager", which is the user interface of the "Oracle VM" server environment virtualization software,
can provision the ETERNUS DX.
"ETERNUS Oracle VM Storage Connect Plug-in" is required to use this function.
The Oracle VM Storage Connect framework enables Oracle VM Manager to directly use the resources and functions of the ETERNUS DX in an Oracle VM environment. Native storage services such as Logical Unit Number (LUN) creation, deletion, expansion, and snapshots are supported.
Figure 53 Oracle VM Linkage
VMware Linkage
By linking with "VMware vSphere" (which virtualizes platforms) and "VMware vCenter Server" (which supports integrated management of VMware vSphere), the resources of the
performance can be improved.
vStorage API for Storage Awareness (VASA) is an API that enables vCenter Server to link with the storage system
and obtain storage system information. With VMware, VASA integrates the virtual infrastructure of the storage,
and enhances the Distributed Resource Scheduling (DRS) function and the troubleshooting efficiency.
ETERNUS VASA Provider is required to use the VASA function.
ETERNUS VASA Provider obtains and monitors information from the
SF Storage Cruiser.
• Profile-Driven Storage
The Profile-Driven Storage function classifies volumes according to the service level in order to allocate virtual machines with the most suitable volumes.
• Distributed Resource Scheduler (Storage DRS)
The Storage DRS function moves original data in virtual machines to the most suitable storage area according to the access volume. Storage DRS balances the loads on multiple physical servers in order to eliminate the need for performance management on each virtual machine.
■ VMware VAAI
vStorage APIs for Array Integration (VAAI) are APIs that improve system performance and scalability by using the
storage system resources more effectively.
The ETERNUS DX supports the following features.
• Full Copy (XCOPY)
Data copying processes can be performed in the ETERNUS DX without the use of a server, such as when replicating or migrating a virtual machine. With Full Copy (XCOPY), the load on the servers is reduced and the system performance is improved.
• Block Zeroing
When allocating storage areas to create new virtual machines, it is necessary to zero out these storage areas for the initialization process. This process was previously performed on the server side. By performing this process on the ETERNUS DX side instead, the load on the servers is reduced and the dynamic capacity allocation (provisioning) of the virtual machines is accelerated.
• Hardware Assisted Locking
This control function enables the use of smaller blocks that are stored in the ETERNUS DX for exclusive control of specific storage areas.
Compared to LUN (logical volume) level control that is implemented in "VMware vSphere", enabling access control in block units minimizes the storage areas that have limited access using exclusive control and improves the operational efficiency of virtual machines.
■ VMware vCenter Server
• vCenter linkage
Various information of the ETERNUS DX can be displayed on vSphere Web Client by expanding the user interface of VMware Web Client. Because storage side information is more visible, integrated management of the infrastructure under a virtual environment can be realized and usability can be improved.
ETERNUS vCenter Plug-in is required to use this function.
■ Veeam Storage Integration

The operability and efficiency of virtual machine backups in virtual environments (VMware) are improved by using the ETERNUS DX storage snapshot integration with Veeam Backup & Replication software.
Veeam Storage Integration is available for the ETERNUS DX60 S4.
• The controller firmware version of the ETERNUS DX must be V10L86 or later.
• The Veeam Storage Integration license must be obtained and registered in the ETERNUS DX.
• iSCSI and FC host interfaces are supported in Veeam Storage Integration for the connection between backup proxies and the ETERNUS DX.
• To connect a Backup Proxy with the ETERNUS DX via FC, the host affinity settings must be configured for the Backup Proxy using ETERNUS CLI. For more details, refer to "ETERNUS CLI User's Guide".
• To enable the ETERNUS DX storage snapshot integration with Veeam Backup & Replication, FUJITSU Plug-In for Veeam Backup & Replication must be installed on the Veeam backup server.
• If a volume has several snapshot generations and these snapshots have been created with different resolutions, only the oldest snapshot generation can be deleted.
• The following volumes cannot be managed or operated by Veeam Backup & Replication:
  - Volumes with Advanced Copy sessions other than SnapOPC+ sessions
  - Volumes with SnapOPC+ sessions created by ETERNUS SF AdvancedCopy Manager
• Veeam Backup & Replication jobs or operations may fail during a RAID migration or a Thin Provisioning Volume balancing.
• SnapOPC+ is used for Veeam Storage Integration.
  Thin Provisioning Volumes (TPVs) are used as SnapOPC+ copy destination volumes.
  Configure an appropriate maximum pool capacity for the Thin Provisioning function by taking into consideration the total capacity of the volumes used for Veeam Storage Integration and the number of snapshot generations. For more details about the maximum pool capacity setting, refer to "Thin Provisioning Pool Management" in "ETERNUS Web GUI User's Guide".
  Guideline for the maximum pool capacity for the Thin Provisioning function (a calculation sketch follows Table 28 below):
  Maximum pool capacity ≥ total capacity of TPVs + total capacity of volumes for Veeam Storage Integration × (number of snapshot generations + 1)
• It is not recommended to use multiple Veeam Backup & Replication instances to manage a single ETERNUS DX. In such a configuration, an error might occur for jobs that conflict with each other when they are executed from multiple Veeam Backup & Replication instances.
• Veeam Storage Integration supports the following volumes.

Table 28 Volume Types That Can Be Used with Veeam Storage Integration
Volume type    Copy source    Copy destination
Standard       ○              ×
WSV            ○              ×
TPV            ○              ○
SDV            ×              ×
SDPV           ×              ×

○: Supported  ×: Not supported
Copy destination TPVs are automatically created when snapshots are created with Veeam Backup & Replication.
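As a rough illustration of the pool capacity guideline above, the following Python sketch computes the recommended maximum pool capacity from hypothetical volume sizes; the function name and the example capacities are assumptions for illustration only and do not come from this document.

```python
# Illustrative sketch of the Thin Provisioning pool capacity guideline for
# Veeam Storage Integration (example values only, not product defaults):
#   maximum pool capacity >= total capacity of TPVs
#       + total capacity of volumes for Veeam Storage Integration
#           * (number of snapshot generations + 1)

def required_pool_capacity_tib(existing_tpv_tib: float,
                               veeam_volume_tib: float,
                               snapshot_generations: int) -> float:
    """Return the minimum recommended maximum pool capacity in TiB."""
    return existing_tpv_tib + veeam_volume_tib * (snapshot_generations + 1)

if __name__ == "__main__":
    # Hypothetical example: 10 TiB of ordinary TPVs, 4 TiB of volumes backed
    # up through Veeam Storage Integration, and 7 snapshot generations.
    capacity = required_pool_capacity_tib(10.0, 4.0, 7)
    print(f"Set the maximum pool capacity to at least {capacity:.1f} TiB")
```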
Microsoft Linkage

The ETERNUS DX supports the following functions in Windows Server.
• Offloaded Data Transfer (ODX)
The ODX function of Windows Server 2012 or later offloads the processing load for copying and transferring files from the CPU of the server to the storage system.
• Thin Provisioning Space Reclamation
The Thin Provisioning Space Reclamation function of Windows Server 2012 or later automatically releases areas in the storage system that are no longer used by the OS or applications. A notification function for the host is provided when the amount of allocated blocks of the TPV reaches the threshold.
• Hyper-V
Hyper-V is virtualization software for Windows Server.
By using the Hyper-V virtual Fibre Channel, a guest OS can directly access the SAN environment. The volumes in the ETERNUS DX can be directly recognized and mounted from the guest OS.
• Volume Shadow Copy Service (VSS)
VSS works in combination with backup software and server applications that are compatible with Windows Server VSS, while online backups are performed via the Advanced Copy function of the ETERNUS DX.
ETERNUS VSS Hardware Provider is required to use this function.
SnapOPC+ and QuickOPC can be used as the copy method.
To use the ODX function, the controller firmware version of the ETERNUS DX must be V10L80-2000 or later.
■ System Center Virtual Machine Manager (SCVMM)
System Center is a platform to manage operations of data centers and clouds. This platform also provides an
integrated tool set for the management of applications and services.
SCVMM is a component of System Center 2012 or later that performs integrated management of virtualized environments. The
ETERNUS DX can be managed from SCVMM by using the SMI-S functions of the ETERNUS DX.
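SMI-S is a CIM/WBEM-based management interface, so any WBEM client can query an SMI-S provider. As a rough, hedged illustration of what an SMI-S query looks like, the Python sketch below uses the pywbem library to list the registered SMI-S profiles of a provider; the address, port, credentials, and namespace are placeholder assumptions rather than ETERNUS-specific values.

```python
# Hedged sketch: query an SMI-S (CIM/WBEM) provider for its registered
# profiles. Address, credentials, and namespace are placeholders.
import pywbem

conn = pywbem.WBEMConnection(
    "https://192.0.2.10:5989",           # SMI-S provider URL (placeholder)
    creds=("smis-user", "password"),     # placeholder credentials
    default_namespace="interop",         # standard SMI-S interop namespace
    no_verification=True,                # skip TLS verification for the demo
)

# CIM_RegisteredProfile advertises which SMI-S profiles (for example Array
# or Block Services) the provider implements.
for profile in conn.EnumerateInstances("CIM_RegisteredProfile"):
    print(profile["RegisteredName"], profile["RegisteredVersion"])
```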
OpenStack Linkage

ETERNUS OpenStack VolumeDriver is a program that supports linkage between the ETERNUS DX and OpenStack.
By using the VolumeDriver for the ETERNUS DX, the ETERNUS DX can be used as block storage for Cinder. Volumes can be created in the ETERNUS DX and assigned to VM instances via the OpenStack standard interface (Horizon).
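To make the linkage concrete, here is a minimal sketch that creates a volume through the OpenStack Block Storage (Cinder) API using the openstacksdk Python library. The cloud name, volume name, and size are illustrative assumptions, and it presumes the Cinder backend has already been configured to use the ETERNUS DX volume driver.

```python
# Minimal sketch: create a Cinder volume with openstacksdk.
# Assumes a cloud entry named "mycloud" exists in clouds.yaml and that the
# Cinder backend has been configured to use the ETERNUS DX volume driver.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

# Create a 10 GB volume; Cinder provisions it on the configured backend
# (the ETERNUS DX when the ETERNUS OpenStack VolumeDriver is in use).
volume = conn.block_storage.create_volume(name="demo-volume", size=10)

# Wait until the backend reports the volume as available before using it.
volume = conn.block_storage.wait_for_status(volume, status="available")
print(volume.id, volume.status)
```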
Logical Volume Manager (LVM)

The Logical Volume Manager (LVM) is a management function that groups the storage areas of multiple drives and partitions and manages them as one logical drive. Drives can be added and logical volumes can be expanded without stopping the system. This function can be used on UNIX OSs (including Linux).
LVM has a snapshot function. This function captures the data of any logical volume as a snapshot and saves the snapshot as a separate logical volume.
To use LUNs in the ETERNUS DX to configure an LVM, register the LUNs in the ETERNUS DX as physical volumes.

Figure 57 Logical Volume Manager (LVM)
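As an illustration of that registration flow, the following Python sketch wraps the standard Linux LVM commands (pvcreate, vgcreate, lvcreate) to turn two multipath devices presented by the storage system into one logical volume. The device paths and the volume group and logical volume names are hypothetical examples, not values from this document.

```python
# Sketch: build an LVM logical volume on top of LUNs presented by the array.
# The device paths and names below are illustrative assumptions.
import subprocess

LUNS = ["/dev/mapper/mpatha", "/dev/mapper/mpathb"]  # LUNs seen by the host
VG_NAME = "vg_eternus"
LV_NAME = "lv_data"

def run(cmd: list[str]) -> None:
    """Run a command and fail loudly if it returns a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Register each LUN as an LVM physical volume.
for lun in LUNS:
    run(["pvcreate", lun])

# 2. Group the physical volumes into one volume group.
run(["vgcreate", VG_NAME] + LUNS)

# 3. Create a logical volume that uses all free space in the volume group.
run(["lvcreate", "-l", "100%FREE", "-n", LV_NAME, VG_NAME])
```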
Smart Setup Wizard

The Smart Setup Wizard simplifies the creation of Thin Provisioning Pools and the configuration of host affinity for configurations enabled with Thin Provisioning.
For the configuration procedure using the Smart Setup Wizard, refer to "Configuration Guide (Basic)".
• If a Thin Provisioning Pool has not been created, the Thin Provisioning Pool configuration is automatically determined based on the type and number of drives installed in the ETERNUS DX. The priority for selecting drive types is as follows:
  SSD > SAS > Nearline SAS
  If multiple drive types exist, the drive type with the highest priority is selected to create a Thin Provisioning Pool (a selection sketch follows this list). To create another Thin Provisioning Pool with the unselected drive types, this wizard cannot be used. Use the dedicated function provided by this storage system to create a Thin Provisioning Pool.
• The RAID levels and the number of drives for the RAID groups that configure the Thin Provisioning Pool are as follows.

  Drive type              RAID level    Number of drives
  SSD                     RAID5         5 to 48
  SAS and Nearline SAS    RAID6         7 to 48
• A Global Hot Spare is registered for each Thin Provisioning Pool.
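The following Python sketch models the drive-type priority and RAID-level rules described above. It is a simplified illustration of the documented behavior, not the wizard's actual implementation, and the inventory dictionary is a hypothetical example.

```python
# Simplified model of the Smart Setup Wizard's documented selection rules:
# drive-type priority SSD > SAS > Nearline SAS, RAID5 for SSD pools and
# RAID6 for SAS/Nearline SAS pools, with the drive-count ranges shown above.
PRIORITY = ["SSD", "SAS", "Nearline SAS"]          # highest priority first
RULES = {
    "SSD":          {"raid_level": "RAID5", "min_drives": 5, "max_drives": 48},
    "SAS":          {"raid_level": "RAID6", "min_drives": 7, "max_drives": 48},
    "Nearline SAS": {"raid_level": "RAID6", "min_drives": 7, "max_drives": 48},
}

def choose_pool_config(installed: dict[str, int]) -> dict | None:
    """Pick the drive type and RAID level for the Thin Provisioning Pool."""
    for drive_type in PRIORITY:
        count = installed.get(drive_type, 0)
        rule = RULES[drive_type]
        if count >= rule["min_drives"]:
            return {
                "drive_type": drive_type,
                "raid_level": rule["raid_level"],
                "drives_used": min(count, rule["max_drives"]),
            }
    return None  # no drive type has enough drives to build a pool

if __name__ == "__main__":
    # Hypothetical inventory: 12 SSDs and 24 SAS disks installed.
    print(choose_pool_config({"SSD": 12, "SAS": 24}))
```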
The following shows an example of creating a Thin Provisioning Pool using the Smart Setup Wizard.
● For SSDs
RAID groups are created with RAID5, which has high storage efficiency.
Table 29 shows a guideline for the number of drives and user capacities when 1.92TB SSDs are installed and
Figure 58 shows an example RAID configuration.
Table 29 Guideline for the Number of Drives and User Capacities (When 1.92TB SSDs Are Installed)
Number of installed drives    RAID configuration that is to be created    Capacity of the user data area (equivalent number of drives)
4 or less                     RAID groups cannot be created               —
5                             RAID5 × 1
6                             RAID5 × 1
7                             RAID5 × 1
Figure 58 RAID Configuration Example (When 12 SSDs Are Installed)
*1: The capacity of the user data area is equivalent to four drives.
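For orientation, the sketch below shows the general parity arithmetic behind the "equivalent number of drives" figures: a RAID5 group stores user data on all but one drive and a RAID6 group on all but two. Wizard-created configurations also reserve a Global Hot Spare and may split drives into several RAID groups, so the example numbers here are illustrative rather than values taken from Table 29 or Table 30.

```python
# General parity arithmetic for the "equivalent number of drives" of user
# data in a single RAID group (illustrative only; the Smart Setup Wizard
# also reserves a Global Hot Spare and may create several RAID groups).
PARITY_DRIVES = {"RAID5": 1, "RAID6": 2}

def user_data_drive_equivalent(raid_level: str, drives_in_group: int) -> int:
    """Drives' worth of user data stored by one RAID group."""
    return drives_in_group - PARITY_DRIVES[raid_level]

if __name__ == "__main__":
    # Example: a RAID5 group of 5 drives and a RAID6 group of 7 drives.
    print(user_data_drive_equivalent("RAID5", 5))  # -> 4
    print(user_data_drive_equivalent("RAID6", 7))  # -> 5
```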
● For SAS Disks and Nearline SAS Disks
RAID groups are created with RAID6, which has high storage efficiency.
Table 30 shows a guideline for the number of drives and user capacities when 1.2TB SAS disks are installed and
Figure 59 shows an example RAID configuration.
Table 30 Guideline for the Number of Drives and User Capacities (When 1.2TB SAS Disks Are Installed)