This document provides information about the system hardware components and the mechanical and
environmental specifications for the Hitachi Virtual Storage Platform F700 all-flash array.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and
recording, or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or
Hitachi Vantara Corporation (collectively “Hitachi”). Licensee may make copies of the Materials provided that any such copy is: (i) created as an
essential step in utilization of the Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not
make any other copies of the Materials. “Materials” mean text, data, photographs, graphics, audio, video and documents.
Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials
contain the most current information available at the time of publication.
Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for
information about feature and product availability, or contact Hitachi Vantara Corporation at https://support.hitachivantara.com/en_us/contact-us.html.
Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of
Hitachi products is governed by the terms of your agreements with Hitachi Vantara Corporation.
By using this software, you agree that you are responsible for:
1. Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other
individuals; and
2. Verifying that your data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.
Notice on Export Controls. The technical data and technology inherent in this Document may be subject to U.S. export control laws, including
the U.S. Export Administration Act and its associated regulations, and may be subject to export or import regulations in other countries. Reader
agrees to comply strictly with all such regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or
import the Document and any Compliant Products.
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries.
AIX, AS/400e, DB2, Domino, DS6000, DS8000, Enterprise Storage Server, eServer, FICON, FlashCopy, IBM, Lotus, MVS, OS/390, PowerPC, RS/6000,
S/390, System z9, System z10, Tivoli, z/OS, z9, z10, z13, z/VM, and z/VSE are registered trademarks or trademarks of International Business
Machines Corporation.
Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet Explorer logo, Microsoft, the Microsoft Corporate Logo, MS-DOS,
Outlook, PowerPoint, SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual Studio, Windows, the Windows logo,
Windows Azure, Windows PowerShell, Windows Server, the Windows start button, and Windows Vista are registered trademarks or trademarks
of Microsoft Corporation. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.
iPad is a trademark of Apple Inc., registered in the U.S. and other countries.
All other trademarks, service marks, and company names in this document or website are properties of their respective owners.
This guide describes the hardware features and specifications of the Hitachi Virtual Storage Platform F700.
Intended audience
This document is intended for Hitachi Vantara representatives, system administrators, and authorized service providers who install, configure, and operate the VSP Fx00 models.
Readers of this document should be familiar with the following:
■ Data processing and RAID storage systems and their basic functions
■ RAID storage system hardware components and operational specifications
UEFI Development Kit 2010
This product includes UEFI Development Kit 2010 written by the UEFI Open Source
Community. For more information, see the UEFI Development Kit website:
Redistribution and use in source and binary forms, with or without modification, are
permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of
conditions and the following disclaimer in the documentation and/or other materials
provided with the distribution.
Neither the name of the Intel Corporation nor the names of its contributors might be
used to endorse or promote products derived from this software without specific prior
written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS”
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Product version
This document revision applies to VSP F700 firmware 88-02-0x or later.
Release notes
Read the release notes before installing and using this product. They may contain requirements or restrictions that are not fully described in this document, as well as updates or corrections to this document. Release notes are available on Hitachi Vantara Support Connect: https://knowledge.hitachivantara.com/Documents.
Changes in this revision
■ Added support for 15 TB SSD
Document conventions
This document uses the following storage system terminology conventions:
Convention | Description
VSP Fx00 models | Refers to all of the following models, unless otherwise noted.
Product user documentation is available on Hitachi Vantara Support Connect: https://
knowledge.hitachivantara.com/Documents. Check this site for the most current
documentation, including important updates that may have been made after the release
of the product.
Getting help
Hitachi Vantara Support Connect is the destination for technical support of products and
solutions sold by Hitachi Vantara. To contact technical support, log on to Hitachi Vantara
Support Connect for contact information: https://support.hitachivantara.com/en_us/
contact-us.html.
Hitachi Vantara Community is a global online community for Hitachi Vantara customers, partners, independent software vendors, employees, and prospects. It is the destination to get answers, discover insights, and make connections. Join the conversation today! Go to community.hitachivantara.com, register, and complete your profile.
To provide feedback on this document, send your comments to doc.comments@hitachivantara.com. Include the document title and number, including the revision level (for example, -07), and refer to specific sections and paragraphs whenever possible. All comments become the property of Hitachi Vantara Corporation.
The Hitachi Virtual Storage Platform F700 is a versatile, modular, rack-mountable all-flash array storage system equipped with drive boxes, supporting flash module drives, and scaled for various storage capacity configurations. To deliver consistently low-latency host response times and the highest IOPS performance across all host connection ports, conventional hard-disk drives are not supported in an all-flash configuration.
The storage systems provide high-performance operations by using dual controllers with high-speed processors, dual in-line cache memory modules (DIMMs), cache flash memory (CFM), a battery, fans, and ports to connect iSCSI and Fibre Channel I/O modules. Each controller has an Ethernet connection for out-of-band management. If the data path through one controller fails, all data drives remain available to hosts using a redundant data path through the other controller.
For reliability, essential hardware components are implemented in a redundant configuration so that the storage system can remain operational if a component fails. Adding and replacing components, along with firmware upgrades, can be conducted while the storage system is active and without interruption of data availability to the hosts. A hot spare drive can be configured to replace a failed data drive automatically, securing the fault-tolerant integrity of the logical drives. Self-contained, hardware-based RAID logical drives provide maximum performance in compact external enclosures.
Block configuration
A storage system configured for block-level storage provides the ability to access and provision raw storage volumes using protocols such as Fibre Channel and iSCSI.
A block configuration consists of the following:
■ Two controllers
■ One or more drive trays
■ Optional service processor (SVP)
Features
All storage systems are highly reliable and versatile, and they can scale their performance by adding more drive chassis and data drives. Depending on the system configuration, the drive chassis offerings support SAS-interface solid-state and flash module drives.
The storage systems are equipped with dual controllers for communicating with a data host.
Each controller includes internal components such as a processor, dual in-line cache memory modules (DIMMs), cache flash memory (CFM), a battery, and fans. The controller has an Ethernet connection for out-of-band management using Hitachi Device Manager - Storage Navigator. If the data path through one controller fails, all drives remain available to data hosts using a redundant data path through the other controller. The controller is equipped with LED indicators for monitoring its operating conditions and indicating when a component might need to be replaced.
CBL controller chassis
The controller chassis houses controllers, backup fan modules, and power supplies. The chassis also includes specific functional LEDs located on the front and rear of the controller to provide its operating status.
The following table lists the controller board specifications.

Component | VSP F700
Chassis (4U) | DW850-CBL
Controller board | DW-F850-CTLM
Number of DIMM slots | 8
Cache memory capacity | 64 GiB to 256 GiB
Data encryption | N/A
CBL controller front panel bezel LEDs
The following table describes the definitions of the CBL controller front panel bezel LEDs.

Note: When System Option Mode 1097 is set to ON, the WARNING LED does not blink, even if the following failure service information messages (SIM) are issued: 452xxx, 462xxx, 3077xx, 4100xx, and 410100. The LED might turn off during user maintenance.

4 | ALARM LED | Off: Normal operation. Red: Processor failure (system might be down). For assistance, contact customer support: https://support.hitachivantara.com/en_us/contact-us.html.

Note: Removing a controller can cause the POWER, READY, WARNING, and ALARM LEDs on the front panel to turn off. These LEDs return to the on status after the storage system recovers from the controller replacement.
CBL controller front panel LEDs (without bezel)
The following table describes the definitions of the CBL controller front panel LEDs.

Note: When System Option Mode 1097 is set to ON, the WARNING LED does not blink, even if the following failure service information messages (SIM) are issued: 452xxx, 462xxx, 3077xx, 4100xx, and 410100.

Controller 2 (top).

Number | Item | Description
5 | BACKUP LED | Green: Power restoration in progress following a power outage. Fast blink green: Restoring. Slow blink green: Restoring, or sequential shutdown in progress.
6 | Cache flash memory | N/A
7 | ALM LED (for cache flash memory) | Red: Cache flash memory can be removed.
8 | CTL ALM LED | Red: Controller can be removed. Blinking red: Failure with the power supply unit of the controller. Blinks red three times: Both batteries failed, or preventive maintenance replacement of the batteries can be performed.
CBL controller rear panel LEDs
The following table describes the definitions of the CBL controller rear panel LEDs.

Off: Battery is not mounted, a battery-mounting failure occurred, or firmware is being upgraded. Off is the normal status for configurations without batteries (for example, BKMF-10 and BKMF-20).

1 | Power supply unit
2 | Front end module
3 | Back end module
4 | LAN blade

CBL controller power supply unit LEDs and connectors
The following table lists the definitions of the CBL controller power supply unit LEDs and connectors.
The controllers are equipped with specific interfaces for connecting, powering, configuring, and managing the storage system. The component LEDs display the operating status of the storage system.
Front end modules
The following front end modules are available for the controllers. The LEDs display the
operating status of the module.
The drive tray contains data drives, power supplies, fans, and status LEDs. Each drive tray provides interfaces for connecting to controllers and other drive trays. The all-flash storage arrays have various fixed storage capacity configurations with flash storage devices. To deliver consistently low-latency host response times and the highest IOPS performance across all host connection ports, conventional hard disk drives (HDD) are not included or configurable with all-flash arrays.
Small form-factor drive tray (DBS)
The following describes the physical specifications of the small form-factor drive tray.

Slow blink indicates the FMD is in the process of startup. When powered, the LED blinks for about two to five minutes until the startup processing is complete.

ALM LED | Red: Drive stopped due to a failure and can be replaced.
Note: ACT indicator is only
printed on some types of
FMDs.
The VSP Fx00 models include an optional, separate 1U service processor (SVP) dedicated to hosting an element manager (Storage Navigator). The SVP operates independently from the CPU of the storage system and the operating system, and provides out-of-band configuration and management of the storage system. The SVP also monitors and collects performance data for key components of the storage system to enable diagnostic testing and analysis for customer support.
The SVP is also available as a 64-bit software application provided by Hitachi Vantara. For the latest interoperability updates and details, see the SVP (Service Processor) OS and Hypervisor support for Gx00, Fx00 report at https://support.hitachivantara.com/en_us/interoperability.html.
Service Processor (Windows 10 Enterprise) hardware
specifications
The following table lists the hardware specifications for the service processor (Windows 10 Enterprise) provided by Hitachi Vantara.

Item | Specification
Dimensions | Height: 1.7 inches (43 mm); Width: 17.2 inches (437 mm); Depth: 9.8 inches (249 mm); Weight: 10 lbs (4.5 kg)
Processor | Intel Pentium N3710, 4C/4 threads, 1.6 GHz, 2M cache, 6 W
Memory | 2 x 4 GB DDR3 1600 MHz
Storage media | 1 TB 5400 RPM SATA HDD
LAN/Network interface card | 1-GbE x 4 ports (on-board NIC)

Two ports connect to the storage system controllers (one port for each controller).
■ One port connects to the IP network of the user.
■ One port connects to a user-supplied management console PC.
Note: This product is also designed for IT power distribution systems with
phase-to-phase voltage.
Service processor description
The SVP is not supported in high-temperature environments. Do not operate it in any location with temperatures above 40°C (104° Fahrenheit).
Three of the four RJ-45 ports (which connect to the controllers and the IP network) are configured as a bridge. The SVP can be addressed using the default IP address 192.168.0.15.
In the unlikely event you cannot connect to the SVP using the default IP address, use the following emergency login URL: http://<default SVP IP address>/dev/storage/<model number><system serial number>/emergency.do.
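For illustration only, the following minimal Python sketch shows how such a URL is assembled from the default SVP IP address; the model number and system serial number values are placeholders, not real values.

    # Illustrative sketch only: assembling the emergency login URL described above.
    # The model and serial values are placeholders; substitute your system's values.
    default_svp_ip = "192.168.0.15"            # SVP default IP address (see above)
    model_number = "<model number>"            # placeholder
    serial_number = "<system serial number>"   # placeholder

    emergency_url = (
        f"http://{default_svp_ip}/dev/storage/{model_number}{serial_number}/emergency.do"
    )
    print(emergency_url)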
Users are responsible for adopting the appropriate security procedures with the SVP,
including:
■ Applying Windows security patches.
■ Turning on automatic Windows updates or using the manual Windows update method.
■ Installing antivirus software that has been tested and approved by Hitachi.
SVP front panel
The front panel of the physical SVP with Windows 10 Enterprise operating system is
equipped with LEDs, a reset button, and a power button.
SVP front panel
1 | LED (left to right):
2 | Reset button
3 | Power button
SVP rear panel
The only ports used at the rear panel of the physical SVP are the power socket and the
four LAN ports. These ports connect to your IP network, the management console PC, and the user LAN port on each storage system controller.
Note: The SVP running Windows 10 operating system does not provide an
option to disable Spanning Tree Protocol (STP). If your network has BPDU
enabled to prevent loops, connect the user LAN port on controllers 1 and 2 to
an Ethernet switch that is also connected to the LAN1 port on the SVP.
Note: After the Initial Startup Wizard is complete, the SVP can be used in non-bridge mode. In this mode, the cables can be removed from SVP ports LAN3 and LAN4 and attached to switches. For more information, contact customer support.
Ongoing proper maintenance of the storage system maintains the reliability of the
storage system and its constant availability to all hosts connected to it.
For more complex maintenance activities, contact customer support.
Storing the storage system
If the storage system does not receive power for more than six months, the battery can
become discharged and possibly damaged. To avoid this situation, charge the battery for
more than three hours at least once every six months.
Note: Do not store the equipment in an environment with temperatures of
104ºF (40ºC) or higher because battery life will be shortened.
Powering off the storage system
Procedure
1. Press the main switch on the controller chassis for approximately three seconds
until the POWER LED on the front of the chassis changes from solid green to a
blinking status.
2. Release the main switch and the POWER LED returns to solid green after blinking for
approximately three seconds.
The power-off process begins. The process takes approximately 18 minutes or longer depending on the amount of data that needs to be written. The POWER LED is solid green during the powering-off process. The POWER LED changes from green to amber when the process is completed.
3. Verify the POWER LED on the front of the storage system changes from green to
amber.
4. To stop the power supply, remove the power cables from the power supply units on
the controller chassis and drive box.
If the storage system is connected to a PDU, you can stop the power supply by turning off the PDU breaker.
Note: If the storage system does not receive power for more than six
months, the battery can become discharged and possibly damaged. To
avoid this situation, charge the battery for more than three hours at
least once every six months.
The battery lifetime is affected by the battery temperature. The battery temperature changes depending on the intake temperature and height of the storage system, the configuration, the operation of the controller boards and drives, the charge-discharge count, and other factors. The battery lifetime is three to five years.
Treatment
Use the storage system in a place where the ambient temperature is 86°F (30°C) or less
on average.
Periodic parts replacement is required. If you have a maintenance service contract, parts
are replaced periodically according to the terms of the contract.
Battery unit
Note: The battery protects the data in the cache memory in an emergency, such as a sudden power failure. Except in such an emergency, follow the normal power-down procedure; otherwise, the battery might reach its lifespan earlier than expected and become unusable within three years. When replacing the battery, follow the given procedure for disposing of the used battery.
Replacement period
The battery lifetime in a standard environment (intake temperature of 30°C or less) is shown below.
1. The start-up time might be longer in proportion to the number of drive trays connected. With a maximum configuration of 1 controller chassis and 19 drive trays, the start-up time is approximately 8 minutes.
2. Can be mounted in the RKU rack. Special rack rails and decoration panels are required separately, depending on the number of mounted storage systems.

Maximum volume size | 3 TB (when using the LDEVs of other storage systems: 4 TB)
Maximum volumes/host groups and iSCSI targets | 2,048
Maximum volumes/parity groups | 2,048
Note:
1. A storage system configured with RAID 6, RAID 5, or RAID 1 provides redundancy
and enhances data reliability. However, there is still a possibility of losing data
caused by unforeseeable hardware or software failure. Users should always follow
recommended best practices and back up all data.
Cache memory: ECC (1 bit correction, 2 bit
detection)
Drive: Data assurance code
Cache specifications

Item | Specification
Capacity (GB per controller) | 512 GB
Control method | Read LRU / Write-after
Battery backup | Provided
Backup duration* | Unrestricted (saving to nonvolatile memory)

* Data in the cache memory is protected against a sudden power failure: if a power interruption occurs, the backup operation transfers the data in the cache to the cache flash memory.
Insulation performance
Item | Specification
Insulation withstand voltage | AC 1,500 V (100 mA, 1 min)
Insulation resistance | DC 500 V, 10 MΩ or more
Electrical specifications
The electrical input power specifications for the storage systems are described in the following table.

Number of phases, cabling | Single-phase with protective grounding
Steady-state current 100 V/200 V (A)(1)(2) | CBL: 4.0 x 2; FMD tray: 2.6 x 2 / 1.3 x 2
Current rating of breaker/fuse (A) | 16.0 (each electrical)
Heat value (normal) (kJ/h) | CBL: 2,810 or less; FMD tray: 1,520 or less
Steady-state power (VA/W)(3) | CBL: 1,600/1,560 or less; FMD tray: 520/490 or less
Power consumption (VA/W) | CBL: 840/780 or less; FMD tray: 440/420 or less

Notes:
1. The power current of Nx2 described in this table is required for a single power unit.
2. If one power unit fails, the other power unit requires the electric current for the two power units. Therefore, plan the power supply facility so that the current-carrying capacity for one power unit can provide the total capacity for two power units.
3. This table shows the power requirement (100 V or 200 V) for the maximum configuration. The actual required power might exceed the value shown in the table when the tolerance is included.
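As a planning aid for note 2 above, the following minimal Python sketch (an illustration, not part of the product documentation) estimates the current one feed must be able to carry if the other power supply unit fails; the per-unit current is a placeholder to be confirmed against the table for your configuration.

    # Illustrative sketch only: size a feed so one surviving power supply unit
    # can carry the load of both, per note 2 in the electrical specifications.
    def required_feed_current(per_psu_current_a: float, psu_count: int = 2) -> float:
        """Current one feed must carry if the other power supply unit fails."""
        return per_psu_current_a * psu_count

    per_psu = 4.0    # A, placeholder: steady-state current per unit from the table above
    breaker = 16.0   # A, breaker/fuse rating from the table above
    print(f"Plan for {required_feed_current(per_psu):.1f} A per feed "
          f"(breaker rating {breaker:.1f} A)")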
Environmental specifications
The environmental specifications for the storage systems are described in the following table.
Temperature
Caution: The following storage system components are not supported in high-temperature environments. Do not operate the following components at temperatures of 40°C or higher:
■ Hitachi Vantara-provided service processor (SVP) server
(To be measured when installed on leveling bolts.)
Altitude

State | Controller | FMD tray
Operating (m) | 3,050 (environmental temperature: 10°C to 28°C)(1); 950 (environmental temperature: 10°C to 40°C) | 3,050 (environmental temperature: 10°C to 28°C)(1); 950 (environmental temperature: 10°C to 40°C)
Non-operating (m) | -60 to 12,000 | -60 to 12,000
Note:
1. Meets the highest allowable temperature conditions and complies with ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) 2011 Thermal Guidelines Class A3. The maximum allowable ambient temperature ranges from 40°C at an altitude of 950 m (approximately 3,000 feet) to 28°C at an altitude of 3,050 m (approximately 10,000 feet).
The allowable ambient temperature is decreased by 1°C for every 175 m increase in altitude above 950 m.
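The derating rule above can be expressed as a simple calculation; the following Python sketch is an illustration of that rule (40°C up to 950 m, minus 1°C per 175 m above 950 m, reaching 28°C at 3,050 m), not an official sizing tool.

    # Illustrative sketch only: maximum allowable ambient temperature vs. altitude,
    # per the derating note above. Altitudes above 3,050 m are outside the
    # supported operating range.
    def max_ambient_temp_c(altitude_m: float) -> float:
        if altitude_m <= 950:
            return 40.0
        return max(40.0 - (altitude_m - 950) / 175.0, 28.0)

    print(max_ambient_temp_c(2000))  # 40 - (2000 - 950)/175 = 34.0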
Gaseous contaminant
Avoid areas exposed to corrosive gas and salty air.
Operating | Gaseous contamination should be within ANSI/ISA S71.04-2013 G1 classification levels.(1)
Non-operating | Gaseous contamination should be within ANSI/ISA S71.04-2013 G1 classification levels.(1)

Note:
1. It is recommended that data centers maintain a clean operating environment by monitoring and controlling gaseous contamination.
Acoustic Noise
The acoustic level is measured under the following conditions in accordance with ISO 7779, and the value is declared based on ISO 9296. In a normal installation area (data center / general office), the storage system is surrounded by different elements from the following ISO measuring conditions, such as noise sources other than the storage system (other devices) and the walls and ceilings that reflect the sound. Therefore, the values described in the table do not guarantee the acoustic level in the actual installation area.
■ Measurement environment: In a semi-anechoic room whose ambient temperature is 23°C ± 2°C.
■ Device installation position: The controller chassis is at the bottom of the rack and the drive box is at a height of 1.5 m in the rack.
■ Measurement position: 1 m away from the front, rear, left, or right side of the storage system and 1.5 m high (at four points).
■ Measurement value: Energy average value of the four points (front, rear, left, and right).
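As background for the "energy average" measurement value above, the energy average of the four points is a logarithmic (energy-based) mean rather than a simple arithmetic mean; the following Python sketch illustrates the standard calculation with hypothetical readings (an assumption, not values from this document).

    # Illustrative sketch only: energy (logarithmic) average of four sound levels.
    import math

    levels_db = [58.0, 59.5, 60.0, 59.0]  # hypothetical readings at the four points
    energy_avg = 10 * math.log10(sum(10 ** (l / 10) for l in levels_db) / len(levels_db))
    print(round(energy_avg, 1))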
The recommendation is to install the storage system in a computer room in a data center. It is possible to install the storage system in a general office; however, take measures against noise as required. When you replace an existing storage system with another system in a general office, note the following in particular: the cooling fans in the storage system are downsized to increase the density of the storage system. As a result, the fans rotate faster than before to maintain the cooling performance. Therefore, the proportion of the noise made up of high-frequency content is high.
State | Controller | FMD tray
Operating | 60 dB (environmental temperature 32°C or less)(1)(2)(3) | 60 dB (environmental temperature 32°C or less)(1)
Non-operating | 55 dB | 55 dB

Notes:
1. The internal temperature of the system controls the rotating speed of the fan module. Therefore, this standard value might be exceeded if the maximum load continues in a high-temperature environment or if a failure occurs in the system.
2. Sound pressure level (LA) changes from 66 dB to 75 dB, according to the ambient temperature, drive configuration, and operating status. Maximum volume can reach 79 dB during a maintenance procedure for a failed ENC or power supply.
3. Acoustic power level (LwA) measured by the ISO 7779 standard is 7.2 B. This value changes from 7.2 B to 8.1 B, according to the ambient temperature, drive configuration, and operating status.
Some data center inert gas fire suppression systems, when activated, release gas from pressurized cylinders that moves through the pipes at very high velocity. The gas exits through multiple nozzles in the data center. The release through the nozzles can generate high-level acoustic noise. Similarly, pneumatic sirens can also generate high-level acoustic noise. These acoustic noises may cause vibrations in the hard disk drives in the storage systems, resulting in I/O errors, performance degradation, and, to some extent, damage to the hard disk drives. Hard disk drive (HDD) noise level tolerance may vary among different models, designs, capacities, and manufacturers. The acoustic noise level of 90 dB or less in the operating environment table represents the current operating environment guidelines in which Hitachi storage systems are designed and manufactured for reliable operation when placed 2 meters from the source of the noise.
Hitachi does not test storage systems and data drives (including HDDs, SSDs, and FMDs) for compatibility with fire suppression systems and pneumatic sirens. Hitachi also does not provide recommendations or claim compatibility with any fire suppression systems or pneumatic sirens. The customer is responsible for following their local or national regulations.
To prevent unnecessary I/O errors or damage to the hard disk drives in the storage systems, Hitachi recommends the following options:
1. Install noise-reducing baffles to mitigate the noise to the hard disk drives in the storage systems.
2. Consult the fire suppression system manufacturers on noise-reduction nozzles to reduce the acoustic noise and protect the hard disk drives in the storage systems.
3. Locate the storage system as far as possible from noise sources such as emergency sirens.
4. If it can be done safely without risk of personal injury, shut down the storage systems to avoid data loss and damage to the hard disk drives in the storage systems.
Damage to the hard disk drives from fire suppression systems or pneumatic sirens will void the hard disk drive warranty.
3.9 m/s² (0.4 G, 400 Gal) or less: No critical damage to product function (normal operation with part replacement).
9.8 m/s² (1.0 G, 1,000 Gal) or less: Ensure your own safety with fall-prevention measures.
Notes:
1. Vibration that is constantly applied to the storage system due to construction work and so on.
2. Compliant with NEBS (Network Equipment-Building System) Office Vibration standards (GR-63-CORE zone 4).
3. Compliant with IEC (International Electrotechnical Commission) standards IEC 61584-5/Ed1 and IEC 60297-Part 5 (seismic test at the maximum acceleration rate of 9.8 m/s² (1.0 G), equivalent to NEBS (Network Equipment-Building System) Level 3).
Shared memory
Using Hitachi software products, the number of pairs, migration plans, pool capacities, and virtual volumes depends on the amount of shared memory installed on the controller.
The shared memory capacity allocated by each shared memory function and the cache memory capacity required for adding a shared memory function vary depending on the storage system model.
External Fibre Channel, iSCSI, or Ethernet cable connections are completed at the time of
installation.
These connections are required to:
■ Establish connections from the controllers to the host computers.
■ Connect the storage system to the network, enabling storage system management through Hitachi Command Suite or Hitachi Storage Advisor.
■ Allow communication to the storage system from the SVP.
TCP/IP port assignments
When you install your storage system, default ports must be opened to allow for
incoming and outgoing requests.
Review the following ports before you install the storage system to avoid conflicts between the TCP/IP port assignments used by the storage system and those used by other devices and applications.
Cisco Skinny Client Control Protocol (SCCP) uses port
2000 for TCP. If you use Device Manager - Storage
Navigator in a network with SCCP, change the TCP port
that Device Manager - Storage Navigator uses (refer to
the Device Manager - Storage Navigator online help).
5989 | Used by SMI-S.
10995 | TCP Device Manager - Storage Navigator and Hitachi
Note: Hitachi Command Suite has additional port considerations. For more
information, refer to the Hitachi Command Suite Administrator Guide.
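Before assigning or changing ports, you might want to confirm on a management host that a candidate port is not already in use; the following Python sketch is an illustration only (the helper name and the host address are hypothetical), not a product utility.

    # Illustrative sketch only: check whether something is already listening on
    # candidate TCP ports before assigning them (for example, 2000 is also used
    # by Cisco SCCP, as noted above).
    import socket

    def port_in_use(host: str, port: int, timeout: float = 1.0) -> bool:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0

    for port in (2000, 5989, 10995):
        print(port, "in use" if port_in_use("127.0.0.1", port) else "free")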
Controller connections
The controllers provide the ports that are required to connect to an optional SVP,
external drive trays, systems, and other devices.
A controller contains Fibre Channel ports, iSCSI ports, or both. The number and type of
ports available for host connections vary based on the controller model.
■ Fibre Channel SFP adapters are used to connect to your Fibre Channel switch and hosts.
■ iSCSI ports come in optical and copper (RJ-45) interfaces and are used to connect to your Ethernet switch and hosts.
Each controller also has:
■ A SAS port for connection to an external drive tray.
■ An RJ-45 10/100/1000 bps user LAN port for performing management activities.
■ An RJ-45 10/100/1000 bps maintenance LAN port for diagnostics.
Physical service processor connections
The SVP is available as an optional, physical device provided by Hitachi Vantara or as a
virtual guest host running on customer-provided ESX servers and VM/OS licenses and
media. The SVP provides error detection and reporting and supports diagnostic and
maintenance activities involving the storage system.
In a VSP Fx00 configuration, both the storage system and the SVP reside on the same private network segment of your local-area network (LAN). The management console PC used to administer the system must also reside on the same private network segment.
Physical SVP connectivity requires all of the following:
■ A static IP address for the SVP that is on the same network segment as the storage system.
■ One Ethernet connection from each controller to separate LAN ports on the SVP.
■ One Ethernet connection to your network switch.
■ At least one management console PC on the same network segment as the SVP and storage system.
Note: The SVP running Windows 10 operating system does not provide a way
to disable Spanning Tree Protocol (STP). If your network has BPDU enabled to
prevent loops, connect the user LAN port on controllers 1 and 2 to an
Ethernet switch instead of connecting them to SVP LAN 3 and LAN 4 ports.
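As a quick sanity check of the same-segment requirement above, the following Python sketch (illustrative only; the /24 prefix and console address are hypothetical examples) verifies that a planned SVP address and a management console PC address fall on the same network segment.

    # Illustrative sketch only: confirm two management addresses share a segment.
    import ipaddress

    segment = ipaddress.ip_network("192.168.0.0/24")    # hypothetical management segment
    svp_ip = ipaddress.ip_address("192.168.0.15")       # SVP default address (see above)
    console_ip = ipaddress.ip_address("192.168.0.20")   # hypothetical console PC address

    for name, ip in (("SVP", svp_ip), ("console PC", console_ip)):
        print(name, ip, "is on" if ip in segment else "is NOT on", segment)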
Virtual SVP connectivity requires all of the following:
The storage system supports a variety of data and power cables for specific hosting environments.
Required cables
The quantities and lengths of the cables required for storage system installation vary according to the specific storage system and network configuration. Fibre Channel and iSCSI cables are used to connect the controllers to a switch or host. Serial-attached SCSI (SAS) cables are used to connect drive trays to controllers and other drive trays.
The following table describes the cables required to perform storage system connections at the time of installation.
Interface type | Connector type | Cable requirements
Fibre Channel | LC-LC | Use a Fibre Channel cable to connect the Fibre Channel ports on each controller to a host computer (direct connection), or to several host computers via a Fibre Channel switch. See the note and table below.
iSCSI (optical) | LC-LC | Use an optical Ethernet cable to connect the iSCSI 10 Gb SFP ports on each controller to a host computer (direct connection), or to several host computers via an Ethernet switch.
iSCSI (copper) | RJ-45 | Use a shielded Category 5e or 6a Ethernet cable to connect the iSCSI 10 Gb RJ-45 ports on each controller to a host computer (direct connection), or to several host computers via an Ethernet switch.
SAS | SAS optical | Connects the controller to a drive tray or a drive tray to another drive tray. Two SAS cables are provided with each drive tray.
Ethernet | RJ-45 | Four shielded Category 5e or 6a Ethernet cables are required for connecting the SVP to the controllers, management console PC, and network switch.
Note: The maximum distances in a typical Fibre Channel SAN depend on the kind of optical fiber used and its diameter. The following table lists the maximum supported Fibre Channel cable length based on cable size and port speed.
The following standards apply to the management, maintenance, and iSCSI data ports. To configure this system, use switches that comply with the following standards:
■ IEEE 802.1D STP
■ IEEE 802.1w RSTP
■ IEEE 802.3 CSMA/CD
■ IEEE 802.3u Fast Ethernet
■ IEEE 802.3z 1000 BASE-X
■ IEEE 802.1Q Virtual LANs
■ IEEE 802.3ae 10 Gigabit Ethernet
■ RFC 768 UDP
■ RFC 783 TFTP
■ RFC 791 IP
■ RFC 793 TCP
■ RFC 1157 SNMP v1
■ RFC 1231 MIB II
■ RFC 1757 RMON
■ RFC 1901 SNMPv2
iSCSI standards
iSCSI specifications
Item | Specification | Comments
iSCSI target function | Supported | N/A
iSCSI initiator function | Supported | TrueCopy® only
iSCSI ports | 2 per interface board | VSP Fx00 models: maximum 24 per system
Connection methods | Direct and switch connections | A minimum number of cascading switches is recommended.
Host connections | 255 (maximum per iSCSI port) | With the Linux software initiator, the maximum number decreases.
Path failover | HDLM(1) | N/A
Link | 10 Gbps SFP+ | N/A
MAC address | Per port (fixed value) | Factory setting: World Wide unique value. Cannot be changed.
Maximum transfer unit (MTU) | 1,500, 4,500, 9,000 bytes (Ethernet frame) | Jumbo frame: MTU size greater than 1,500.
Link aggregation | Not supported | N/A
Tagged VLAN | Supported | N/A
IPv4 | Supported | N/A
IPv6 | Supported | Note the following precaution: When iSCSI Port IPv6 is set to Enabled and the IPv6 global address is set to automatic, the address is determined by acquiring a prefix from an IPv6 router. If the IPv6 router does not exist in the network, the address cannot be determined, and an iSCSI connection might be delayed. When iSCSI Port IPv6 is set to Enabled, verify that the IPv6 router is connected to the same network, and then set the IPv6 global address automatically.
Subnet mask | Supported | N/A
Gateway address | Supported | N/A
DHCP | N/A | N/A
DNS | N/A | N/A
Ping (ICMP ECHO) transmit, receive | Supported | N/A
IPsec(2) | N/A | N/A
TCP port number | 3260 | Changeable from 1 to 65,535. Observe the following if changing the value: the setting of the corresponding host should also be changed to log in with the new port number; the new port number might conflict with other network communication or be filtered on some network equipment, preventing the storage system from communicating through the new port number.
iSCSI name | Both iqn(3) and eui(4) types | The unique iqn value is automatically set when a target is created. The iSCSI name is configurable.
Error recovery level | 0 (zero) | Error recovery by retrying from the host. Levels 1 and 2 are not supported.
Data digest | Supported | Detects data errors in iSCSI communication. The storage system follows the host's digest setting. If the digest is enabled, performance degrades; the amount of degradation depends on factors such as host performance and transaction pattern.
Maximum iSCSI connections at one time | 255 per iSCSI port | N/A
CHAP | Supported | Authentication: the login request is sent properly from the host to the storage system. CHAP is not supported during a discovery session.
Mutual (2-way) CHAP | Supported (not available if connected to the Linux software initiator) | Authentication: the login request is sent properly from the host to the storage system.
CHAP user registration | Maximum 512 users per iSCSI port | N/A
iSNS | Supported | With iSNS (name service), a host can discover a target without knowing the target's IP address.

Note:
1. JP1, HiCommand Dynamic Link Manager. Path switching is achieved. Not supported on Windows Vista and Windows 7 operating systems.
2. IP Security. Authentication and encryption of IP packets. The storage system does not support IPsec.
3. iqn: iSCSI Qualified Name. The iqn consists of a type identifier, "iqn," the date of domain acquisition, a domain name, and a character string given by the individual who acquired the domain. Example:
iqn.1994-04.jp.co.hitachi:rsd.d7m.t.10020.1b000.tar
4. eui: 64-bit Extended Unique Identifier. The eui consists of a type identifier, "eui," and an ASCII-coded, hexadecimal EUI-64 identifier. Example:
eui.0123456789abcdef
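When bringing up host connections, it can help to confirm that the iSCSI TCP port (3260 by default, per the table above) is reachable from the host before configuring initiators; the following Python sketch is an illustration only, and the target address is a hypothetical example.

    # Illustrative sketch only: test TCP reachability of an iSCSI port from a host.
    import socket

    def iscsi_port_reachable(target_ip: str, port: int = 3260, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((target_ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(iscsi_port_reachable("192.0.2.10"))  # hypothetical target address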
Managing cables
Organize cables to protect the integrity of your connections and allow proper airflow around your storage system.
The following table specifies the maximum length of SAS cables that can be used to connect controllers and drive trays.

System | Maximum length
VSP F700 | 140 meters (459.3 ft) or less
Observing bend radius values
Never bend cables beyond their recommended bend radius. The following table provides
general guidelines for minimum bend radius values, but you should consult the
recommendation of your cable manufacturer.
Cable type | Minimum bend radius
Fibre Channel | 40 mm (1.73 inch)
iSCSI optical | 40 mm (1.73 inch)
Category 5 Ethernet | Four times the outside diameter of the cable
Damage to the cables can affect the performance of your storage system. Observe the following guidelines to protect the cables:
■ Keep cables away from sharp edges or metal corners.
■ When bundling cables, do not pinch or constrict the cables.
■ Do not use zip ties to bundle cables. Instead, use Velcro hook-and-loop ties that do not have hard edges and which you can remove without cutting.
■ Never bundle network cables with power cables. If network and power cables are not bundled separately, electromagnetic interference (EMI) can affect your data stream.
■ If you run cables from overhead supports or from below a raised floor, include vertical distances when calculating necessary cable lengths.
■ If you use overhead cable supports:
● Verify that your supports are anchored adequately to withstand the weight of bundled cables.
● Gravity can stretch and damage cables over time. Therefore, do not allow cables to sag through gaps in your supports.
● Place drop points in your supports that permit cables to reach racks without bending or pulling.
■ Unintentional unplugging or unseating of a power cable can have a serious impact on the operation of an enterprise storage system. Unlike data cables, power connectors do not have built-in retention mechanisms to prevent this from happening.
To prevent accidental unplugging or unseating of power cables, the storage system includes a rubber cable-retention strap near the AC receptacle on each controller. These straps, shown in the following image, loop around the neck of a power cable connector, and the notched tail is slipped over the hook of the restraining bar fixed to the storage system.
Cabling full-width modules
When cabling full-width modules, route the cables horizontally, so that they do not
interfere when replacing a module.
Bundled cables can obstruct the movement of conditioned air around your storage system.
■ Secure cables away from fans.
■ Keep cables away from the intake holes at the front of the storage system.
■ Use flooring seals or grommets to keep conditioned air from escaping through cable holes.
Preparing for future maintenance
Design your cable infrastructure to accommodate future work on the storage system.
Give thought to future tasks that will be performed on the storage system, such as locating specific pathways or connections, isolating a fault, or adding or removing components.
■ Purchase colored cables or apply colored tags.
■ Label both ends of every cable to denote the port to which it connects.
AC power cables
Utility AC power standards for connector types and voltage levels vary by country. Hitachi
provides a variety of power cables that facilitate using storage systems around the world.
Hitachi power cables meet the safety standards for the country for which they are
intended.
Power cable assemblies
For information about racks and power distribution units (PDUs), refer to the Hitachi
Universal V2 Rack Reference Guide.
Hitachi power cables consist of three parts:
■ Plug: Male connector for insertion into the AC outlet providing power. The physical design and layout of the plug's contacts meet a specific standard.
■ Cord: Main section of insulated wires of varying length, whose thickness is determined by its current rating.
■ Receptacle: Female connector to which the equipment attaches. The physical design and layout of the receptacle's contacts meet a specific standard. Common standards are the IEC C13 receptacle for loads up to 10 amperes (A) and the IEC C19 receptacle for loads up to 15 A.
Hitachi storage systems are intended for rack installation and ship with power cords.
Installation and service requirements may require additional cords and cables to be
ordered. The type of power cable required by a given installation is determined primarily
by the:
■ Type of AC line feed provided by the facility.
■ Type of AC source (wall outlet or modular and monitored PDU) to be used.
■ Serviceability of components to be connected.
Storage systems require a country-specic power cable for direct connection to a facility
AC feed.
Storage systems are designed to allow replacement of hot-pluggable components
without removing the chassis from the rack. As a result, power cables can be short
because cable movement is of minimal consideration.
Three-phase power considerations for racks
Increasing power requirements for racks are making the use of three-phase power at the
rack level compelling.
■ With single-phase power, at any given time the voltage across the hot and neutral conductors can be anywhere between its peak (maximum) and zero. Electrical conductors must be large to meet high amperage requirements.
■ Three-phase power uses three cycles that are 120 degrees out of phase, which never allows the voltage to drop to zero. The more consistent voltage derived from the three hot conductors results in smoother current flow and allows small-gauge conductors to be used to distribute the same amount of AC power. As a result, the load balancing and increased power-handling capabilities of three-phase distribution can result in more efficient and less costly installations that require fewer AC cables and PDUs.
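As a rough illustration of why three-phase distribution permits smaller conductors (a simplified calculation with hypothetical values, assuming unity power factor; these are not figures from this document):

    # Illustrative sketch only: per-conductor current for the same load on a
    # single-phase feed versus a three-phase feed at the same nominal voltage.
    import math

    power_w = 8000.0    # hypothetical load
    voltage_v = 200.0   # hypothetical line voltage
    i_single = power_w / voltage_v
    i_three = power_w / (math.sqrt(3) * voltage_v)
    print(f"single-phase: {i_single:.1f} A per conductor")
    print(f"three-phase:  {i_three:.1f} A per conductor (about 58% of single-phase)")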
Cable management
Rack installations should be planned for operational efficiency, ease of maintenance, and safety. Hitachi offers the Backend Configuration Utility (BECK), a graphical cable-management application that can relieve the typical cable congestion created when populating a rack with storage systems and their accessories.
Appendix E: Power distribution units for Hitachi
Universal V2 Rack
The Hitachi Universal V2 Rack is equipped with specific power distribution units (PDU) for the Americas, APAC, and EMEA regions. The PDUs can provide electrical power to the storage system in a single-phase or three-phase configuration.

Caution:
■ Before installing third-party devices into the rack, check the electrical current draw of each device. Verify the electrical specifications and allowable current load on each PDU before plugging the device into the PDU.
■ Balance the electrical current load between available PDUs.

Americas single-phase PDU 1P30A-8C13-3C19UL.P
The following figure and table describe the specifications of the PDU.
Figure 1 Americas PDU for the Hitachi Universal V2 Rack (Single-phase PDU 1P30A-8C13-3C19UL.P)

Part Number | Region | Phase | Description
1P30A-8C13-3C19UL.P
All VSP Fx00 storage systems can be installed in non-Hitachi racks.
The following describes the requirements and guidelines for installing the storage system into a non-Hitachi rack.
Non-Hitachi rack support
The storage system supports non-Hitachi racks that meet Hitachi specifications.
Observe the following mounting guidelines for non-Hitachi racks:
■ The VSP Fx00 models support any 4-post, EIA-310-D compliant rack that has adequate airflow and weight capacity.
■ PDUs must be mounted properly to avoid any issues while servicing the storage system. The PDU receptacles must face toward the back (not toward each other). The area behind the storage system and between the vertical 19-inch mounting posts must be free of PDUs and cable loops.
Hitachi Universal V2 Rack rail kits
Use rail kits to mount the Hitachi Virtual Storage Platform family storage system in a
Hitachi Universal V2 Rack.
The following tables list the rail kit information for the specified storage systems.
Table 18 Rail kits for VSP Fx00 models

Rail kit | Hitachi Universal V2 Rack | Third-party rack
Controller | UNI
DBS and DBF drive trays | CGR
SVP server | Use the rail kit supplied with the SVP server.
To avoid injuries and damage to the equipment, the storage system has warning labels on the exterior of the hardware components. Always identify, read, and obey the advisory warning labels on the exterior before handling the equipment.
The following symbols appear on the warning labels.

Symbol mark | Explanation
Do not disassemble the equipment.
Be careful when handling heavy equipment.
Use caution when handling electrostatic-sensitive equipment
and microcircuitry.
Avoid placing any non-essential objects or equipment onto
the storage system.
Use caution when handling the equipment.
Use caution when handling equipment with movable parts.
Use caution when handling equipment with hot surfaces.
Use caution when handling equipment prone to tipping over.