This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
The following terms are trademarks of other companies:
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redbooks® publication provides a detailed look at the features, benefits, and
capabilities of the IBM System Storage® N series hardware offerings.
The IBM System Storage N series systems can help you tackle the challenge of effective data
management by using virtualization technology and a unified storage architecture. The
N series delivers low- to high-end enterprise storage and data management capabilities with
midrange affordability. Built-in serviceability and manageability features help support your
efforts to increase reliability, simplify and unify storage infrastructure and maintenance, and
deliver exceptional economy.
The IBM System Storage N series systems provide a range of reliable, scalable storage
solutions to meet various storage requirements. These capabilities are achieved by using
network access protocols, such as Network File System (NFS), Common Internet File
System (CIFS), HTTP, and iSCSI, and storage area network technologies, such as Fibre
Channel. By using built-in Redundant Array of Independent Disks (RAID) technologies, all
data is protected with options to enhance protection through mirroring, replication,
Snapshots, and backup. These storage systems also have simple management interfaces
that make installation, administration, and troubleshooting straightforward.
In addition, this book addresses high-availability solutions, including clustering and
MetroCluster that support highest business continuity requirements. MetroCluster is a unique
solution that combines array-based clustering with synchronous mirroring to deliver
continuous availability.
This Redbooks publication is a companion book to IBM System Storage N series Software
Guide, SG24-7129, which is available at this website:
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.
Roland Tretau is an Information Systems professional with over 15 years of experience in the
IT industry. He holds Engineering and Business Masters degrees, and is the author of many
storage-related IBM Redbooks publications. Roland has a solid background in project
management, consulting, operating systems, storage solutions, enterprise search
technologies, and data management.
Jeff Lin is a Client Technical Specialist for the IBM Sales & Distribution Group in San Jose,
California, USA. He holds degrees in engineering and biochemistry, and has six years of
experience in IT consulting and administration. Jeff is an expert in storage solution design,
implementation, and virtualization. He has a wide range of practical experience, including
Solaris on SPARC, IBM AIX®, IBM System x®, and VMWare ESX.
Dirk Peitzmann is a Leading Technical Sales Professional with IBM Systems Sales in
Munich, Germany. Dirk is an experienced professional and provides technical pre-sales and
post-sales solutions for IBM server and storage systems. His areas of expertise include
designing virtualization infrastructures and disk solutions and carrying out performance
analysis and the sizing of SAN and NAS solutions. He holds an engineering diploma in
Computer Sciences from the University of Applied Science in Isny, Germany, and is an Open
Group Master Certified IT Specialist.
Steven Pemberton is a Senior Storage Architect with IBM GTS in Melbourne, Australia. He
has broad experience as an IT solution architect, pre-sales specialist, consultant, instructor,
and enterprise IT customer. He is a member of the IBM Technical Experts Council for
Australia and New Zealand (TEC A/NZ), has multiple industry certifications, and is co-author
of seven previous IBM Redbooks.
Tom Provost is a Field Technical Sales Specialist for the IBM Systems and Technology
Group in Belgium. Tom has many years of experience as an IT professional providing design,
implementation, migration, and troubleshooting support for IBM System x, IBM System
Storage, storage software, and virtualization. Tom also is the co-author of several other
Redbooks and IBM Redpapers™. He joined IBM in 2010.
Marco Schwarz is an IT specialist and team leader for Techline as part of the Techline Global
Center of Excellence who lives in Germany. He has many years of experience in designing
IBM System Storage solutions. His expertise spans all recent technologies in the IBM
storage portfolio.
Thanks to Bertrand Dufrasne of the International Technical Support Organization, San Jose
Center for his contributions to this project.
Thanks to the following authors of the previous editions of this book:
Alex Osuna
Sandro De Santis
Carsten Larsen
Tarik Maluf
Patrick P. Schill
Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at
this website:
http://www.ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at this website:
http://www.ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Summary of changes
This section describes the technical changes that were made in this edition of the book and in
previous editions. This edition might also include minor corrections and editorial changes that
are not identified.
Summary of Changes
for SG24-7840-03
for IBM System Storage N series Hardware Guide
as created or updated on May 28, 2014.
May 2014, Fourth Edition
New information
The following new information is included:
The N series hardware portfolio was updated to reflect the October 2013 status quo.
Information about changes in Data ONTAP 8.1.x has been included.
High availability and MetroCluster information was updated to include SAS shelf
technology.
Changed information
The following changed information is included:
Hardware information for products that are no longer available was removed.
Information that is valid for Data ONTAP 7.x only was removed or modified to highlight
differences and improvements in the current Data ONTAP 8.1.x release.
This part introduces the N series hardware, including the storage controller models, disk
expansion shelves, and cabling recommendations.
It also describes some of the hardware functions, including active/active controller clusters,
MetroCluster, NVRAM and cache memory, and RAID-DP protection.
Finally, this part provides a high-level guide to designing an N series solution.
This part includes the following chapters:
Chapter 1, “Introduction to IBM System Storage N series” on page 3
Chapter 2, “Entry-level systems” on page 13
Chapter 3, “Mid-range systems” on page 23
Chapter 4, “High-end systems” on page 33
Chapter 5, “Expansion units” on page 45
Chapter 6, “Cabling expansions” on page 59
Chapter 7, “Highly Available controller pairs” on page 71
Chapter 8, “MetroCluster” on page 103
Chapter 9, “MetroCluster expansion cabling” on page 125
Chapter 10, “Data protection with RAID Double Parity” on page 147
Chapter 11, “Core technologies” on page 165
Chapter 12, “Flash Cache” on page 175
Chapter 13, “Disk sanitization” on page 181
Chapter 14, “Designing an N series solution” on page 187
The IBM System Storage N series offers more choices to organizations that face the
challenges of enterprise data management. The IBM System Storage N series is designed to
deliver high-end value with midrange affordability. Built-in enterprise serviceability and
manageability features support customer efforts to increase reliability, simplify and unify
storage infrastructure and maintenance, and deliver exceptional economy.
This chapter includes the following sections:
Overview
IBM System Storage N series hardware
Software licensing structure
Data ONTAP 8 supported systems
This section introduces the IBM System Storage N series and describes its hardware
features. The IBM System Storage N series provides a range of reliable, scalable storage
solutions for various storage requirements. These capabilities are achieved by using network
access protocols, such as Network File System (NFS), Common Internet File System (CIFS),
HTTP, FTP, and iSCSI. They are also achieved by using storage area network technologies,
such as Fibre Channel and Fibre Channel over Ethernet (FCoE). The N series features
built-in Redundant Array of Independent Disks (RAID) technology. Further advanced data
protection options include snapshots, backup, mirroring, and replication technologies that can
be customized to meet client’s business requirements. These storage systems also have
simple management interfaces that make installation, administration, and troubleshooting
straightforward.
The N series unified storage solution supports file and block protocols, as shown in
Figure 1-1. Converged networking also is supported for all protocols.
Figure 1-1 Unified storage
This type of flexible storage solution offers the following benefits:
Heterogeneous unified storage solution: Unified access for multiprotocol storage
environments.
Versatile: A single integrated architecture that is designed to support concurrent block I/O
and file servicing over Ethernet and Fibre Channel SAN infrastructures.
Comprehensive software suite that is designed to provide robust system management,
copy services, and virtualization technologies.
Ease of change that allows fast, dynamic adjustment of storage deployment. If more storage
is required, you can expand it quickly and non-disruptively. If existing storage is deployed
incorrectly, you can reallocate available storage from one application to another quickly
and easily.
Maintains availability and productivity during upgrades. If outages are necessary,
downtime is kept to a minimum.
Easily and quickly implement nondisruptive upgrades.
Create effortless backup and recovery solutions that operate in a common manner across
all data access methods.
Tune the storage environment to a specific application while maintaining its availability and
flexibility.
Change the deployment of storage resources easily, quickly, and non-disruptively. Online
storage resource redeployment is possible.
Achieve robust data protection with support for online backup and recovery.
Include added value features, such as deduplication to optimize space management.
All N series storage systems use a single operating system (Data ONTAP) across the entire
platform. They offer advanced function software features that provide one of the industry’s
most flexible storage platforms. This functionality includes comprehensive system
management, storage management, onboard copy services, virtualization technologies,
disaster recovery, and backup solutions.
1.2 IBM System Storage N series hardware
The following sections address the N series models that are available at the time of this
writing. Figure 1-2 shows all of the N series models that were released by IBM to date that
belong to the N3000, N6000, and N7000 series line.
Figure 1-2 N series hardware portfolio
The hardware includes the following features and benefits:
Data compression:
– Transparent in-line data compression can store more data in less space, which
reduces the amount of storage that you must purchase and maintain.
– Reduces the time and bandwidth that is required to replicate data during volume
SnapMirror transfers.
Deduplication:
– Runs block-level data deduplication on NearStore data volumes.
– Scans and deduplicates volume data automatically, which results in fast, efficient
space savings with minimal effect on operations.
Data ONTAP:
– Provides full-featured and multiprotocol data management for block and file serving
environments through N series storage operating system.
– Simplifies data management through single architecture and user interface, and
reduces costs for SAN and NAS deployment.
Disk sanitization:
– Obliterates data by overwriting disks with specified byte patterns or random data.
– Prevents recovery of current data by any known recovery methods.
FlexCache:
– Creates a flexible caching layer within your storage infrastructure that automatically
adapts to changing usage patterns to eliminate bottlenecks.
– Improves application response times for large compute farms, speeds data access for
remote users, or creates a tiered storage infrastructure that circumvents tedious data
management tasks.
FlexClone:
– Provides near-instant creation of LUN and volume clones without requiring more
storage capacity.
– Accelerates test and development, and storage capacity savings.
FlexShare:
– Prioritizes storage resource allocation to highest-value workloads on a heavily loaded
system.
– Ensures that best performance is provided to designated high-priority applications.
FlexVol:
– Creates flexibly sized LUNs and volumes across a large pool of disks and one or more
RAID groups.
– Enables applications and users to get more space dynamically and non-disruptively
without IT staff intervention. Enables more productive use of available storage and
helps improve performance.
Gateway:
– Supports attachment to IBM Enterprise Storage Server® (ESS) series, IBM XIV®
Storage System, and IBM System Storage DS8000® and DS5000 series. Also
supports a broad range of IBM, EMC, Hitachi, Fujitsu, and HP storage subsystems.
MetroCluster:
– Offers an integrated high-availability and disaster-recovery solution for campus and
metro-area deployments.
– Ensures high data availability when a site failure occurs.
– Supports Fibre Channel attached storage with SAN Fibre Channel switch, SAS
attached storage with Fibre Channel -SAS bridge, and Gateway storage with SAN
Fibre Channel switch.
MultiStore:
– Partitions a storage system into multiple virtual storage appliances.
– Enables secure consolidation of multiple domains and controllers.
NearStore (near-line):
– Increases the maximum number of concurrent data streams (per storage controller).
– Enhances backup, data protection, and disaster preparedness by increasing the
number of concurrent data streams between two N series systems.
OnCommand:
– Enables the consolidation and simplification of shared IT storage management by
providing common management services, integration, security, and role-based access
controls, which delivers greater flexibility and efficiency.
– Manages multiple N series systems from a single administrative console.
– Speeds deployment and consolidated management of multiple N series systems.
Flash Cache (Performance Acceleration Module):
– Improves throughput and reduces latency for file services and other random
read-intensive workloads.
– Offers power savings by using less power than adding more disk drives to optimize
performance.
RAID-DP:
– Offers double-parity RAID protection (the N series RAID 6 implementation); see the
conceptual sketch after this list.
– Protects against data loss because of double disk failures and media bit errors that
occur during drive rebuild processes.
SecureAdmin:
– Authenticates the administrative user and the N series system, which creates a secure,
direct communication link to the N series system.
– Protects administrative logins, passwords, and session commands from cleartext
snooping by replacing RSH and Telnet with the encrypted SSH protocol.
Single Mailbox Recovery for Exchange (SMBR):
– Enables the recovery of a single mailbox from a Microsoft Exchange Information Store.
– Extracts a single mailbox or email directly in minutes with SMBR, compared to hours
with traditional methods. This process eliminates the need for staff-intensive, complex,
and time-consuming Exchange server and mailbox recovery.
SnapDrive:
– Provides host-based data management of N series storage from Microsoft Windows,
UNIX, and Linux servers.
– Simplifies host-consistent Snapshot copy creation and automates error-free restores.
SnapLock:
– Write-protects structured application data files within a volume to provide Write Once
Read Many (WORM) disk storage.
– Provides storage, which enables compliance with government records retention
regulations.
SnapManager:
– Provides host-based data management of N series storage for databases and
business applications.
– Simplifies application-consistent Snapshot copies, automates error-free data restores,
and enables application-aware disaster recovery.
SnapMirror:
– Enables automatic, incremental data replication between synchronous or
asynchronous systems.
– Provides flexible, efficient site-to-site mirroring for disaster recovery and data
distribution.
SnapRestore:
– Restores single files, directories, or entire LUNs and volumes rapidly, from any
Snapshot backup.
– Enables near-instant recovery of files, databases, and complete volumes.
Snapshot:
– Makes incremental, data-in-place, point-in-time copies of a LUN or volume with
minimal performance impact.
Storage Encryption:
– Provides support for Full Disk Encryption (FDE) drives in N series disk shelf storage and
integration with License Key Managers, including IBM Tivoli® Key Lifecycle Manager.
SyncMirror:
– Maintains two online copies of data with RAID-DP protection on each side of the mirror.
– Protects against all types of hardware outages, including triple disk failure.
Gateway:
– Reduces data management complexity in heterogeneous storage environments for data
protection and retention.
Software bundles:
– Provides flexibility to use breakthrough capabilities while maximizing value with a
considerable discount.
– Simplifies ordering of combinations of software features: Windows Bundle, Complete
Bundle, and Virtual Bundle.
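To make the RAID-DP entry in the list above more concrete, the following Python fragment is a minimal sketch of how two independent parity values (row parity plus a second, diagonal-style parity) allow data to be rebuilt after disk failures. It is only an illustration of the general double-parity technique under simplified assumptions, not the Data ONTAP RAID-DP implementation; stripe geometry and the two-disk reconstruction algorithm are simplified away.

# Conceptual double-parity sketch (not the Data ONTAP RAID-DP code).
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together (the basis of RAID parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe across four data disks (4-byte blocks for readability).
data = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4, b"\x44" * 4]

row_parity = xor_blocks(data)                 # first parity disk
# RAID-DP adds a second parity computed over diagonal block sets that also
# include the row parity; here we only show that a second, independently
# computed parity value exists alongside the first.
diag_parity = xor_blocks(data[1:] + [row_parity])

# Single-disk failure: rebuild disk 2 from the survivors and row parity.
rebuilt = xor_blocks(data[:2] + data[3:] + [row_parity])
assert rebuilt == data[2]
print("rebuilt disk 2:", rebuilt.hex())

With the second, diagonal parity disk, any combination of two failed disks in the RAID group can be reconstructed, which is the protection that the RAID-DP bullet describes.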
For more information about N series software features, see IBM System Storage N series
Software Guide, SG24-7129, which is available at this website:
All N series systems support the storage efficiency features, as shown in Figure 1-3: Snapshot
copies (point-in-time copies that write only changed blocks, with no performance penalty),
FlexClone virtual copies (near-zero-space, instant virtual copies; only subsequent changes in
the cloned data set are stored), FlexVol thin provisioning (flexible volumes that appear to be a
certain size but draw on a much smaller pool), RAID-DP protection (RAID 6; protects against
double disk failure with no performance penalty), deduplication (removes data redundancies
in primary and secondary storage), thin replication with SnapVault and SnapMirror (data
copies for disaster recovery and backup that use a minimal amount of space), and data
compression (reduces the footprint of primary and secondary storage). Depending on the
feature, the figure cites space savings of 33% to 95%.
Figure 1-3 Storage efficiency features
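Of the features in Figure 1-3, block-level deduplication is the easiest to illustrate with a few lines of code. The following Python fragment is only a conceptual sketch of the general technique (hash fixed-size blocks, store each unique block once, keep references); the Data ONTAP implementation, with its fingerprint database and scheduled volume scans, is considerably more involved.

# Conceptual block-level deduplication sketch (not the Data ONTAP engine).
import hashlib

BLOCK_SIZE = 4096

def dedup(data: bytes):
    store = {}   # fingerprint -> unique block payload, stored once
    refs = []    # logical view: one fingerprint reference per block
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        store.setdefault(fingerprint, block)
        refs.append(fingerprint)
    return store, refs

volume = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE   # three identical blocks
store, refs = dedup(volume)
print(f"{len(refs)} logical blocks stored as {len(store)} unique blocks")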
1.3 Software licensing structure
This section provides an overview of the software licensing structure.
1.3.1 Mid-range and high-end
The software structure for mid-range and high-end systems is assembled out of the following
major options:
Data ONTAP Essentials (including one protocol of choice)
Protocols (CIFS, NFS, Fibre Channel, iSCSI)
SnapRestore
SnapMirror
SnapVault
FlexClone
SnapLock
SnapManager Suite
Figure 1-4 provides an overview of the software structure (Software Structure 2.0 licensing
for the N62x0 and N7950T platforms) that was introduced with the availability of
Data ONTAP 8.1. For Data ONTAP 8.0 and earlier, every feature requires its own license key
to be installed separately. The figure lists the following packages and license key rules:
Data ONTAP Essentials: Includes one protocol of choice, Snapshots, HTTP, deduplication,
compression, NearStore, DSM/MPIO, SyncMirror, MultiStore, FlexCache, MetroCluster, high
availability, and OnCommand. Only the SyncMirror Local, Cluster Failover, and Cluster
Failover Remote license keys are required for Data ONTAP 8.1; the DSM/MPIO license key
must be installed on the server.
Protocols: iSCSI, FCP, CIFS, and NFS are sold separately. Each protocol license key must be
installed separately.
SnapRestore: The SnapRestore license key must be installed separately.
SnapMirror: The SnapMirror license key unlocks all product features.
FlexClone: The FlexClone license key must be installed separately.
SnapVault: Includes SnapVault Primary and SnapVault Secondary. The SnapVault Secondary
license key unlocks both the Primary and Secondary products.
SnapLock: SnapLock Compliance and SnapLock Enterprise are sold separately. Each product
is unlocked by its own master license key.
SnapManager Suite: Includes SnapManager for Exchange, SQL Server, SharePoint, Oracle,
SAP, VMware Virtual Infrastructure, and Hyper-V, and SnapDrive for Windows and UNIX. The
SnapManager Exchange license key unlocks the entire suite of features.
Complete Bundle: Includes all protocols, Single Mailbox Recovery, SnapLock, SnapRestore,
SnapMirror, FlexClone, SnapVault, and the SnapManager Suite. Refer to the individual
product license key details.
Figure 1-4 Software structure for mid-range and enterprise systems
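The unlock rules in Figure 1-4 are essentially a mapping from installed license keys to enabled features. The following Python fragment models a few of those rules as plain data; it is only a toy illustration for this discussion (the key and feature names are shorthand chosen here), not an IBM tool and not the actual licensing mechanism.

# Toy model of the Data ONTAP 8.1 bundle rules summarized in Figure 1-4.
# Key and feature names are illustrative shorthand, not real license codes.
UNLOCKS = {
    "cifs": {"cifs"},
    "nfs": {"nfs"},
    "iscsi": {"iscsi"},
    "fcp": {"fcp"},
    "snaprestore": {"snaprestore"},
    "snapmirror": {"snapmirror"},
    "flexclone": {"flexclone"},
    # The SnapVault Secondary key unlocks both Primary and Secondary.
    "snapvault_secondary": {"snapvault_primary", "snapvault_secondary"},
    # The SnapManager for Exchange key unlocks the entire SnapManager Suite.
    "snapmanager_exchange": {
        "snapmanager_exchange", "snapmanager_sql", "snapmanager_sharepoint",
        "snapmanager_oracle", "snapmanager_sap", "snapmanager_vi",
        "snapmanager_hyperv", "snapdrive_windows", "snapdrive_unix",
    },
}

def enabled_features(installed_keys):
    """Return every feature enabled by the installed license keys."""
    features = set()
    for key in installed_keys:
        features |= UNLOCKS.get(key, set())
    return features

print(sorted(enabled_features(["nfs", "snapvault_secondary"])))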
To increase the business flow efficiencies, the seven-mode licensing infrastructure was
modified to handle features that are included in a more bundled or packaged manner.
You do not need to add license keys on your system for most features that are distributed at
no additional fee. For some platforms, features in a software bundle require only one license
key. Other features are enabled when you add certain other software bundle keys.
1.3.2 Entry-level
The entry-level software structure is similar to the mid-range and high-end structures that
were described in 1.3.1, “Mid-range and high-end” on page 9. The following changes apply:
All protocols (CIFS, NFS, Fibre Channel, iSCSI) are included with entry-level systems
Gateway feature is not available
MetroCluster feature is not available
1.4 Data ONTAP 8 supported systems
Figure 1-5 provides an overview of systems that support Data ONTAP 8. The listed systems
reflect the N series product portfolio as of June 2011, and some older N series systems that
are suitable to run Data ONTAP 8.

Model     8.0   8.0.1   8.0.2   8.0.3   8.1
N3220     -     -       -       -       x
N3240     -     -       -       -       x
N3400     x     x       x       x       x
N5300     x     x       x       x       x
N5600     x     x       x       x       x
N6040     x     x       x       x       x
N6060     x     x       x       x       x
N6070     x     x       x       x       x
N6210     -     x       x       x       x
N6240     -     x       x       x       x
N6270     -     x       x       x       x
N7600     x     x       x       x       x
N7700     x     x       x       x       x
N7800     x     x       x       x       x
N7900     x     x       x       x       x
N7950T    -     x       x       x       x

Figure 1-5 Supported Data ONTAP 8.x systems
Chapter 2. Entry-level systems
This chapter describes the IBM System Storage N series 3000 systems, which address the
entry-level segment.
This chapter includes the following sections:
Overview
N32x0 common features
N3150 model details
N3220 model details
N3240 model details
N3000 technical specifications
Figure 2-1 shows the N3000 modular disk storage system, which is designed to provide
primary and auxiliary storage for midsize enterprises. N3000 systems offer integrated data
access, intelligent management software, and data protection capabilities in a cost-effective
package. N3000 series innovations include internal controller support for Serial-Attached
SCSI (SAS) or SATA drives, expandable I/O connectivity, and onboard remote management.
Figure 2-1 N3000 modular disk storage system
The following N3000 series are available:
IBM System Storage N3150:
– Model A15: Single-node
– Model A25: Dual-node, Active/Active HA Pair
IBM System Storage N3220:
– Model A12: Single-node
– Model A22: Dual-node, Active/Active HA Pair
The IBM System Storage N3240:
– Model A14: Single-node
– Model A24: Dual-node, Active/Active HA Pair
Table 2-1 provides a comparison of the N3000 series: the N3150 (FAS2220), the N3220
(FAS2240-2), and the N3240 (FAS2240-4). All specifications are for dual-controller,
active-active configurations. The N3150 supports iSCSI only for block access (plus CIFS and
NFS for file access), whereas the N3220 and N3240 support CIFS, NFS, iSCSI, and FCP. On
the N3220 and N3240, 10 GbE and 8 Gb FC connectivity is based on the optional dual-port
mezzanine card in the single expansion slot per controller.
2.2 N32x0 common features
Table 2-2 provides ordering information for N32x0 systems.
Table 2-2 N3150 and N32x0 configurations

Model            Form factor   HDD                  PSU   Processor Control Module
N3150-A15, A25   2U chassis    12 SAS 3.5"          2     One or two controllers, each with no mezzanine card
N3220-A12, A22   2U chassis    24 SFF SAS 2.5"      2     One or two controllers, each with a dual FC or dual 10 GbE mezzanine card
N3240-A14, A24   4U chassis    24 SATA 3.5"         4     One or two controllers, each with a dual FC or dual 10 GbE mezzanine card

Table 2-3 provides ordering information for N32x0 systems with mezzanine cards.

Table 2-3 N32x0 controller configuration

Feature code   Configuration
2030           Controller with dual-port FC mezzanine card (includes SFP+)
2031           Controller with dual-port 10 GbE mezzanine card (no SFP+)

Table 2-4 provides information about the maximum number of supported shelves by
expansion type.

Table 2-4 Number of shelves that are supported (total of 114 disks)

Expansion shelf   Number of supported shelves
EXN3000           Up to five shelves (each with up to 24 x 3.5" SAS or SATA disk drives)
EXN3500           Up to five shelves (each with up to 24 x 2.5" SAS disk drives, or SSD)
EXN4000           Up to six shelves (each with up to 14 x 3.5" SATA disk drives)
2.3 N3150 model details
This section describes the N series 3150 models.
Note: Be aware of the following points regarding N3150 models:
N3150 models do not support the Fibre Channel protocol.
Compared to N32xx systems, the N3150 models have newer firmware and no
mezzanine card option is available.
2.3.1 N3150 model 2857-A15
N3150 Model A15 is a single-node storage controller. It is designed to provide CIFS, NFS,
Internet Small Computer System Interface (iSCSI), and HTTP support. Model A15 is a 2U
storage controller that must be mounted in a standard 19-inch rack. Model A15 can be
upgraded to a Model A25. However, this is a disruptive upgrade.
2.3.2 N3150 model 2857-A25
N3150 Model A25 is designed to provide identical functions as the single-node Model A15.
However, it has a second Processor Control Module and the Clustered Failover (CFO)
licensed function. Model A25 consists of two Processor Control Modules that are designed to
provide failover and failback function, which helps improve overall availability. Model A25 is a
2U rack-mountable storage controller.
2.3.3 N3150 hardware
The N3150 hardware has the following characteristics:
Specifications (single node, 2x for dual node):
– 2U, standard 19-inch rack mount enclosure (single or dual node)
– One 1.73 GHz Intel dual-core processor
– 6 GB random access ECC memory (NVRAM 768 MB)
– Four integrated Gigabit Ethernet RJ45 ports
– Two SAS ports
– One serial console port and one integrated RLM port
Redundant hot-swappable, auto-ranging power supplies and cooling fans
Maximum Capacity is 240 TB:
– Internal Storage: 6- and 12-disk orderable configurations
– External Storage: Maximum of two EXN3000 SAS/SATA or EXN3500 SAS storage
expansion units (48 disks).
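As a quick plausibility check, the 240 TB maximum corresponds to the 60 drives above (12 internal plus 48 external) populated with 4 TB SATA drives, the largest capacity listed later in Table 5-1. The assumption that the figure is raw capacity with 4 TB drives is made here for illustration only.

# Back-of-the-envelope check of the N3150 240 TB maximum capacity.
# Assumption: raw capacity with 4 TB SATA drives (largest in Table 5-1).
internal_drives = 12        # maximum internal drive configuration
external_drives = 2 * 24    # two EXN3000/EXN3500 expansion units of 24 drives
drive_tb = 4

print((internal_drives + external_drives) * drive_tb)  # -> 240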
Figure 2-2 shows the front view of the N3150.
Figure 2-2 N3150 front view
Figure 2-3 shows the N3150 Single-Controller in chassis (Model A15).
Figure 2-3 N3150 Single-Controller in chassis
Figure 2-4 shows the N3150 Dual-Controller in chassis (Model A25).
Figure 2-4 N3150 Dual-Controller in chassis
Note: The N3150 supports IP protocols only because it lacks any FC ports.
2.4 N3220 model details
This section describes the N series 3220 models.
2.4.1 N3220 model 2857-A12
N3220 Model A12 is a single-node storage controller. It is designed to provide HTTP, iSCSI,
NFS, CIFS, and FCP support through optional features. Model A12 is a 2U storage controller
that must be mounted in a standard 19-inch rack. Model A12 can be upgraded to a Model
A22. However, this is a disruptive upgrade.
2.4.2 N3220 model 2857-A22
N3220 Model A22 is designed to provide identical functions as the single-node Model A12.
However, it has a second Processor Control Module and the CFO licensed function. Model
A22 consists of two Processor Control Modules that are designed to provide failover and
failback function, which helps improve overall availability. Model A22 is a 2U rack-mountable
storage controller.
2.4.3 N3220 hardware
The N3220 hardware has the following characteristics:
Based on the EXN3500 expansion shelf
24 2.5” SFF SAS disk drives (minimum initial order of 12 disk drives)
Specifications (single node, 2x for dual node):
– 2U, standard 19-inch rack mount enclosure (single or dual node)
– One 1.73 GHz Intel dual-core processor
– 6 GB random access ECC memory (NVRAM 768 MB)
– Four integrated Gigabit Ethernet RJ45 ports
– Two SAS ports
– One serial console port and one integrated RLM port
– One optional expansion I/O adapter slot on mezzanine card:
• 8 Gb FC card provides two FC ports
• 10 GbE card provides two 10 GbE ports
– Redundant hot-swappable, auto-ranging power supplies and cooling fans
Figure 2-5 shows the front view of the N3220.
Figure 2-5 N3220 front view
Figure 2-6 shows the rear view of the N3220.
Figure 2-6 N3220 rear view
Figure 2-7 shows the N3220 Dual-Controller in chassis.
Figure 2-7 N3220 Dual-Controller in chassis (including optional mezzanine card)
2.5 N3240 model details
This section describes the N series 3240 models.
2.5.1 N3240 model 2857-A14
N3240 Model A14 is designed to provide a single-node storage controller with HTTP, iSCSI,
NFS, CIFS, and FCP support through optional features. The N3240 Model A14 is a 4U
storage controller that must be mounted in a standard 19-inch rack. Model A14 can be
upgraded to a Model A24. However, this is a disruptive upgrade.
2.5.2 N3240 model 2857-A24
N3240 Model A24 is designed to provide identical functions as the single-node Model A14.
However, it includes a second Processor Control Module and CFO licensed function. Model
A24 consists of two Processor Control Modules that are designed to provide failover and
failback function, which helps improve overall availability. Model A24 is a 4U rack-mountable
storage controller.
2.5.3 N3240 hardware
The N3240 hardware has the following characteristics:
Based on the EXN3000 expansion shelf
24 SATA disk drives (minimum initial order of 12 disk drives)
Specifications (single node, 2x for dual node):
– 4U, standard 19-inch rack mount enclosure (single or dual node)
– One 1.73 GHz Intel dual-core processor
– 6 GB random access ECC memory (NVRAM 768 MB)
– Four integrated Gigabit Ethernet RJ45 ports
– Two SAS ports
– One serial console port and one integrated RLM port
– One optional expansion I/O adapter slot on mezzanine card:
• 8 Gb FC card provides two FC ports
• 10 GbE card provides two 10 GbE ports
– Redundant hot-swappable, auto-ranging power supplies and cooling fans
Figure 2-8 shows the front view of the N3240.
Figure 2-8 N3240 front view
Figure 2-9 shows the N3240 Single-Controller in chassis.
Figure 2-9 N3240 Single-Controller in chassis
Figure 2-10 shows the front and rear view of the N3240.
Figure 2-10 N3240 Dual-Controller in chassis
Figure 2-11 shows the controller with the 8 Gb FC Mezzanine card option.
Figure 2-11 Controller with 8 Gb FC Mezzanine card option
Figure 2-12 shows the controller with the 10 GbE Mezzanine card option.
Figure 2-12 Controller with 10 GbE Mezzanine card option
2.6 N3000 technical specifications
Table 2-5 provides an overview of the N32x0 specifications.
Figure 3-1 shows the N62x0 modular disk storage system, which includes the following
advantages:
Increase NAS storage flexibility and expansion capabilities by consolidating block and file
data sets onto a single multiprotocol storage platform.
Provide performance when your applications need it most with high bandwidth, 64-bit
architecture, and the latest I/O technologies.
Maximize storage efficiency and growth and preserve investments in staff expertise and
capital equipment with data-in-place upgrades to more powerful IBM System Storage N
series.
Improve your business efficiency by using the N6000 series capabilities, which are also
available with a Gateway feature. These capabilities reduce data management complexity
in heterogeneous storage environments for data protection and retention.
Figure 3-1 Mid-range systems
IBM System Storage N62x0 series systems help you meet your network-attached storage
(NAS) needs. They provide high levels of application availability for everything from critical
business operations to technical applications. You can also address NAS and storage area
network (SAN) as primary and auxiliary storage requirements. In addition, you get
outstanding value. These flexible systems offer excellent performance and impressive
expandability at a low total cost of ownership.
3.1.1 Common features
The N62x0 modular disk storage system includes the following common features:
Simultaneous multiprotocol support for FCoE, FCP, iSCSI, CIFS, NFS, HTTP, and FTP
File-level and block-level service in a single system
Support for Fibre Channel, SAS, and SATA disk drives
Data ONTAP software
Broad range of built-in features
Multiple supported backup methods that include disk-based and host-based backup and
tape backup to direct, SAN, and GbE attached tape devices
3.1.2 Hardware summary
The N62x0 modular disk storage system contains the following hardware:
Up to 2880 TB raw storage capacity
12/24 GB to 20/40 GB random access memory
1.6/3.2 GB to 2/4 GB nonvolatile memory
Integrated Fibre Channel, Ethernet, and SAS ports
Quad-port 4 Gbps adapters (optional)
Up to four Performance Acceleration Modules (Flash Cache)
Diagnostic LED/LCD
Dual redundant hot-plug integrated cooling fans and autoranging power supplies
19 inch, rack-mountable unit
N6220
The IBM System Storage N6220 includes the following storage controllers:
Model C15: A single-node base unit
Model C25: An active/active dual-node base unit, which is composed of two C15 models
Model E15: A single-node base unit, with an I/O expansion module
Model E25: An active/active dual-node base unit, which is composed of two E15 models
The Exx models contain an I/O expansion module that provides more PCIe slots. The I/O
expansion is not available on Cxx models.
N6250
The IBM System Storage N6250 includes the following storage controllers:
Model E16: A single-node base unit, with one controller and one I/O expansion module
both in a single chassis
Model E26: An active/active dual-node base unit, which is composed of two E16 models
The Exx model contains an I/O expansion module that provides more PCIe slots. The I/O
expansion is not available on Cxx models
3.1.3 Functions and features common to all models
This section describes the functions and features that are common to all eight models.
Fibre Channel, SAS, and SATA attachment
All models include Fibre Channel, SAS, and SATA attachment options for disk expansion
units. These options are designed to allow deployment in multiple environments, including
data retention, NearStore, disk-to-disk backup scenarios, and high-performance,
mission-critical I/O intensive operations.
The IBM System Storage N series supports the following expansion units:
EXN1000 SATA storage expansion unit (no longer available)
EXN2000 and EXN4000 FC storage expansion units
EXN3000 SAS/SATA expansion unit
EXN3500 SAS expansion unit
Because none of the N62x0 models include storage in the base chassis, at least one storage
expansion unit must be attached. All N62x0 models must be mounted in a standard 19-inch
rack.
Dynamic removal and insertion of the controller
The N6000 controllers are hot pluggable. You do not have to turn off PSUs to remove a
controller in a dual-controller configuration.
PSUs are independent components. One PSU can run an entire system indefinitely. There is
no “2-minute rule” if you remove one PSU. PSUs have internal fans for self-cooling only.
RLM design and internal Ethernet switch on the controller
The Data ONTAP management interface (which is known as e0M) provides a robust and
cost-effective way to segregate management subnets from data subnets without incurring a
port penalty. On the N6000 series, the traditional RLM port on the rear of the chassis (now
identified by a wrench symbol) connects first to an internal Ethernet switch. This switch
provides connectivity to the RLM and e0M interfaces. Because the RLM and e0M each have
unique TCP/IP addresses, the switch can discretely route traffic to either interface. You do not
need to use a data port to connect to an external Ethernet switch. Set up of VLANs and VIFs
is not required and not supported because e0M allows customers to have dedicated
management networks without VLANs.
The e0M interface can be thought of as another way to remotely access and manage the
storage controller. It is similar to the serial console, RLM, and standard network interfaces.
Use the e0M interface for network-based storage controller administration, monitoring
activities, and ASUP reporting. The RLM is used when you require its higher level of support
features. Host-side application data should connect to the appliance on a separate subnet
from the management interfaces.
RLM assisted cluster failover
To decrease the time that is required for cluster failover (CFO) to occur when there is an
event, the RLM can communicate with the partner node instance of Data ONTAP. This
capability was available in other N series models before the N6000 series. However, the
internal Ethernet switch makes the configuration much easier and facilitates quicker cluster
failover, with some failovers occurring within 15 seconds.
3.2 N62x0 model details
This section gives an overview of the N62x0 systems.
3.2.1 N6220 and N6250 hardware overview
The N62x0 models support several physical configurations (single or dual node) and with or
without the I/O expansion module (IOXM).
The IBM N6220/N6250 configuration flexibility is shown in Figure 3-2 on page 27.
Figure 3-2 IBM N6210/N6240 configuration flexibility
All of the N62x0 controller modules provide the same type and number of onboard I/O ports
and PCI slots. The Exx models include the IOXM, which provides more PCI slots.
Figure 3-3 shows the IBM N62x0 Controller I/O module.
Figure 3-3 IBM N62x0 Controller I/O
The different N62x0 models also support different chassis configurations. For example, a
single chassis N6220 might contain a single node (C15 model), dual nodes (C25), or a single
node plus IOXM (E15). A second chassis is required for the dual-node with IOXM models
(E25 and E26).
IBM N62x0 I/O configuration flexibility is shown in Figure 3-4.
Figure 3-4 IBM N62x0 I/O configuration flexibility
IBM N62x0 I/O Expansion Module (IOXM) is shown in Figure 3-5 and features the following
characteristics:
Components are not hot swappable:
– Controller panics if it is removed
– If inserted into running IBM N62x0, IOXM is not recognized until the controller is
rebooted
4 full-length PCIe v1.0 (Gen 1) x8 slots
Figure 3-5 IBM N62x0 I/O Expansion Module (IOXM)
Figure 3-6 shows the IBM N62x0 system board layout.
Figure 3-6 IBM N62x0 system board layout
Figure 3-7 shows the IBM N62x0 USB Flash Module, which has the following features:
It is the boot device for Data ONTAP and the environment variables
It replaces CompactFlash
It has the same resiliency levels as CompactFlash
2 GB density is used
It is a replaceable FRU
Figure 3-7 IBM N62x0 USB Flash Module
3.2.2 IBM N62x0 MetroCluster and gateway models
This section describes the MetroCluster feature.
Supported MetroCluster N62x0 configuration
The following MetroCluster two-chassis configurations are supported:
Each chassis single-enclosure stand-alone:
• IBM N6220 controller with blank. The N6220-C25 with MetroCluster ships the
second chassis, but does not include the VI card.
• IBM N6250 controller with IOXM
Two chassis with single-enclosure HA (twin): Supported on IBM N6250 model
Fabric MetroCluster requires EXN4000 disk shelves or SAS shelves with SAS FibreBridge
(EXN3000 and EXN3500)
Gateway configuration is supported on both models.
FCVI card and port clarifications
In many stretch MetroCluster configurations, the cluster interconnect on the NVRAM cards in
each controller is used to provide the path for cluster interconnect traffic. The N60xx and
N62xx series offer a new architecture that incorporates a dual-controller design with the
cluster interconnect on the backplane.
The N62x0 ports c0a and c0b are the ports that you must connect to establish controller
communication. Use these ports to enable NVRAM mirroring after you set up a dual-chassis
HA configuration (that is, N62x0 with IOXM). These ports cannot run standard Ethernet or the
Cluster-Mode cluster network.
“Stretching” the HA-pair (also called the SFO pair) by using the c0x ports is qualified with
optical SFPs up to a distance of 30 m. Beyond that distance, you need the FC-VI adapter.
When the FC-VI card is present, the c0x ports are disabled.
Although they have different part numbers, the same model of FC card is used for
MetroCluster or SnapMirror over FC. The PCI slot that the card is installed to causes the card
to identify as either model.
Tip: Always use an FCVI card in any N62xx MetroCluster, regardless of whether it is a
stretched or fabric-attached MetroCluster.
3.3 N62x0 technical specifications
Table 3-1 shows the N62x0 specifications.
Table 3-1 N62x0 specifications
N6220          N6220 (with optional IOXM)          N6250 (always with IOXM)
Figure 4-1 shows the N7x50T modular disk storage systems, which provide the following
advantages:
High data availability and system-level redundancy
Support of concurrent block I/O and file serving over Ethernet and Fibre Channel SAN
infrastructures
High throughput and fast response times
Support of enterprise customers who require network-attached storage (NAS), with Fibre
Channel or iSCSI connectivity
Attachment of Fibre Channel, serial-attached SCSI (SAS), and Serial Advanced
Technology Attachment (SATA) disk expansion units
Figure 4-1 N7x50T modular disk storage systems
The IBM System Storage N7950T (2867 Model E22) system is an active/active dual-node
base unit. It consists of two cable-coupled chassis with one controller and one I/O expansion
module per node. It is designed to provide fast data access, simultaneous multiprotocol
support, expandability, upgradability, and low maintenance requirements.
4.1.1 Common features
The N7x50T modular disk storage systems includes the following common features:
High data availability and system-level redundancy that is designed to address the needs
of business-critical and mission-critical applications.
Single, integrated architecture that is designed to support concurrent block I/O and file
serving over Ethernet and Fibre Channel SAN infrastructures.
High throughput and fast response times for database, email, and technical applications.
Enterprise customer support for unified access requirements for NAS through Fibre
Channel or iSCSI.
Fibre Channel, SAS, and SATA attachment options for disk expansion units that are
designed to allow deployment in multiple environments. These environments include data
retention, NearStore, disk-to-disk backup scenarios, and high-performance,
mission-critical I/O intensive operations.
Can be configured either with native disk shelves, as a gateway for a back-end SAN array,
or both.
4.1.2 Hardware summary
The N7x50T modular disk storage systems contains the following hardware:
Up to 5760 TB raw storage capacity
96 GB - 192 GB of RAM (random access memory)
Integrated Fibre Channel, Ethernet, and SAS ports
Support for 10 Gbps Ethernet port speed
Support for 8 Gbps Fibre Channel speed
N7550T
The IBM System Storage N7550T includes the Model C20 storage controller. This controller
uses a dual-node active/active configuration, which is composed of two controller units, in
either one or two chassis (as required for Metrocluster configuration).
N7950T
The IBM System Storage N7950T includes the Model E22 storage controller. This controller
uses a dual-node active/active configuration, which is composed of two controller units, each
with an IOXM, in two chassis.
4.2 N7x50T hardware
This section provides an overview of the N7550T and N7950T hardware.
4.2.1 Chassis configuration
Figure 4-2 shows the IBM N series N7x50T chassis configuration.
Figure 4-2 IBM N series N7950T configuration
Figure 4-3 shows the IBM N series N7550T base components.
Figure 4-3 IBM N series N7550T base components
Figure 4-4 shows the IBM N series N7950T configuration.
Figure 4-4 IBM N series N7950T configuration
4.2.2 Controller module components
Although they differ in processor count and memory configuration, the processor modules for
the N7550T and N7950T provide the same onboard I/O connections. The N7950T also
includes an I/O expansion module (IOXM) to provide more I/O capacity.
Figure 4-5 on page 37 shows the IBM N series N7x50T controller I/O.
Figure 4-5 N7x50 controller
Figure 4-6 shows an internal view of the IBM N series N7x50T Controller module. The
N7550T and N7950T differ in number of processors and installed memory.
Figure 4-6 N7x50 internal view
4.2.3 I/O expansion module components
The N7950T model always includes the I/O expansion module in the second bay in each of its
two chassis. This provides another 20 PCIe expansion slots (2 x 10 slots) on the N7950T
relative to the N7550T. The IOXM is not supported on the N7550T model.
Figure 4-7 shows the IBM N series N7950T I/O Expansion Module (IOXM).
Figure 4-7 IBM N series N7950T I/O Expansion Module (IOXM)
The N7950T IOXM features the following characteristics:
All PCIe v2.0 (Gen 2) slots: Vertical slots have different form factor
Not hot-swappable:
– Controller panics if removed
– Hot pluggable, but not recognized until reboot
Figure 4-8 shows the IBM N series N7950T I/O Expansion Module (IOXM).
Figure 4-8 IBM N series N7950T I/O Expansion Module (IOXM)
4.3 IBM N7x50T configuration rules
This section describes the configuration rules for N7x50 systems.
4.3.1 IBM N series N7x50T slot configuration
This section describes the configuration rules for the vertical I/O slots and horizontal PCIe
slots.
Vertical I/O slots
The vertical I/O slots include the following characteristics:
Vertical slots use custom form-factor cards:
– Look similar to standard PCIe
– Cannot put standard PCIe cards into the vertical I/O slots
Vertical slot rules:
– Slot 1 must have a special Fibre Channel or SAS system board: Feature Code 1079
(Fibre Channel) and Feature Code 1080 (SAS)
– Slot 2 must have NVRAM8
– Slots 11 and 12 (N7950T with IOXM only):
• Can configure with a special FC I/O or SAS I/O card: Feature Code 1079 (FC) and
Feature Code 1080 (SAS)
• Can mix FC and SAS system boards in slots 11 and 12
– FC card ports can be set to target or initiator
Horizontal PCIe slots
The horizontal PCIe slots include the following characteristics:
Support standard PCIe adapters and cards:
– 10 GbE NIC (new quad port 1 GbE PCIe adapter for N7x50T FC1028)
– 10 GbE unified target adapter
– 8 Gb Fibre Channel
– Flash Cache
Storage HBAs: Special-purpose FC I/O and SAS I/O cards, and NVRAM8, are not used in
PCIe slots
4.3.2 N7x50T hot-pluggable FRUs
The following items are hot-pluggable:
Fans: Two-minute shutdown rule if you remove a fan FRU
Controllers: Do not turn off PSUs to remove a controller in dual- controller systems
PSUs:
– One PSU can run the entire system
– There is no 2-minute shutdown rule if one PSU removed
IOXMs are not hot pluggable (N7950T only):
– Removing the IOXM forces a system reboot
– System does not recognize a hot-plugged IOXM
4.3.3 N7x50T cooling architecture
The N7x50T cooling architecture includes the following features:
Six fan FRUs per chassis, which is paired three each for top and bottom bays (each fan
FRU has two fans)
One failed fan is allowed per chassis bay:
– Controller can run indefinitely with single failed fan
– Two failed fans in controller bay cause a shutdown
– Two-minute shutdown rule applies if a fan FRU is removed: Rule that is enforced on a
per-controller basis
4.3.4 System-level diagnostic procedures
The following system-level tools are present in N7x50T systems:
SLDIAG replaces SYSDIAG: Both run system-level diagnostic procedures
SLDIAG has the following major differences from SYSDIAG:
– SLDIAG runs from maintenance mode: SYSDIAG booted with a separate binary
– SLDIAG has a CLI interface: SYSDIAG used menu tables
SLDIAG used on all new IBM N series platforms going forward
4.3.5 MetroCluster, Gateway, and FlexCache
MetroCluster and Gateway configurations include the following characteristics:
Supported MetroCluster two-chassis configuration
Single-enclosure stand-alone chassis: IBM N series N7950T-E22 controller with IOXM
Fabric MetroCluster requires EXN4000 shelves
The N7x50T series can also function as a Gateway
FlexCache uses N7x50T chassis:
– Controller module (and in IOXM for N7950T)
– Supports dual-enclosure HA configuration
4.3.6 N7x50T guidelines
The following tips are useful for the N7x50T model:
Get hands-on experience with Data ONTAP 8.1
Do not attempt to put vertical slot I/O system boards in horizontal expansion slots
Do not attempt to put expansion cards in vertical I/O slots
Onboard 10 GbE ports require a feature code for SFP+ modules: These modules are not compatible with the SFP+ modules for the two-port 10 GbE NIC (FC 1078)
The onboard 8 Gb SFP is not interchangeable with other SFPs: The 8 Gb SFP+ autoranges to 8 Gbps, 4 Gbps, and 2 Gbps; it does not support 1 Gbps
Pay attention when a 6 Gb SAS system board is installed in I/O slot 1 (see Figure 4-9)
NVRAM8 and SAS use QSFP connection
Figure 4-9 shows the use of the SAS Card in I/O Slot 1.
Figure 4-9 Using SAS Card in I/O Slot 1
NVRAM8 and SAS I/O system boards use the QSFP connector:
– Mixing the cables does not cause physical damage, but the cables do not work
– Label your HA and SAS cables when you remove them
4.3.7 N7x50T SFP+ modules
This section provides detailed information about SFP+ modules.
Figure 4-10 shows the 8 Gb SFP+ modules.
Figure 4-10 8 Gb SFP+ modules
Figure 4-11 shows the 10 GbE SFP+ modules.
Figure 4-11 10 GbE SFP+ modules
4.4 N7x50T technical specifications
Table 4-1 provides the technical specifications of the N7x50T.
Chapter 5. Expansion units

This section gives an overview of the N series expansion unit technology. Figure 5-1 shows the shelf topology comparison.
Figure 5-1 Shelf topology comparison
5.2 Expansion unit EXN3000
The IBM System Storage EXN3000 SAS/SATA expansion unit is available for attachment to N
series systems with PCIe adapter slots.
The EXN3000 SAS/SATA expansion unit is designed to provide SAS or SATA disk expansion
capability for the IBM System Storage N series systems. The EXN3000 is a 4U disk storage
expansion unit. It can be mounted in any industry standard 19-inch rack. The EXN3000
includes the following features:
Dual redundant hot-pluggable integrated power supplies and cooling fans
Dual redundant disk expansion unit switched controllers
Diagnostic and status LEDs
5.2.1 Overview
The IBM System Storage EXN3000 SAS/SATA expansion unit is available for attachment to
all N series systems except N3300, N3700, N5200, and N5500. The EXN3000 provides
low-cost, high-capacity, and serially attached SCSI (SAS) Serial Advanced Technology
Attachment (SATA) disk storage for the IBM N series system storage.
The EXN3000 is a 4U disk storage expansion unit. It can be mounted in any
industry-standard 19-inch rack. The EXN3000 includes the following features:
Dual redundant hot-pluggable integrated power supplies and cooling fans
Dual redundant disk expansion unit switched controllers
24 hard disk drive slots
The EXN3000 SAS/SATA expansion unit is shown in Figure 5-2.
Figure 5-2 EXN3000 front view
The EXN3000 SAS/SATA expansion unit is shipped with no disk drives unless disk drives are included in the order. Disk drives that are ordered with the EXN3000 are installed by IBM in the plant before shipping.
Requirement: For an initial order of an N series system, at least one of the storage
expansion units must be ordered with at least five disk drive features.
Figure 5-3 shows the rear view and the fans.
Figure 5-3 EXN3000 rear view
5.2.2 Supported EXN3000 drives
Table 5-1 lists the drives that are supported by EXN3000 at the time of this writing.
Table 5-1 EXN3000 supported drives
  EXN3000   RPM     Capacity
  SAS       15 K    600 GB, 600 GB encrypted
  SATA      7.2 K   1 TB, 2 TB, 3 TB, 3 TB encrypted, 4 TB
  SSD       N/A     200 GB

5.2.3 Environmental and technical specifications
Table 5-2 shows the environmental and technical specifications.

Table 5-2 EXN3000 environmental specifications

  EXN3000     Specification
  Disk        24
  Rack size   4U
  Weight      Empty: 21.1 lb. (9.6 kg)
              Without drives: 53.7 lb. (24.4 kg)
              With drives: 110 lb. (49.9 kg)
5.3 Expansion unit EXN3200

The IBM System Storage EXN3200 Model 306 SATA Expansion Unit is a 4U high-density SATA enclosure for attachment to PCIe-based N series systems with SAS ports. The EXN3200 ships with 48 disk drives per unit.
The EXN3200 is a disk storage expansion unit for mounting in any industry standard 19-inch
rack. The EXN3200 provides low-cost, high-capacity SAS disk storage for the IBM N series
system storage family.
The EXN3200 must be ordered with a full complement of (48) disks.
5.3.1 Overview
The IBM System Storage EXN3200 SATA expansion unit is available for attachment to all N series systems, except N3300, N3700, N5200, and N5500. The EXN3200 provides low-cost, high-capacity SATA disk storage for the IBM N series system storage.
The EXN3200 is a 4U disk storage expansion unit. It can be mounted in any
industry-standard 19-inch rack. The EXN3200 includes the following features:
Four redundant, hot-pluggable, integrated power supplies and cooling fans
Dual redundant disk expansion unit switched controllers
48 hard disk drives (in 24 bays)
Diagnostic and status LEDs
The EXN3200 must be ordered with a full complement of disks. Disk drives that are ordered
with the EXN3200 are shipped separately from the EXN3200 shelf and must be installed at
the customer's location.
Disk drive bays are numbered horizontally starting from 0 at the upper left position to 23 at the
lower right position. The EXN3200 SAS/SATA expansion unit is shown in Figure 5-4.
Figure 5-4 EXN3200 front view
Each of the 24 disk bays contains two SATA HDDs on the same carrier, as shown in
Figure 5-5.
Figure 5-5 EXN3200 disk carrier
Because removing a disk carrier to replace a failed disk removes two disks, it is recommended that you keep four spare disks instead of two when you use the EXN3200 expansion unit.
Figure 5-6 on page 50 shows the EXN3200 rear view, with the following components
numbered:
1. IOM fault LED
2. ACP ports
3. Two I/O modules (IOM6)
4. SAS ports
5. SAS port link LEDs
6. IOM A and power supplies one and two
7. IOM B and power supplies three and four
8. Four power supplies (each with integrated fans)
9. Power supply LEDs
Figure 5-6 EXN3200 rear view
5.3.2 Supported EXN3200 drives
Table 5-3 lists the drives that are supported by the EXN3200 at the time of this writing.
Table 5-3 EXN3200 supported drives

  EXN3200   RPM     Capacity
  SATA      7.2 K   3 TB
  SATA      7.2 K   4 TB
5.3.3 Environmental and technical specifications
Table 5-4 shows the environmental and technical specifications.
Table 5-4 EXN3200 environmental and technical specifications

  Input voltage                      100 to 240 V (100 V actual)         200 to 240 V (200 V actual)
                                     Worst      Typical     System,     Worst      Typical     System,
                                     case,      per PSU     four        case,      per PSU     four
  Size                               2 PSU (a)  pair (b)    PSU (c)     2 PSU (a)  pair (b)    PSU (c)
  Total input current measured, A
    3 TB                             8.71       3.29        6.57        4.59       1.73        3.46
    4 TB                             8.54       3.40        6.79        4.25       1.69        3.38
  Total input power measured, W
    3 TB                             870        329         657         919        346         693
    4 TB                             853        339         677         837        329         657
  Total thermal dissipation, BTU/hr
    3 TB                             2970       1122        2243        3137       1181        2362
    4 TB                             2909       1155        2309        2854       1120        2240
  Weight                             With midplane, four PSUs, two IOMs, four HDD carriers: 81 lbs (36.7 kg)
                                     Fully configured: 145 lbs (65.8 kg)

a. Worst case indicates a system that is running with two PSUs, high fan speed, and power that is distributed over two power cords.
b. Per PSU pair indicates typical power needs, per PSU pair, for a system operating under normal conditions.
c. System indicates typical power needs for four PSUs in a system operating under normal conditions and power that is distributed over four power cords.
5.4 Expansion unit EXN3500
The EXN3500 is a small form factor (SFF) 2U disk storage expansion unit for mounting in any
industry standard 19-inch rack. The EXN3500 provides low-cost, high-capacity SAS disk
storage with slots for 24 hard disk drives for the IBM N series system storage family.
The EXN3500 SAS expansion unit is shipped with no disk drives unless they are included in
the order. In that case, the disk drives are installed in the plant.
The EXN3500 SAS expansion unit is a 2U SFF disk storage expansion unit that must be
mounted in an industry-standard 19-inch rack. It can be attached to all N series systems
except N3300, N3700, N5200, and N5500. It includes the following features:
Third-generation SAS product
Increased density
24 x 2.5 inch 10 K RPM drives in 2U rack at same capacity points (450 GB and 600 GB)
offers double the GB/rack U of the EXN3000
Increased IOPs/rack U
Greater bandwidth
6 Gb SAS 2.0 offers ~24 Gb (6 Gb x 4) combined bandwidth per wide port
Improved power consumption: Power consumption per GB is reduced by approximately 30-50%
Only SAS drives are supported in the EXN3500: SATA is not supported
The following features were not changed:
Same underlying architecture and FW base as EXN3000
All existing EXN3000 features and functionality
Still uses the 3 Gb PCIe Quad-Port SAS HBA (already 6 Gb capable) or onboard SAS
ports
5.4.1 Overview
The EXN3500 includes the following hardware:
Dual, redundant, hot-pluggable, integrated power supplies and cooling fans
Dual, redundant, disk expansion unit switched controllers
24 SFF hard disk drive slots
Diagnostic and status LEDs
Figure 5-7 shows the EXN3500 front view.
Figure 5-7 EXN3500 front view
The EXN3500 SAS expansion unit can be shipped with no disk drives installed. Disk drives
ordered with the EXN3500 are installed by IBM in the plant before shipping. Disk drives can
be of 450 GB and 600 GB physical capacity, and must be ordered as features of the
EXN3500.
Requirement: For an initial order of an N series system, at least one of the storage
expansion units must be ordered with at least five disk drive features.
Figure 5-8 shows the rear view of the EXN3500, which highlights the connectivity and
resiliency.
Figure 5-8 EXN3500 rear view
Figure 5-9 shows the IOM differences.
Figure 5-9 IOM differences
5.4.2 Intermix support
EXN3000 and EXN3500 can be combined in the following configurations:
Intermix of EXN3000 and EXN3500 shelves: EXN3000 and EXN3500 shelves cannot be intermixed on the same stack. The exception is the N3150 and N32x0 platforms, where mixing EXN3500 shelves and EXN3000 shelves with IOM3 or IOM6 modules is supported. This exception does not apply to other platforms.
EXN3000 supports IOM3 and IOM6 modules.
Attention: Even though it is supported to intermix IOM3 and IOM6 modules, it is not recommended that you do so. The maximum loop speed is limited to the IOM3 speed.
EXN3500 supports only IOM6 modules: The use of IOM3 modules in an EXN3500 is not supported.
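To confirm which shelves and IOM modules a controller sees before you intermix module types, you can query the configuration from the console. The following lines are a minimal sketch (the node name is illustrative); the shelf and module details appear in the storage sections of the output, and the exact format depends on the Data ONTAP release:

N6270A> sysconfig -v
N6270A> sasadmin expander_map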
5.4.3 Supported EXN3500 drives
Table 5-5 on page 54 lists the drives that are supported by EXN3500 at the time of this
writing.
Table 5-5 EXN3500 supported drives

  EXN3500   RPM    Capacity
  SAS       10 K   450 GB, 600 GB, 600 GB encrypted, 900 GB, 900 GB encrypted, 1.2 TB
  SSD       N/A    200 GB, 800 GB

5.4.4 Environmental and technical specifications
Table 5-6 shows the environmental and technical specifications.
Table 5-6 EXN3500 environmental specifications

  EXN3500            Specification
  Disk               24
  Rack size          2U
  Weight             Empty: 17.4 lbs. (7.9 kg)
                     Without drives: 34.6 lbs. (15.7 kg)
                     With drives: 49 lbs. (22.2 kg)
  Power              SAS: 450 GB 3.05 A, 600 GB 3.59 A
  Thermal (BTU/hr)   SAS: 450 GB 1024, 600 GB 1202
5.5 Self-Encrypting Drive
This section describes the FDE 600 GB 2.5-inch HDD.
5.5.1 SED at a glance
At the time of this writing, only the following FDE 600 GB drive is supported:
Self-Encrypting Drive (SED):
– 600 GB capacity
– 2.5-inch form factor, 10 K RPM, 6 Gb SAS
– Encryption is enabled through disk drive firmware (the same drive that is currently shipping, with different firmware)
Available in the EXN3500 and EXN3000 expansion shelves (fully populated with 24 drives only) and in the N3220 controller (internal drives)
Requires Data ONTAP 8.1 at a minimum
Allowed only with HA (dual-node) systems
Provides storage encryption capability (key manager interface)
5.5.2 SED overview
Storage Encryption is the implementation of full disk encryption (FDE) by using
self-encrypting drives from third-party vendors, such as Seagate and Hitachi. FDE refers to
encryption of all blocks in a disk drive, whether by software or hardware. NSE is encryption
that operates seamlessly with Data ONTAP features, such as storage efficiency. This is
possible because the encryption occurs below Data ONTAP as the data is being written to the
physical disk.
5.5.3 Threats mitigated by self-encryption
Self-encryption mitigates several threats. The primary threat model it addresses, per the
Trusted Computing Group (TCG) specification, is the prevention of unauthorized access to
encrypted data at rest on powered-off disk drives. That is, it prevents someone from removing
a shelf or drive and mounting them on an unauthorized system. This security minimizes risk
of unauthorized access to data if drives are stolen from a facility or compromised during
physical movement of the storage array between facilities.
Self-encryption also prevents unauthorized data access when drives are returned as spares
or after drive failure. This security includes cryptographic shredding of data for non-returnable
disk (NRD), disk repurposing scenarios, and simplified disposal of the drive through disk
destroy commands. These processes render a disk unusable. This greatly simplifies the
disposal of drives and eliminates the need for costly, time-consuming physical drive
shredding.
All data on the drives is automatically encrypted. If you do not want to track where the most
sensitive data is or risk it being outside an encrypted volume, use NSE to ensure that all data
is encrypted.
5.5.4 Effect of self-encryption on Data ONTAP features
Self-encryption operates below all Data ONTAP features, such as SnapDrive, SnapMirror,
and even compression and deduplication. Interoperability with these features should be
transparent. SnapVault and SnapMirror are supported, but for data at the destination to be
encrypted, the target must be another self-encrypted system.
SnapLock cannot be combined with self-encryption, so simultaneous operation of SnapLock and self-encryption is not possible. This limitation is being evaluated for
a future release of Data ONTAP. MetroCluster is not supported because of the lack of support
for the SAS interface. Support for MetroCluster is targeted for a future release of Data ONTAP.
5.5.5 Mixing drive types
In Data ONTAP 8.1, all drives that are installed within the storage platform must be
self-encrypting drives. The mixing of encrypted with unencrypted drives or shelves across a
stand-alone platform or high availability (HA) pair is not supported.
5.5.6 Key management
This section describes key management.
Overview of Key Management Interoperability Protocol
Key Management Interoperability Protocol (KMIP) is an encryption key interoperability
standard that was created by a consortium of security and storage vendors (OASIS).
Version 1.0 was ratified in September 2010, and participating vendors later released
compatible products. KMIP effectively replaces IEEE P1619.3, which was an earlier proposed standard.
With KMIP-compatible tools, organizations can manage their encryption keys from a single point of control. This approach improves security, reduces complexity, and helps achieve regulatory compliance more quickly and easily. It is a major improvement over the current approach of using many different encryption key management tools for many different business purposes and IT assets.
Communication with the KMIP server
Self-encryption uses Secure Sockets Layer (SSL) certificates to establish secure
communications with the KMIP server. These certificates must be in Base64-encoded X.509
PEM format, and can be self-signed or signed by a certificate authority (CA).
Supported key managers
Self-encryption with Data ONTAP 8.1 supports the IBM Tivoli Key Lifecycle Management Version 2 server for key management, with others to follow. Other KMIP-compliant key managers are evaluated as they are released into the market.
Self-encryption supports up to four key managers simultaneously for high availability of the
authentication key. Figure 5-10 shows authentication key use in self-encryption. It
demonstrates how the Authentication Key (AK) is used to wrap the Data Encryption Key
(DEK) and is backed up to an external key management server.
Figure 5-10 Authentication key use
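The following console sketch outlines how an external key manager might be registered and verified from the Data ONTAP 8.1 7-Mode console after Storage Encryption is set up. The node name and IP address are placeholders, and the key_manager subcommands and arguments should be verified against the documentation for your release:

n3220a> key_manager add -key_server 10.1.1.50
n3220a> key_manager query
n3220a> disk encrypt show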
Security Key Lifecycle Manager
Obtaining that central point of control requires more than an open standard. It also requires a
dedicated management solution that is designed to capitalize on it. IBM Security Key
Lifecycle Manager Version 2 gives you the power to manage keys centrally at every stage of
their lifecycles.
Security Key Lifecycle Manager performs key serving transparently for encrypting devices
and key management, making it simple to use. It is also easy to install and configure.
Because it demands no changes to applications and servers, it is a seamless fit for virtually
any IT infrastructure.
For these reasons, IBM led the IT industry in developing and promoting an exciting new
security standard: Key Management Interoperability Protocol (KMIP). KMIP is an open
standard that is designed to support the full lifecycle of key management tasks from key
creation to key retirement.
IBM Security Key Lifecycle Manager Version 1.0 supports the following operating systems:
AIX V5.3, 64-bit, Technology Level 5300-04, and Service Pack 5300-04-02, AIX 6.1 64 bit
Red Hat Enterprise Linux AS Version 4.0 on x86, 32-bit
SUSE Linux Enterprise Server Version 9 on x86, 32-bit, and V10 on x86, 32-bit
Sun Server Solaris 10 (SPARC 64-bit)
Remember: In Sun Server Solaris, Security Key Lifecycle Manager runs in a 32-bit
JVM.
Microsoft Windows Server 2003 R2 (32-bit Intel)
IBM z/OS® V1 Release 9, or later
For more information about Security Key Lifecycle Manager, see this website:
Chapter 6. Cabling expansions

This section describes cabling the disk shelf SAS connections and the optional ACP connections for a new storage system installation. Cabling the EXN3500 is similar to cabling the EXN3000. As a result, the information that is provided applies to both.
As of this writing, the maximum distance between controller nodes that are connected to
EXN3000 disk shelves is 5 meters. HA pairs with EXN3000 shelves are local, mirrored, or a
stretch MetroCluster, depending on the licenses that are installed for cluster failover.
The EXN3000 shelves are not supported for MetroClusters that span separate sites, nor are
they supported for fabric-attached MetroClusters.
The example that is used throughout is an HA pair with two 4-port SAS-HBA controllers in
each N series controller. The configuration includes two SAS stacks, each of which has three
SAS shelves.
Important: We recommend that you always use HA (dual path) cabling for all shelves that
are attached to N series heads.
6.1.1 Controller-to-shelf connection rules
Each controller connects to each stack of disk shelves in the system through the controller
SAS ports. These ports can be A, B, C, and D, and can be on a SAS HBA in a physical PCI
slot [slot 1-N] or on the base controller.
For quad-port SAS HBAs, the controller-to-shelf connection rules ensure resiliency for the
storage system that is based on the ASIC chip design. Ports A and B are on one ASIC chip,
and ports C and D are on a second ASIC chip. Because ports A and C connect to the top
shelf and ports B and D connect to the bottom shelf in each stack, the controllers maintain
connectivity to the disk shelves if an ASIC chip fails.
Figure 6-1 shows a quad-port SAS HBA with the two ASIC chips and their designated ports.
Figure 6-1 Quad-port SAS HBA with two ASIC chips
Quad-port SAS HBAs adhere to the following rules for connecting to SAS shelves:
HBA port A and port C always connect to the top storage expansion unit in a stack of
storage expansion units.
HBA port B and port D always connect to the bottom storage expansion unit in a stack of
storage expansion units.
Think of the four HBA ports as two units of ports. Port A and port C are the top connection
unit, and port B and port D are the bottom connection unit (see Figure 6-2). Each unit (A/C
and B/D) connects to each of the two ASIC chips on the HBA. If one chip fails, the HBA
maintains connectivity to the stack of storage expansion units.
Figure 6-2 Top and bottom cabling for quad-port SAS HBAs
SAS cabling is based on the following rules, which ensure that each controller is connected to the top storage expansion unit and the bottom storage expansion unit in a stack:
Controller 1 always connects to the top storage expansion unit IOM A and the bottom
storage expansion unit IOM B in a stack of storage expansion units
Controller 2 always connects to the top storage expansion unit IOM B and the bottom
storage expansion unit IOM A in a stack of storage expansion units
6.1.2 SAS shelf interconnects
SAS shelf interconnect adheres to the following rules:
All the disk shelves in a stack are daisy-chained when there is more than one disk shelf in
a stack.
IOM A circle port is connected to the next IOM A square port.
IOM B circle port is connected to the next IOM B square port.
Figure 6-3 shows how the SAS shelves are interconnected for two stacks with three shelves
each.
Figure 6-3 SAS shelf interconnect
6.1.3 Top connections
The top ports of the SAS shelves are connected to the HA pair controllers, as shown in
Figure 6-4.
Figure 6-4 SAS shelf cable top connections
6.1.4 Bottom connections
The bottom ports of the SAS shelves are connected to the HA pair controllers, as shown in
Figure 6-5.
Figure 6-5 SAS shelf cable bottom connections
Figure 6-5 shows a fully redundant example of SAS shelf connectivity. No single cable or shelf controller failure causes any interruption of service.
6.1.5 Verifying SAS connections
After you complete the SAS connections in your storage system by using the applicable
cabling procedure, verify the SAS connections. Complete the following steps to verify that the
storage expansion unit IOMs have connectivity to the controllers:
1. Enter the following command at the system console:
sasadmin expander_map
Tip: For Active/Active (high availability) configurations, run this command on both
nodes.
2. Review the output and perform the following tasks:
– If the output lists all of the IOMs, the IOMs have connectivity. Return to the cabling
procedure for your storage configuration to complete the cabling steps.
– If an IOM is not shown, it might be cabled incorrectly. The incorrectly cabled IOM and all of the IOMs downstream from it are not displayed in the output. Return to the cabling procedure for your storage configuration, correct the cabling errors, and verify SAS connectivity again.
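In addition to the expander map, a per-adapter view of the shelves and drive bays can help you confirm that a stack is cabled as intended. The following sketch assumes a SAS adapter name of 0a; adapter names differ by system, and the commands should be run on both nodes of an HA pair:

N6270A> sasadmin expander_map
N6270A> sasadmin shelf 0a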
6.1.6 Connecting the optional ACP cables
This section provides information about cabling the optional disk shelf ACP connections for a new storage system installation, as shown in Figure 6-6.
Figure 6-6 SAS shelf cable ACP connections
The following ACP cabling rules apply to all supported storage systems that use SAS storage:
You must use CAT6 Ethernet cables with RJ-45 connectors for ACP connections.
If your storage system does not have a dedicated network interface for each controller, you
must dedicate one for each controller at system setup. You can use a quad-port Ethernet
card.
All ACP connections to the disk shelf are cabled through the ACP ports, which are
designated by a square symbol or a circle symbol.
Enable ACP on the storage system by entering the following command at the console:
options acp.enabled on
Verify that the ACP cabling is correct by entering the following command:
storage show acp
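If you dedicate a network interface for ACP, the related options can also be set directly from the console, as in the following sketch. The interface name e0P and the subnet values are assumptions; the setup prompts and option names can vary by Data ONTAP release:

N6270A> options acp.port e0P
N6270A> options acp.domain 192.168.2.0
N6270A> options acp.netmask 255.255.252.0
N6270A> options acp.enabled on
N6270A> storage show acp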
For more information about cabling SAS stacks and ACP to an HA pair, see IBM System
Storage EXN3000 Storage Expansion Unit Hardware and Service Guide, which is available at
this website:
http://www.ibm.com/storage/support/nas
6.2 EXN4000 disk shelves cabling
This section describes the requirements for connecting an expansion unit to N series storage
systems and other expansion units. For more information about installing and connecting
expansion units in a rack, or connecting an expansion unit to your storage system, see the
Installation and Setup Instructions for your storage system.
6.2.1 Non-multipath Fibre Channel cabling
Figure 6-7 shows EXN4000 disk shelves that are connected to a HA pair with non-multipath
cabling. A single Fibre Channel cable or shelf controller failure might cause a takeover
situation.
Figure 6-7 EXN4000 dual controller non-multipath
Attention: Do not mix Fibre Channel and SATA expansion units in the same loop.
6.2.2 Multipath Fibre Channel cabling
Figure 6-8 shows four EXN4000 disk shelves in two separate loops that are connected to an
HA pair with redundant multipath cabling. No single Fibre Channel cable or shelf controller
failure causes a takeover situation.
Figure 6-8 EXN4000 dual controller with multipath
Tip: For N series controllers to communicate with an EXN4000 disk shelf, the Fibre
Channel ports on the controller or gateway must be set for initiator. Changing the behavior
of the Fibre Channel ports on the N series system can be performed by using the fcadmin
command.
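As a minimal sketch of that change, the following commands display the current port configuration and set an onboard port to initiator mode. The port name 0a is an assumption, the port might need to be taken offline before its type is changed, and a reboot is required for the new setting to take effect:

N6270A> fcadmin config
N6270A> fcadmin config -d 0a
N6270A> fcadmin config -t initiator 0a
N6270A> reboot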
6.3 Multipath HA cabling
A standard N series clustered storage system has multiple single points of failure on each shelf that can trigger a cluster failover (see Example 6-1). Cluster failovers can disrupt access
to data and put an increased workload on the surviving cluster node.
Example 6-1 Clustered system with a single connection to disks
N6270A> storage show disk -p
PRIMARY  PORT  SECONDARY  PORT  SHELF  BAY
-------  ----  ---------  ----  -----  ---
0a.16    A                          1    0
0a.18    A                          1    2
0a.19    A                          1    3
0a.20    A                          1    4
Multipath HA (MPHA) cabling adds redundancy, which reduces the number of conditions that
can trigger a failover, as shown in Example 6-2.
Example 6-2 Clustered system with MPHA connections to disks
N6270A> storage show disk -p
PRIMARY  PORT  SECONDARY  PORT  SHELF  BAY
-------  ----  ---------  ----  -----  ---
0a.16    A     0c.16      B         1    0
0c.17    B     0a.17      A         1    1
0c.18    B     0a.18      A         1    2
0a.19    A     0c.19      B         1    3
With only a single connection to the A channel, a disk loop is technically a daisy chain. When
any component (fiber cable, shelf cable, or shelf controller) in the loop fails, access is lost to
all shelves after the break, which triggers a cluster failover event.
MPHA cabling creates a true loop by providing a path into the A channel and out of the B
channel. Multiple shelves can experience failures without losing communication to the
controller. A cluster failover is only triggered when a single shelf experiences failures to the A
and B channels.
Chapter 7. Highly Available controller pairs
IBM System Storage N series Highly Available (HA) pair configuration consists of two nodes
that can take over and fail over their resources or services to counterpart nodes. This function
assumes that all resources can be accessed by each node. This chapter describes aspects of
determining HA pair status, and HA pair management.
In Data ONTAP 8.x, the recovery capability that is provided by a pair of nodes (storage systems) is called an HA pair. This pair is configured to serve data for each other if one of the two nodes stops functioning. Previously with Data ONTAP 7G, this function was called an Active/Active configuration.
This chapter includes the following sections:
HA pair overview
HA pair types and requirements
Configuring the HA pair
Managing an HA pair configuration
7.1 HA pair overview
An HA pair is two storage systems (nodes) whose controllers are connected to each other
directly. The nodes are connected to each other through an NVRAM adapter, or, in the case
of systems with two controllers in a single chassis, through an internal interconnect. This
allows one node to serve data on the disks of its failed partner node. Each node continually
monitors its partner, mirroring the data for each other’s nonvolatile memory (NVRAM or
NVMEM). Figure 7-1 shows a standard HA pair configuration.
Figure 7-1 Standard HA pair configuration
In a standard HA pair, Data ONTAP functions so that each node monitors the functioning of its
partner through a heartbeat signal that is sent between the nodes. Data from the NVRAM of
one node is mirrored to its partner. Each node can take over the partner’s disks or array LUNs
if the partner fails. The nodes also synchronize time.
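You can observe the heartbeat and interconnect state from the console of either node. The following commands are a minimal sketch (the node name is illustrative, and output is omitted here):

N6270A> cf status
N6270A> cf monitor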
7.1.1 Benefits of HA pairs
Configuring storage systems in an HA pair provides the following benefits:
Fault tolerance: When one node fails or becomes impaired, a takeover occurs and the
partner node serves the data of the failed node.
Nondisruptive software upgrades: When you halt one node and allow takeover, the partner
node continues to serve data for the halted node while you upgrade the node you halted.
Nondisruptive hardware maintenance: When you halt one node and allow takeover, the
partner node continues to serve data for the halted node. You can then replace or repair
hardware in the node you halted.
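A nondisruptive maintenance cycle on one node therefore follows a takeover and giveback sequence similar to the following sketch, which is run from the partner node that stays in service (node names are illustrative):

N6270B> cf takeover
(Upgrade or repair the halted node, then boot it until it reports that it is waiting for giveback.)
N6270B> cf status
N6270B> cf giveback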
Figure 7-2 shows an HA pair where Controller A failed and Controller B took over services
from the failing node.
Figure 7-2 Failover configuration
7.1.2 Characteristics of nodes in an HA pair
To configure and manage nodes in an HA pair, you must know the following characteristics
that all types of HA pairs have in common:
HA pairs are connected to each other. This connection can be through an HA interconnect
that consists of adapters and cable, or, in systems with two controllers in the same
chassis, through an internal interconnect. The nodes use the interconnect to perform the
following tasks:
– Continually check whether the other node is functioning.
– Mirror log data for each other’s NVRAM.
– Synchronize each other’s time.
They use two or more disk shelf loops (or third-party storage) in which the following
conditions apply:
– Each node manages its own disks or array LUNs.
– Each node in takeover mode manages the disks or array LUNs of its partner. For
third-party storage, the partner node takes over read/write access to the array LUNs
that are owned by the failed node until the failed node becomes available again.
Clarification: Disk ownership is established by Data ONTAP or the administrator,
rather than by the disk shelf to which the disk is attached.
They own their spare disks, spare array LUNs (or both) and do not share them with the
other node.
They each have mailbox disks or array LUNs on the root volume:
– Two if it is an N series controller system (four if the root volume is mirrored by using the
SyncMirror feature).
– One if it is an N series gateway system (two if the root volume is mirrored by using the
SyncMirror feature).
Tip: The mailbox disks or LUNs are used to perform the following tasks:
Maintain consistency between the pair
Continually check whether the other node is running or it ran a takeover
Store configuration information that is not specific to any particular node
They can be on the same Windows domain, or on separate domains.
7.1.3 Preferred practices for deploying an HA pair
To ensure that your HA pair is robust and operational, you must be familiar with the following guidelines:
Make sure that the controllers and disk shelves are on separate power supplies or grids so
that a single power outage does not affect both components.
Use virtual interfaces (VIFs) to provide redundancy and improve availability of network
communication.
Maintain a consistent configuration between the two nodes. An inconsistent configuration
is often the cause of failover problems.
Make sure that each node has sufficient resources to adequately support the workload of
both nodes during takeover mode.
Use the HA Configuration Checker to help ensure that failovers are successful.
If your system supports remote management by using a Remote LAN Management (RLM)
or Service Processor, ensure that you configure it properly.
Higher numbers of traditional volumes and FlexVols on your system can affect takeover
and giveback times.
When traditional volumes or FlexVols are added to an HA pair, consider testing the takeover and giveback times to ensure that they fall within your requirements.
For systems that use disks, check for and remove any failed disks.
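For the last two checks in the preceding list, the following commands give a quick view of takeover readiness and of any failed disks that should be removed. This is a sketch only; vol status -f and aggr status -r provide complementary views of broken disks:

N6270A> cf status
N6270A> vol status -f
N6270A> aggr status -r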
For more information about configuring an HA pair, see the Data ONTAP 8.0 7-Mode High-Availability Configuration Guide, which is available at this website:
http://www.ibm.com/storage/support/nas
7.1.4 Comparison of HA pair types
Table 7-1 on page 75 lists the types of N series HA pair configurations and where each might
be applied.
Table 7-1 Configuration types

Standard HA pair configuration:
  If A-SIS active: No
  Distance between nodes: Up to 500 meters (a)
  Failover possible after loss of entire node (including storage): No
  Notes: Use this configuration to provide higher availability by protecting against many hardware single points of failure.

Mirrored HA pair configuration:
  If A-SIS active: Yes
  Distance between nodes: Up to 500 meters (a)
  Failover possible after loss of entire node (including storage): No
  Notes: Use this configuration to add increased data protection to the benefits of a standard HA pair configuration.

Stretch MetroCluster:
  If A-SIS active: Yes
  Distance between nodes: Up to 500 meters (270 meters if Fibre Channel speed is 4 Gbps and 150 meters if Fibre Channel speed is 8 Gbps)
  Failover possible after loss of entire node (including storage): Yes
  Notes: Use this configuration to provide data and hardware duplication to protect against a local disaster.

Fabric-attached MetroCluster:
  If A-SIS active: Yes
  Distance between nodes: Up to 100 km depending on switch configuration. For gateway systems, up to 30 km.
  Failover possible after loss of entire node (including storage): Yes
  Notes: Use this configuration to provide data and hardware duplication to protect against a larger-scale disaster.

a. SAS configurations are limited to 5 meters between nodes.

Certain terms have the following particular meanings when they are used to refer to HA pair configuration:

An HA pair configuration is a pair of storage systems that are configured to serve data for each other if one of the two systems becomes impaired. In Data ONTAP documentation and other information resources, HA pair configurations are sometimes also called HA pairs.

When a system is in an HA pair configuration, systems are often called nodes. One node is sometimes called the local node, and the other node is called the partner node or remote node.

Controller failover, which is also called cluster failover (CFO), refers to the technology that enables two storage systems to take over each other's data. This configuration improves data availability.

FC direct-attached topologies are topologies in which the hosts are directly attached to the storage system. Direct-attached systems do not use a fabric or Fibre Channel switches.

FC dual fabric topologies are topologies in which each host is attached to two physically independent fabrics that are connected to storage systems. Each independent fabric can consist of multiple Fibre Channel switches. A fabric that is zoned into two logically independent fabrics is not a dual fabric connection.

FC single fabric topologies are topologies in which the hosts are attached to the storage systems through a single Fibre Channel fabric. The fabric can consist of multiple Fibre Channel switches.

iSCSI direct-attached topologies are topologies in which the hosts are directly attached to the storage controller. Direct-attached systems do not use networks or Ethernet switches.

iSCSI network-attached topologies are topologies in which the hosts are attached to storage controllers through Ethernet switches. Networks can contain multiple Ethernet switches in any configuration.

Mirrored HA pair configuration is similar to the standard HA pair configuration, except that there are two copies, or plexes, of the data. This configuration is also called data mirroring.

Remote storage refers to the storage that is accessible to the local node, but is at the location of the remote node.

Single storage controller configurations are topologies in which only one storage controller is used. Single storage controller configurations have a single point of failure and do not support cfmodes in Fibre Channel SAN configurations.

Standard HA pair configuration refers to a configuration set up in which one node automatically takes over for its partner when the partner node becomes impaired.
7.2 HA pair types and requirements
The following types of HA pairs are available, each having distinct advantages and
requirements:
Standard HA pairs
Mirrored HA pairs
Stretch MetroClusters
Fabric-attached MetroClusters
Each of these HA pair types is described in the following sections.
Tip: You must follow certain requirements and restrictions when you are setting up a new
HA pair configuration. These restrictions are described in the following sections.
7.2.1 Standard HA pairs
In a standard HA pair, Data ONTAP functions so that each node monitors the functioning of its
partner through a heartbeat signal that is sent between the nodes. Data from the NVRAM of
one node is mirrored by its partner. Each node can take over the partner’s disks or array
LUNs if the partner fails. Also, the nodes synchronize time.
Standard HA pairs have the following characteristics:
Standard HA pairs provide high availability by pairing two controllers so that one can serve
data for the other in case of controller failure or other unexpected events.
Data ONTAP functions so that each node monitors the functioning of its partner through a
heartbeat signal that is sent between the nodes.
Data from the NVRAM of one node is mirrored by its partner. Each node can take over the
partner’s disks or array LUNs if the partner fails.
Figure 7-3 shows a standard HA pair with native disk shelves without Multipath Storage.
Figure 7-3 Standard HA pair with native disk shelves without Multipath Storage
In the example that is shown in Figure 7-3, cabling is configured without redundant paths to
disk shelves. If one controller loses access to disk shelves, the partner controller can take
over services. Takeover scenarios are described later in this chapter.
Setup requirements and restrictions for standard HA pairs
The following requirements and restrictions apply for standard HA pairs:
Architecture compatibility: Both nodes must have the same system model and be running
the same firmware version. See the Data ONTAP Release Notes for the list of supported
systems, which is available at this website:
http://www.ibm.com/storage/support/nas
For systems with two controller modules in a single chassis, both nodes of the HA pair
configuration are in the same chassis and have internal cluster interconnect.
Storage capacity: The number of disks must not exceed the maximum configuration
capacity. The total storage that is attached to each node also must not exceed the
capacity for a single node.
Clarification: After a failover, the takeover node temporarily serves data from all the
storage in the HA pair configuration. When the single-node capacity limit is less than
the total HA pair configuration capacity limit, the total disk space in a HA pair
configuration can be greater than the single-node capacity limit. The takeover node can
temporarily serve more than the single-node capacity would normally allow if it does not
own more than the single-node capacity.
Disks and disk shelf compatibility:
– Fibre Channel, SAS, and SATA storage are supported in standard HA pair
configuration if the two storage types are not mixed on the same loop.
– One node can have only Fibre Channel storage and the partner node can have only
SATA storage, if needed.
HA interconnect adapters and cables must be installed unless the system has two
controllers in the chassis and an internal interconnect.
Nodes must be attached to the same network and the network interface cards (NICs) must
be configured correctly.
The same system software, such as Common Internet File System (CIFS), Network File
System (NFS), or SyncMirror must be licensed and enabled on both nodes.
For an HA pair that uses third-party storage, both nodes in the pair must see the same
array LUNs. However, only the node that is the configured owner of a LUN has read and
write access to the LUN.
Tip: If a takeover occurs, the takeover node can provide only the functionality for the
licenses that are installed on it. If the takeover node does not have a license that was used
by the partner node to serve data, your HA pair configuration loses functionality at
takeover.
License requirements
The cluster failover (cf) license must be enabled on both nodes.
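A short sketch of enabling controller failover after the license is installed follows. The license code is a placeholder, and the commands must be run on both nodes:

N6270A> license add XXXXXXX
N6270A> cf enable
N6270A> cf status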
7.2.2 Mirrored HA pairs
Mirrored HA pairs have the following characteristics:
Mirrored HA pairs provide high availability through failover, as do standard HA pairs.
Mirrored HA pairs maintain two complete copies of all mirrored data. These copies are
called plexes, and are continually and synchronously updated when Data ONTAP writes to
a mirrored aggregate.
The plexes can be physically separated to protect against the loss of one set of disks or
array LUNs.
Mirrored HA pairs use SyncMirror.
Restriction: Mirrored HA pairs do not provide the capability to fail over to the partner node if an entire node (including its storage) is lost. For this capability, use a MetroCluster.
Setup requirements and restrictions for mirrored HA pairs
The restrictions and requirements for mirrored HA pairs include those for a standard HA pair
with the following other requirements for disk pool assignments and cabling:
You must ensure that your disk pools are configured correctly:
– Disks or array LUNs in the same plex must be from the same pool, with those in the
opposite plex from the opposite pool.
– There must be sufficient spares in each pool to account for a disk or array LUN failure.
– Avoid having both plexes of a mirror on the same disk shelf because that configuration
results in a single point of failure.
If you are using third-party storage, paths to an array LUN must be redundant.
License requirements
The following licenses must be enabled on both nodes:
cf
syncmirror_local
7.2.3 Stretched MetroCluster
Stretch MetroCluster includes the following characteristics:
Stretch MetroClusters provide data mirroring and the ability to start a failover if an entire
site becomes lost or unavailable.
Stretch MetroClusters provide two complete copies of the specified data volumes or file
systems that you indicated as being mirrored volumes or file systems in an HA pair.
Data volume copies are called plexes, and are continually and synchronously updated
every time Data ONTAP writes data to the disks.
Plexes are physically separated from each other across separate groupings of disks.
The Stretch MetroCluster nodes can be physically distant from each other (up to
500 meters).
Remember: Unlike mirrored HA pairs, MetroClusters provide the capability to force a
failover when an entire node (including the controllers and storage) is unavailable.
Figure 7-4 shows a simplified Stretch MetroCluster.
Figure 7-4 Simplified Stretch MetroCluster
A Stretch MetroCluster can be cabled to be redundant or non-redundant, and aggregates can
be mirrored or unmirrored. Cabling for Stretch MetroCluster follows the same rules as for a
standard HA pair. The main difference is that a Stretch MetroCluster spans over two sites with
a maximum distance of up to 500 meters.
A MetroCluster provides the cf forcetakeover -d command, which gives a single command
to start a failover if an entire site becomes lost or unavailable. If a disaster occurs at one of the
node locations, your data survives on the other node. In addition, it can be served by that
node while you address the issue or rebuild the configuration.
In a site disaster, unmirrored data cannot be retrieved from the failing site. For the surviving
site to do a successful takeover, the root volume must be mirrored.
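In a site disaster, the surviving node declares the disaster with a single command, as in the following sketch. This is a drastic operation, so confirm that the partner site is really down before you run it. After the failed site is repaired and its mirrored aggregates are rejoined, service is returned with a giveback:

N6270A> cf forcetakeover -d
(Repair the remote site and resynchronize the mirrored aggregates.)
N6270A> cf giveback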
Setup requirements and restrictions for stretched MetroCluster
You must follow certain requirements and restrictions when you are setting up a new Stretch
MetroCluster configuration.
The restrictions and requirements for stretch MetroClusters include those for a standard HA
pair and those for a mirrored HA pair. The following requirements also apply:
SATA and Fibre Channel storage is supported on stretch MetroClusters, but both plexes of
the same aggregate must use the same type of storage.
For example, you cannot mirror a Fibre Channel aggregate with SATA storage.
MetroCluster is not supported on the N3300, N3400, and N3600 platforms.
The following distance limitations dictate the default speed that you can set:
– If the distance between the nodes is less than 150 meters and you have an 8 Gb FC-VI
adapter, set the default speed to 8 Gb. If you want to increase the distance to
270 meters or 500 meters, you can set the default speed to 4 Gb or 2 Gb.
– If the distance between nodes is 150 - 270 meters and you have an 8 Gb FC-VI
adapter, set the default speed to 4 Gb.
– If the distance between nodes is 270 - 500 meters and you have an 8 Gb FC-VI or 4 Gb
FC-VI adapter, set the default speed to 2 Gb.
If you want to convert the stretch MetroCluster configuration to a fabric-attached
MetroCluster configuration, unset the speed of the nodes before conversion. You can
unset the speed by using the unsetenv command.
License requirements
The following licenses must be enabled on both nodes:
cf
syncmirror_local
cf_remote

7.2.4 Fabric-attached MetroCluster
Like Stretched MetroClusters, Fabric-attached MetroClusters allow you to mirror data
between sites and to declare a site disaster, with takeover, if an entire site becomes lost or
unavailable.
The main difference from a Stretched MetroCluster is that all connectivity between controllers, disk shelves, and between the sites is carried over IBM/Brocade Fibre Channel switches. These are called the back-end switches.