Related documentation
General best practices
  Rename hosts to a user-friendly name
  Disk group initialization for linear storage
Best practices for monitoring array health
  Configure email and SNMP notifications
  Set the notification level for email and SNMP
  Sign up for proactive notifications for the HPE MSA 1040/2040/2042 array
Best practices for provisioning storage on the HPE MSA 1040/2040/2042
  Pool balancing
  Using the HPE MSA 2042 embedded SSDs
  Best practices when choosing drives for HPE MSA 1040/2040/2042 storage
Best practices to improve availability
  Dual power supplies
  Reverse cabling of expansion enclosures
  Create disk groups across expansion enclosures
Best practices to enhance performance
  Other methods to enhance array performance
Best practices for SSDs
  Use SSDs for randomly accessed data
  SSD and performance
Full disk encryption
  FDE on MSA 2040 arrays
Best practices for disk group expansion
  Disk group expansion capability for supported RAID levels
  Disk group expansion recommendations
  Recreate the disk group with additional capacity and restore data
Best practices for firmware updates
  General MSA 1040/2040/2042 device firmware update best practices
  MSA 1040/2040/2042 array controller or I/O module firmware update best practices
  MSA 1040/2040/2042 disk drive firmware update best practices
Miscellaneous best practices
  Using linear and virtual disk groups
  Boot from storage considerations
  8 Gb/16 Gb switches and SFP transceivers
This white paper highlights the best practices and recommendations from Hewlett Packard Enterprise (HPE) for optimizing the
HPE MSA 1040/2040/2042 storage arrays and should be used together with other HPE Modular Smart Array (MSA) manuals.
Intended audience
This white paper is intended for HPE MSA 1040/2040/2042 administrators with previous storage area network (SAN) knowledge. It offers practices that can contribute to the best MSA customer experience, and it conveys best practices for deploying HPE MSA 1040/2040/2042 arrays.
Prerequisites
Prerequisites for using MSA arrays include knowledge of:
• Networking
• Storage system configuration
• SAN management
• Connectivity methods such as direct attach storage (DAS), Fibre Channel, and serial attached SCSI (SAS)
• Internet SCSI (iSCSI) and Ethernet protocols
Related documentation
In addition to this guide, refer to other documents or materials for this product:
• HPE MSA 1040 Quick Start Instructions
• HPE MSA 1040 Installation Guide
• HPE MSA 1040 Cable Configuration Guide
• HPE MSA 1040 User Guide
• HPE MSA 1040/2040 CLI Reference Guide
• HPE MSA 1040/2040 SMU Reference Guide
• HPE MSA 2040 Quick Start Instructions
• HPE MSA 2040 Cable Configuration Guide
• HPE MSA 2040 User Guide
Technical white paper Page 4
Introduction
The HPE MSA 1040 is designed for entry-level markets. It supports 8 Gb Fibre Channel, 6/12 Gb SAS, and 1 GbE and 10 GbE iSCSI protocols.
The MSA 1040 arrays leverage a fourth-generation controller architecture with a new processor, two host ports per controller, and 4 GB of cache per controller.
An outline of the features of an MSA 1040 array includes:
• New controller architecture with a new processor
• 6 GB cache per controller (data [read/write] cache = 4 GB and metadata and system operating system memory = 2 GB)
• Support for small form factor (SFF) and large form factor (LFF) solid state drives (SSDs)
• 6 Gb/12 Gb SAS connectivity
• Two host ports per controller
• New web interface
• 4 Gb/8 Gb Fibre Channel connectivity
• 1 GbE/10 GbE iSCSI connectivity
• Support for:
– MSA fanout SAS cables
– Up to four disk enclosures including the array enclosure
– Up to 99 SFF drives and 48 LFF drives
– Thin Provisioning¹ (requires a license)
– Sub-LUN tiering¹ (requires a license)
– Read cache¹
– Performance Tier¹ (requires a license)
– Wide striping¹ (requires a license), which supports more hard drives behind a single volume to improve performance (for example, more than 16 drives for a volume)
The HPE MSA 2040 is a high-performance storage system designed for HPE customers who need 8 Gb or 16 Gb Fibre Channel, 6 Gb SAS
or 12 Gb SAS, and 1 GbE or 10 GbE iSCSI connectivity with four host ports per controller. The MSA 2040 storage system provides an
excellent value for customers who need performance balanced with price to support initiatives such as consolidation and virtualization.
The MSA 2040 delivers this performance by offering:
• New controller architecture with a new processor
• 6 GB cache per controller (data [read/write] cache = 4 GB and metadata and system operating system memory = 2 GB)
• Support for SFF and LFF SSDs
• Four host ports per controller
• 4 Gb/8 Gb/16 Gb Fibre Channel connectivity
• 6 Gb/12 Gb SAS connectivity
• 1 GbE/10 GbE iSCSI connectivity
• New web interface¹
¹ With GL200 and later firmware. With GL220 and later firmware for MSA 1040 SSD features. Creation of an SSD virtual disk group for both read and write capabilities requires a Performance Auto Tiering License (D4T79A/D4T79AAE).
• Support for:
– Both Fibre Channel and iSCSI in a single controller
– Up to eight disk enclosures including the array enclosure
– Up to 199 SFF drives and 96 LFF drives
– Thin Provisioning¹
– Sub-LUN tiering¹
– Read cache¹
– Performance Tier¹ (requires a license)
– Wide striping¹, which supports more hard drives (more than 16) behind a single volume to improve performance
– Full drive encryption (FDE) using self-encrypting drives (SED)²
The HPE MSA 2042 SAN Storage offers an entry-level storage platform with built-in hybrid flash for application acceleration and high performance. It is ideal for performance-hungry applications and includes 800 GB of SSD capacity.
The all-inclusive set-and-forget software suite includes built-in, real-time data tiering that dynamically moves frequently used data to flash and less frequently used data to lower-cost media tiers. This suite includes 512 snapshots for instant recovery and remote replication for affordable application availability. The software delivers management tools that are ideal for IT generalists and server administrators.
An outline of the features of an MSA 2042 array includes:
• The industry’s fastest entry array, now with 800 GB of SSD capacity, standard
• Built-in, real-time tiering that dynamically moves hot data to flash and cold data to lower-cost media, standard
• 512 snapshots out-of-the-box for instant recovery, standard
• Remote replication included for affordable application availability, standard
• New 400 GB and 800 GB LFF SSDs
• All MSA 2042 models ship standard with the HPE Advanced Data Services (ADS) Software Suite LTU. Software titles included in the
ADS Software Suite include:
– HPE MSA 2040 Performance Automated Tiering LTU
– HPE MSA 512-Snapshot Software LTU
– HPE MSA Remote Snap Software LTU
Important
The ADS Software Suite license key must be installed to enable the services.
The HPE MSA 1040/2040/2042 storage system brings the performance benefits of SSDs to MSA array family customers. These arrays
have been designed to maximize performance by using high-performance drives across all applications sharing the array.
The HPE MSA 2040/2042 storage systems are positioned to provide an excellent value for customers who need increased performance to
support initiatives such as consolidation and virtualization.
The HPE MSA 1040/2040 storage systems ship standard with a license for 64 snapshots and Volume Copy for increased data protection.
There is also an optional license for 512 snapshots. With the optional MSA Remote Snap, the MSA array can replicate linear volumes
between the MSA 1040/2040/2042 and P2000 G3 or MSA 1040/2040/2042, and virtual volumes between two MSA 1040/2040/2042
arrays running GL220 and later firmware.
² SED drives are only supported in the MSA 2040.
Terminology
Terms and concepts used in this white paper include:
• Virtual disk (Vdisk): The Vdisk nomenclature is being replaced by “disk group.” Linear storage and the HPE Storage Management Utility (SMU) Version 2 use “Vdisk”; virtual storage and SMU Version 3 use “disk group.” Vdisk and disk group are essentially the same. Vdisks (linear disk groups) have additional RAID types: NRAID, RAID 0, and RAID 3 are available only in the CLI, and RAID 50 is available in both the CLI and SMU.
• Linear storage: Linear storage is the traditional storage technology that has been used for four MSA generations. With linear storage,
the user specifies which drives make up a RAID group and all storage is fully allocated.
• Virtual storage: Virtual storage is an extension of linear storage. Data is virtualized not only across a single disk group, as in the linear
implementation, but also across multiple disk groups with different performance capabilities and use cases.
• Page: A page is an individual block of data residing on a physical disk. For virtual storage, the page size is 4 MB. A page is the smallest
unit of data that can be allocated, de-allocated, or moved between virtual disk groups in a tier or between tiers.
• Disk group: A disk group is a collection of disks in a given redundancy mode (RAID 1, 5, 6, or 10 for virtual disk groups and NRAID and
RAID 0, 1, 3, 5, 6, 10, or 50 for linear disk groups). A disk group is equivalent to a Vdisk in linear storage and uses the same proven fault
tolerant technology used by linear storage. You can create disk group RAID levels and size based on performance and capacity
requirements. With GL200 and later firmware, you can allocate multiple virtual disk groups into a storage pool for use with virtual
storage features. Although linear disk groups are also in storage pools, there is a one-to-one correlation between linear disk groups and
their associated storage pools.
• Storage pools: GL200 and later firmware introduces storage pools, which comprise one or more virtual disk groups or one linear disk group.
For virtual storage, LUNs are no longer restricted to a single disk group as with linear storage. A volume’s data on a given LUN can now span
all disk drives in a pool. When capacity is added to a system, users benefit from the performance of all spindles in that pool.
When leveraging storage pools, the MSA 1040/2040/2042 supports large, flexible volumes with sizes up to 128 TB and facilitates
seamless capacity expansion. As volumes are expanded, data automatically reflows to balance capacity utilization on all drives.
• Logical unit number (LUN): The MSA 1040/2040/2042 arrays support 512 volumes and up to 512 snapshots in a system. All these volumes can be mapped to LUNs. Maximum LUN sizes are up to 128 TB; LUN sizes depend on the storage architecture, that is, linear or virtual. Thin Provisioning enables you to create the LUNs independent of the physical storage.
• Thin Provisioning: Thin Provisioning allows storage allocation of physical storage resources only when they are consumed by an
application. Thin Provisioning also allows over-provisioning of physical storage pool resources, allowing volumes to grow easily without
requiring storage capacity to be predicted upfront.
• Thick Provisioning: All storage is fully allocated with Thick Provisioning. Linear storage always uses Thick Provisioning.
• Tiers: Disk tiers are created by aggregating one or more disk groups of similar physical disks. MSA 1040/2040/2042 arrays support three
distinct tiers:
– A performance tier with SSDs
– A standard SAS tier with enterprise SAS hard disk drives (HDDs)
– An archive tier with midline SAS HDDs
With firmware earlier than GL200, the MSA 1040/2040 operated through manual tiering, where LUN level tiers are manually created and
managed by using dedicated Vdisks and volumes. LUN level tiering requires careful planning such that applications requiring the highest
performance are placed on Vdisks using high-performance SSDs. Applications with lower performance requirements can be placed on
Vdisks comprising enterprise SAS or midline SAS HDDs. Beginning with GL200 and later firmware, the MSA 1040/2040/2042 arrays
support sub-LUN tiering and automated data movement between tiers.
The MSA 1040/2040/2042 automated tiering engine moves data between available tiers based on the access characteristics of that
data. Frequently accessed data contained in pages migrates to the highest available tier delivering maximum I/Os to the application.
Similarly, “cold” or infrequently accessed data is moved to lower-performance tiers. Data is migrated between tiers automatically such that
I/Os are optimized in real-time.
The archive and standard tiers are provided at no charge on the MSA 2040 platform beginning with GL200 and later firmware. The
MSA 1040 requires a license when using archive and standard tiers. A performance tier using a fault-tolerant SSD disk group is a paid
feature that requires a license for both the MSA 1040 and MSA 2040. Without the performance tier license installed, SSDs can still be
used as read cache with the sub-LUN tiering feature. Sub-LUN tiering from SAS midline (archive tier) to enterprise SAS (standard tier)
drives is provided at no charge for the MSA 2040.
Note
The MSA 1040 requires a license to enable Sub-LUN tiering and other virtual storage features such as Thin Provisioning.
• Read cache: Read cache is an extension of the controller cache. Read cache allows a lower-cost way to get performance improvements
from SSD drives.
• Sub-LUN tiering: Sub-LUN tiering is a technology that enables the automatic movement of data between storage tiers based on access
trends. In MSA 1040/2040/2042 arrays, sub-LUN tiering places data in a LUN that is accessed frequently in higher-performing media
and data that is infrequently accessed is placed in slower media.
General best practices
This section outlines some general best practices when administering an MSA 1040/2040/2042 storage system:
• Become familiar with the array by reading the manuals: The first recommended best practice is to read the corresponding guides for
the HPE MSA 1040 or HPE MSA 2040/2042. These documents include the User Guide, the Storage Management Utility (SMU)
Reference Guide, or the CLI Reference Guide. The appropriate guide depends on the interface that you will use to configure the storage
array. Always operate the array in accordance with the user manual. In particular, never exceed the environmental operation
requirements.
Another HPE MSA 1040 and HPE MSA 2040/2042 document of importance to review is the HPE MSA Remote Snap Technical white
paper located at: h20195.www2.hpe.com/v2/GetPDF.aspx/4AA1-0977ENW.pdf
• Implement virtual disk groups: Beginning with the release of the GL200 firmware, storage administrators can implement features such
as Thin Provisioning, sub-LUN tiering, read cache, and wide striping. Hewlett Packard Enterprise recommends using virtual storage to
take advantage of the advanced virtualization features of the firmware.
• Use version 3 of the Storage Management Utility: With the release of the GL200 firmware, there is an updated version of the Storage
Management Utility (SMU). This new web GUI enables you to use the new features of the GL200 firmware. SMU V3 must be used for
virtual volume replication.
SMU V3 is the recommended web GUI. SMU V3 can be accessed by adding “/v3” to the IP address of the MSA array:
https://<MSA array IP>/v3
The minimum required web GUI is SMU V2 if you are using the replication features of the MSA 1040/2040 for linear volumes. SMU V2
can be accessed by adding “/v2” to the IP address of the MSA array: https://<MSA array IP>/v2
• Stay current on firmware: Use the latest controller, disk, and expansion enclosure firmware to benefit from the continual
improvements in the performance, reliability, and functionality of HPE MSA 1040/2040/2042 arrays. For additional information,
refer to the release notes and release advisories for the respective MSA products. You can locate this information at:
hpe.com/storage/msa1040, hpe.com/storage/msa2040, or hpe.com/storage/msa2042.
• Use tested and supported configurations: Deploy the MSA array only in supported configurations. Do not risk the availability of your
critical applications to unsupported configurations. Hewlett Packard Enterprise does not recommend or provide support for
unsupported MSA configurations.
The primary HPE portal to obtain detailed information about supported HPE Storage product configurations is Single Point of
Connectivity knowledge (SPOCK). An HPE Passport account is required to enter the SPOCK website. You can access SPOCK at:
hpe.com/storage/spock
• Understand what a host is from the array perspective: An initiator is analogous to an external port on a host bus adapter (HBA). An
initiator port does not equate to a physical server, but rather a unique connection on that server. For example, because a dual-port Fibre
Channel HBA has two ports, there are two unique initiators, and the array will show two separate initiators for that HBA.
With the new GL200 firmware, there is a new definition for host. A host is a collection of one or more initiators. GL200 firmware also
supports more initiators than previous versions of MSA 1040/2040 firmware, which supported only 64 hosts with one initiator per host.
The latest firmware can support 512 hosts with multiple initiators per host. With virtual storage, the MSA can manage 1024 initiators.
With the GL200 firmware, the array supports the grouping of initiators under a single host and grouping hosts into a host group.
Grouping of initiators and hosts allows simplification of the mapping operations.
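The initiator, host, and host group relationships described above can be modeled in a short sketch. This is illustrative Python only; the class and method names are invented and are not part of any MSA API. It shows why a dual-port Fibre Channel HBA appears as two initiators, and how grouping lets one mapping operation cover every path to a cluster.

```python
class HostModel:
    """Illustrative GL200-style host model: initiators -> hosts -> host groups."""
    def __init__(self):
        self.hosts = {}    # host name -> set of initiator WWNs
        self.groups = {}   # group name -> set of host names

    def add_initiator(self, host, wwn):
        self.hosts.setdefault(host, set()).add(wwn)

    def add_to_group(self, group, host):
        self.groups.setdefault(group, set()).add(host)

    def initiators_for_group(self, group):
        # All WWNs a single mapping to this group would reach.
        return sorted(w for h in self.groups.get(group, ())
                        for w in self.hosts.get(h, ()))

m = HostModel()
# A dual-port FC HBA shows up as two distinct initiators on the array:
m.add_initiator("esx01", "10:00:00:90:fa:00:00:01")
m.add_initiator("esx01", "10:00:00:90:fa:00:00:02")
m.add_initiator("esx02", "10:00:00:90:fa:00:00:03")
m.add_to_group("vmware-cluster", "esx01")
m.add_to_group("vmware-cluster", "esx02")
print(len(m.initiators_for_group("vmware-cluster")))  # 3
```

Mapping a volume once to the hypothetical "vmware-cluster" group reaches all three initiator paths, which is the simplification the grouping feature provides.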
Rename hosts to a user-friendly name
Applying friendly names to the hosts enables easy identification of the hosts that are associated with servers and operating systems. A
recommended method for acquiring and renaming Worldwide Name (WWN) is to connect one cable at a time and then rename the WWN to
an identifiable name.
The following procedure outlines the steps needed to rename hosts using SMU v3.
1. Log in to the SMU and from the left frame, click Hosts.
2. Locate and highlight the WWN (ID) you want to name.
3. From the Action button, click Modify Initiator.
4. Enter the initiator nickname and click OK.
5. Repeat the process for additional initiator connections.
Figure 1. Renaming hosts
The recommended practice is to rename initiators as outlined in Figure 1, and to use SMU v3 to aggregate initiators into hosts and to group hosts into host groups.
Disk group initialization for linear storage
During the creation of a disk group for linear storage, you have the option to create a disk group in online mode (default) or offline mode. If
you enable the Online initialization option, you can use the disk group while it is initializing. Online initialization takes more time because
parity initialization is used during the process to initialize the disk group. Online initialization is supported for all RAID levels on MSA
1040/2040/2042 arrays except for RAID 0 and NRAID. Online initialization does not impact fault tolerance.
If the Online Initialization option is unchecked, which equates to offline initialization, you must wait for initialization to complete before
using the disk group for linear storage, but the initialization takes less time to complete.
Figure 2. Choosing online or offline initialization
Best practices for monitoring array health
Setting up the array to send notifications is important for troubleshooting and log retention.
Configure email and SNMP notifications
SMU v3 is the recommended method for setting up email and SNMP notifications. You can set up these services easily by using a web browser. To connect, enter the IP address of the management port of the MSA 1040/2040/2042.
You can send email notifications to up to three different email addresses. In addition to the email notification,
Hewlett Packard Enterprise recommends sending managed logs with the Include logs as an email attachment option enabled. With this
feature enabled, the system automatically attaches the system log files to the managed logs email notifications sent. The managed logs
email notification is sent to an email address that will retain the logs for future diagnostic investigation.
The MSA 1040/2040/2042 storage system has a limited amount of space to retain logs. When this log space is exhausted, the oldest
entries in the log are overwritten. For most systems this space is adequate to allow for diagnosing issues seen on the system. The managed
logs feature notifies the administrator that the logs are nearing a full state and that older information will soon start to be overwritten. The
administrator can then choose to manually save off the logs. If Include logs as an email attachment is also checked, the segment of logs
that is nearing a full state will be attached to the email notification. Managed logs attachments can be multiple MB in size.
Enabling the managed logs feature allows log files to be transferred from the storage system to a log-collection system to avoid losing
diagnostic data. The option is disabled by default.
Hewlett Packard Enterprise recommends enabling SNMP traps. SNMPv1 traps can be sent to up to three host trap addresses (that is, to an
HPE SIM server or another SNMP server). To send SNMPv3 traps, create an SNMPv3 user with the Trap Target account type. Use SNMPv3
traps rather than SNMPv1 traps for greater security. SNMP traps can be useful in troubleshooting issues with the MSA 1040/2040/2042
array.
To configure email and SNMPv1 settings in the SMU, click Home → Action → Set Up Notifications. Enter the correct information for
SNMP, email, and managed logs, as shown in Figure 4.
Figure 3. Setting Up Management services
Figure 4. SNMP, Email, and Managed Logs Notification Settings
To configure SNMPv3 users and trap targets, click Home → Action → Manage Users as shown in Figure 5.
Figure 5. Manage Users
Enter the correct information for SNMPv3 trap targets as shown in Figure 6.
Figure 6. User Management
Set the notification level for email and SNMP
Setting the notification level to Warning, Error, or Critical on email and SNMP configurations ensures that events of that level or higher are sent to the destinations (that is, an SNMP server or SMTP server) set for that notification. Hewlett Packard Enterprise recommends setting the notification level to Warning.
HPE MSA 1040/2040/2042 notification levels perform as follows:
• Warning sends notifications for all warning, error, or critical events.
• Error only sends error and critical events.
• Critical only sends critical events.
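The three levels above form a simple severity floor: a level passes events of its own severity and anything more severe. The sketch below is illustrative only (the severity ranking table is an assumption drawn from the list above, not an MSA data structure).

```python
# Severity ranking implied by the notification levels described above.
SEVERITY = {"informational": 0, "warning": 1, "error": 2, "critical": 3}

def should_notify(level, event_severity):
    """True if an event at event_severity is sent under the given level."""
    return SEVERITY[event_severity] >= SEVERITY[level]

# "Warning" (the recommended setting) passes warning, error, and critical:
assert should_notify("warning", "critical")
assert not should_notify("warning", "informational")
# "Error" drops warnings; "Critical" drops errors:
assert not should_notify("error", "warning")
assert not should_notify("critical", "error")
```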
Sign up for proactive notifications for the HPE MSA 1040/2040/2042 array
Sign up for proactive notifications to receive MSA product advisories. Applying the suggested resolutions can enhance the availability of the
product.
Sign up for the notifications at: hpe.com/go/myadvisory
Best practices for provisioning storage on the HPE MSA 1040/2040/2042
The release of the GL200 firmware for the MSA 1040/2040/2042 introduced virtual storage features such as Thin Provisioning, wide
striping, and sub-LUN tiering. This section describes the best methods for optimizing these features for the MSA 1040/2040/2042.
Thin Provisioning
Thin Provisioning is a storage allocation scheme that automatically allocates storage as your applications need it.
Thin Provisioning dramatically increases storage utilization by decoupling allocated capacity from purchased capacity. Traditionally, application administrators purchased storage based on the capacity required at the moment plus expected future growth, which resulted in over-purchased capacity and unused space.
With Thin Provisioning, applications can be provided with all the capacity they need to grow but can begin operating on a smaller amount of
physical storage. As the applications fill their storage, new storage can be purchased as needed and added to the array’s storage pools. This
results in a more efficient utilization of storage and a reduction in power and cooling requirements.
Thin Provisioning is enabled by default for virtual storage. The overcommit setting only applies to virtual storage and simply lets you
oversubscribe the physical storage (that is, provision volumes in excess of physical capacity). If you disable overcommit, you can only
provision virtual volumes up to the available physical capacity. The overcommit setting is not applicable on traditional linear storage.
Overcommit is performed on a per-pool basis using the Change Pool Settings option. To disable the overcommit Pool Settings:
1. Open SMUv3 and select Pools.
2. Click Change Pool Settings.
3. Deselect Enable overcommitment of pool? by clicking the box. Refer to Figures 7 and 8.
Figure 7. Change Pool Settings
Figure 8. Disabling overcommitment of the pool
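The effect of the per-pool overcommit setting can be sketched as a provisioning check. This is illustrative only, not the firmware's logic: with overcommit enabled, the sum of volume sizes may exceed physical capacity; with it disabled, provisioning stops at the pool's physical limit.

```python
def can_provision(physical_tb, provisioned_tb, new_volume_tb, overcommit):
    """Illustrative overcommit check for a virtual pool."""
    if overcommit:
        # Thin volumes may oversubscribe the pool's physical capacity.
        return True
    # With overcommit disabled, total provisioned capacity is capped
    # at the available physical capacity.
    return provisioned_tb + new_volume_tb <= physical_tb

# 20 TB physical pool with 15 TB already provisioned; request 10 TB more:
print(can_provision(20, 15, 10, overcommit=True))    # True
print(can_provision(20, 15, 10, overcommit=False))   # False
```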
Thresholds and notifications
If you use Thin Provisioning, monitor space consumption and set notification thresholds appropriately for the rate of storage consumption.
The following thresholds and notifications can help determine when more storage needs to be added. Users with a manage role can view
and change settings that affect the thresholds and corresponding notifications for each storage pool.
• Low Threshold—When this percentage of pool capacity has been used, Informational event 462 is generated to notify the
administrator. This value must be less than the Mid Threshold value. The default is 25%.
• Mid Threshold—When this percentage of pool capacity has been used, Warning event 462 is generated to notify the administrator
to add capacity to the pool. This value must be between the Low Threshold and High Threshold values. The default is 50%. If the
overcommitment setting is enabled, the event has informational severity; if the overcommitment setting is disabled, the event has
Warning severity.
• High Threshold—When this percentage of pool capacity has been used, Warning event 462 is generated to alert the administrator that
it is critical to add capacity to the pool. This value is automatically calculated based on the available capacity of the pool minus reserved
space. This value cannot be changed by the user.
Refer to Figures 7 and 8 for examples of setting the thresholds.
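The three thresholds above can be read as a small decision table. The sketch below is illustrative, not firmware code: it uses the default 25%/50% low/mid thresholds from the text, while the 90% high threshold is an assumed sample value (on the array it is calculated automatically and cannot be changed).

```python
def pool_event(used_pct, low=25, mid=50, high=90, overcommit=True):
    """Illustrative severity of event 462 for a given pool utilization."""
    if used_pct >= high:
        return "warning-high"          # critical to add capacity
    if used_pct >= mid:
        # Mid-threshold severity depends on the overcommit setting:
        return "informational-mid" if overcommit else "warning-mid"
    if used_pct >= low:
        return "informational-low"     # informational heads-up
    return None                        # below all thresholds

print(pool_event(30))                     # informational-low
print(pool_event(60, overcommit=False))   # warning-mid
print(pool_event(95))                     # warning-high
```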
T10 unmap for thin reclaim
Unmap is the ability to reclaim thinly provisioned storage after the storage is no longer needed. There are procedures to reclaim unmapped space when using Thin Provisioning and VMware ESX®.
The user should run the unmap command with ESX 5.0 Update 1 or later to avoid performance issues. With ESX 5.0, unmap is automatically
executed when deleting or moving a Virtual Machine.
In ESX 5.0 Update 1 and later, the unmap command was decoupled from auto reclaim; therefore, use the VMware vSphere® CLI command to
run the unmap command.
Refer to VMware® documentation for more details on the unmap command and reclaiming space.
Pool balancing
Creating and balancing storage pools properly can help with performance of the MSA array. Hewlett Packard Enterprise recommends
keeping pools balanced from a capacity utilization and performance perspective. Pool balancing leverages both controllers and balances the workload across the two pools.
Assuming symmetrical composition of storage pools, create and provision storage volumes by the workload that will be used. For example,
an archive volume would be best placed in a pool with the most available archive tier space. For a high-performance volume, create the disk
group on the pool that is getting the least amount of I/O on the standard and performance tiers.
Pool space can easily be viewed in SMUv3. Simply navigate to Pools and click the name of the pool.
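The placement guidance above can be sketched as a selection rule. This is illustrative only; the pool statistics and function names are invented examples, not values read from an array. Archive volumes go to the pool with the most free archive-tier space, while high-performance volumes go to the pool whose standard and performance tiers are least busy.

```python
# Hypothetical per-pool statistics, as an administrator might note them
# down from the SMU Pools view:
pools = {
    "A": {"archive_free_tb": 8.0, "std_perf_iops": 12000},
    "B": {"archive_free_tb": 3.5, "std_perf_iops": 4000},
}

def pick_pool(workload):
    """Illustrative pool choice for a new volume, by workload type."""
    if workload == "archive":
        # Most available archive-tier space wins.
        return max(pools, key=lambda p: pools[p]["archive_free_tb"])
    # High-performance: least I/O load on standard/performance tiers.
    return min(pools, key=lambda p: pools[p]["std_perf_iops"])

print(pick_pool("archive"))            # A
print(pick_pool("high-performance"))   # B
```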