Intel® RAID Software User’s Guide:
•Intel® Embedded Server RAID Technology 2
•Intel® IT/IR RAID
•Intel® Integrated Server RAID
•Intel® RAID Controllers using the Intel® RAID Software Stack 3
Revision 19.0
April, 2012
Intel Order Number: D29305-019
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL®
PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY
INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS
PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL
ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED
WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY
OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE,
MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER
INTELLECTUAL PROPERTY RIGHT. Intel products are not intended for use in medical, life saving, or life sustaining applications. Intel may make changes to specifications and product descriptions at any time, without notice.
Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United
States and other countries.
The software described in this document is designed for use with Intel® RAID controllers, and with on-serverboard RAID solutions that use the Intel® RAID Software Stack 3 (driver package names begin with “ir3”), Intel® Embedded Server RAID Technology 2 (driver package names begin with ESRT2), or Intel® IT/IR RAID.
Supported Hardware
This manual covers the software stack that is shared by multiple Intel® server products:
•Intel® Embedded Server RAID Technology 2 (ESRT2) on the Intel® Enterprise South Bridge 2 (ESB2) in the chipset, the Intel® I/O Controller Hub 9R (ICH9R), the Intel® 3420 PCH chipset, and the Intel® C200 series chipset and Intel® C600 series chipset used in the following:
—Intel® Server Board S1200BTL/S1200BTS
—Intel® Server Boards based on the Intel® S5000 and S7000 chipsets
—Intel® Server Boards based on the Intel® 5500/5520 chipset with the Intel® I/O Controller Hub 10R (ICH10R)
—Intel® Server Boards that include the LSI* 1064e SAS (Serial Attached SCSI) controller and some that include the LSI* 1068 SAS controller
—Intel® Server Boards S3420GP
—Intel® Server Boards S3200SH and X38ML
—Intel® SAS Entry RAID Module AXX4SASMOD (when the module is in ESRT2 mode)
—Intel® RAID Controller SASMF8I
Intel® Embedded Server RAID Technology 2 provides driver-based RAID modes 0, 1, and 10, with an optional RAID 5 mode provided by the Intel® RAID C600 Upgrade Key RKSATA4R5, RKSATA8R5, RKSAS4R5, or RKSAS8R5.
Intel® Embedded Server RAID Technology 2 also provides driver-based RAID modes 0, 1, and 10, with an optional RAID 5 mode provided by the Intel® RAID Activation Key AXXRAKSW5 on the ESB2 and the LSI* 1064e on some models of Intel® server boards.
ESB2 supports SATA only.
LSI* SAS 1064e and 1068 provide SATA (Serial ATA) and SAS support. Not all 1068 SAS boards provide Intel® Embedded Server RAID Technology 2 modes.
Intel® Embedded Server RAID Technology 2 must be enabled in the server system BIOS before it is available. Intel® Embedded Server RAID Technology 2 is limited to a maximum of eight drives, including hot spare(s). Expander devices are not yet supported by ESRT2.
•Intel® IT/IR RAID solutions with the following Intel® IT/IR RAID controllers:
—Intel® RAID Controller SASWT4I
—Intel® RAID Controller SASUC8I
—Intel® RAID SAS Riser Controller AFCSASRISER in the Intel® Server System S7000FC4UR without the Intel® SAS RAID Activation Key AXXRAKSAS2 installed
—Intel® SAS Entry RAID Module AXX4SASMOD
—Intel® 6G SAS PCIe Gen2 RAID Modules RMS2LL080 and RMS2LL040
•Intel® Integrated RAID Technology on Intel® ROMB solutions. Server boards and systems include:
—Intel® Server Board S5000PSL (product code: S5000PSLROMB)
—Intel® Server System SR1550AL (product code: SR1550ALSAS)
—Intel® Server System SR2500 (product code: SR2500LX)
—Intel® Server System SR4850HW4s
—Intel® Server System SR6850HW4s
—Intel® Server System S7000FC4UR with a SAS riser card
—Intel® Server Boards S3420GP, S5520HC/S5500HCV, S5520UR, S5520SC, and S5500WB12V/S5500WB with the Intel® Integrated RAID Controller SROMBSASMR
Systems using the Intel® RAID Controller SROMBSAS18E provide XOR RAID modes 0, 1, 5, 10, and 50 when the optional Intel® RAID Activation Key AXXRAK18E and a DDR2 400 MHz ECC DIMM are installed.
Systems using the Intel® RAID Controller SROMBSASFC or SROMBSASMP2 require the optional Intel® RAID Activation Key AXXRAKSAS2 and a DDR2 667 MHz ECC DIMM to provide RAID modes 0, 1, 5, 6, 10, 50, and 60.
The Intel® Integrated RAID Controller SROMBSASMR has a specially designed connector that only fits the Intel® Server Boards S5520HC/S5500HCV, S5520UR, S5520SC, and S5500WB12V/S5500WB.
Note: This manual does not include the software RAID modes provided by the SAS riser card on the Intel® Server System S7000FC4UR. This manual also does not include the RAID modes provided by the FALSASMP2 without the Intel® RAID Activation Key AXXRAKSAS2.
The Intel® RAID controllers (RS25AB080, RS25SB008, RS25DB080, RS25NB008, RS2VB080, RS2VB040, RT3WB080, RS2SG244, RS2WG160, RS2BL080, RS2BL080SNGL, RS2BL080DE, RS2BL040, RS2PI008, RS2PI008DE, RS2MB044, RS2WC080, RS2WC040, RMS2MH080, RMS2AF080, and RMS2AF040) support SAS 2.0 new features with XOR RAID modes 0, 1, 5, 6, 10, 50, and 60. (RS2WC080 and RS2WC040 are entry-level hardware RAID controllers and do not support RAID 6 and 60; RMS2AF080 and RMS2AF040 are entry-level hardware RAID controllers and do not support RAID 10, 6, and 60.)
For more details, refer to the Technical Product Specification (TPS) or Hardware User's Guide (HWUG) for the RAID controllers.
Note: The Intel® RAID Controllers RMS2AF080, RMS2AF040, RS2WC080, and RS2WC040 only support strip sizes of 8KB, 16KB, 32KB, and 64KB. Also, their Cache Policy only supports Write Through, Direct I/O, and Normal RAID (No Read Ahead). For more details, refer to their Hardware User's Guide (HWUG).
This manual does not include information about native SATA or SAS-only modes of the RAID
controllers.
Two versions of the Intel® RAID Controller RS2BL080 are available: RS2BL080 and RS2BL080DE. All features of RS2BL080 are supported on RS2BL080DE; in addition, RS2BL080DE provides FDE (Full Disk Encryption), which RS2BL080 does not support.
Two versions of the Intel® RAID Controller RS2PI008 are available: RS2PI008 and RS2PI008DE. All features of RS2PI008 are supported on RS2PI008DE; in addition, RS2PI008DE provides FDE (Full Disk Encryption), which RS2PI008 does not support.
Caution: Some levels of RAID are designed to increase the availability of data and some to provide data
redundancy. However, installing a RAID controller is not a substitute for a reliable backup strategy.
It is highly recommended you back up data regularly via a tape drive or other backup strategy to
guard against data loss. It is especially important to back up all data before working on any system
components and before installing or changing the RAID controller or configuration.
Software
Intel® Embedded Server RAID Technology 2, Intel® IT/IR RAID, and Intel® Integrated Server RAID controllers include a set of software tools to configure and manage RAID systems.
These include:
•Intel® RAID controller software and utilities: The firmware installed on the RAID controller provides pre-operating-system configuration.
—For Intel® Embedded Server RAID Technology 2, press <Ctrl> + <E> during the server boot to enter the BIOS configuration utility.
—For Intel® IT/IR RAID, press <Ctrl> + <C> during the server boot to enter the LSI MPT* SAS BIOS Configuration Utility.
—For Intel® Integrated Server RAID, press <Ctrl> + <G> during the server boot to enter the RAID BIOS Console II.
•Intel® RAID Controller Drivers: Intel provides software drivers for the following operating systems.
—Microsoft Windows 2000*, Microsoft Windows XP*, and Microsoft Windows Server 2003* (32-bit and 64-bit editions)
—Red Hat* Enterprise Linux 3.0, 4.0, and 5.0 (with service packs; X86 and X86-64)
—SuSE* Linux Enterprise Server 9.0, SuSE* Linux Enterprise Server 10, and SuSE* Linux Enterprise Server 11 (with service packs; X86 and X86-64)
—VMware* ESX 4i
Note: Only the combinations of controller, driver, and Intel® Server Board or System
listed in the Tested Hardware and Operating System List (THOL) were tested.
Check the supported operating system list for both your RAID controller and your
server board to verify operating system support and compatibility.
•Intel® RAID Web Console 2: A full-featured graphical user interface (GUI) utility is provided to monitor, manage, and update the RAID configuration.
RAID Terminology
RAID is a group of physical disks put together to provide increased I/O (Input/Output)
performance (by allowing multiple, simultaneous disk access), fault tolerance, and reliability
(by reconstructing failed drives from remaining data). The physical drive group is called an
array, and the partitioned sets are called virtual disks. A virtual disk can consist of a part of one
or more physical arrays, and one or more entire arrays.
Using two or more configured RAID arrays in a larger virtual disk is called spanning. It is
represented by a double digit in the RAID mode/type (10, 50, 60).
Running more than one array on a given physical drive or set of drives is called a sliced
configuration.
The only drive that the operating system works with is the virtual disk, which is also called a
virtual drive. The virtual drive is used by the operating system as a single drive (lettered
storage device in Microsoft Windows*).
The RAID controller is the mastermind that configures the physical array and the virtual disks, initializes them for use, checks them for data consistency, allocates the data between the physical drives, and rebuilds a failed array to maintain data redundancy. The features available for each controller are highlighted later in this document and in the hardware guide for the RAID controller.
The common terms used when describing RAID functions and features can be grouped into
two areas: fault tolerance (data protection and redundancy) and performance.
Fault Tolerance
Fault tolerance describes a state in which even with a drive failure, the data on the virtual drive
is still complete and the system is available after the failure and during repair of the array.
Most RAID modes are able to endure a physical disk failure without compromising data
integrity or processing capability of the virtual drive.
RAID mode 0 is not fault tolerant. With RAID 0, if a drive fails, then the data is no longer complete and no longer available. Backplane fault tolerance can be achieved by a spanned array where the arrays are on different backplanes.
True fault tolerance includes the automatic ability to restore the RAID array to redundancy so that another drive failure will not destroy its usability.
Hot Spare
True fault tolerance requires the availability of a spare disk that the controller can add to the
array and use to rebuild the array with the data from the failed drive. This spare disk is called a
hot spare. It must be a part of the array before a disk failure occurs. A hot-spare drive is a
physical drive that is maintained by the RAID controller but not actually used for data storage
in the array unless another drive fails. Upon failure of one of the array’s physical drives, the
hot-spare drive is used to hold the recreated data and restore data redundancy.
Hot-spare drives can be global (available to any array on a controller) or dedicated (only
usable by one array). There can be more than one hot spare per array and the drive of the
closest capacity is used. If both dedicated and global hot-spare drives are available, then the
dedicated drive is used first. If the hot swap rebuild fails, then that hot spare is also marked
failed. Since RAID 0 is not redundant, there is no hot spare value.
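To illustrate the selection order described above, here is a minimal Python sketch. The function name and data structures are illustrative only (capacities are hypothetical values in GB), not part of any controller firmware or API:

```python
# A minimal sketch of hot-spare selection: dedicated spares are tried before
# global spares, and among candidates at least as large as the failed drive,
# the closest capacity wins. All values are hypothetical (GB).
def pick_hot_spare(failed_size, dedicated, global_spares):
    """Return the capacity of the spare to use, or None if none fits."""
    for pool in (dedicated, global_spares):  # dedicated pool is tried first
        candidates = [s for s in pool if s >= failed_size]
        if candidates:
            return min(candidates)           # smallest drive that still fits
    return None

# A 300 GB drive fails: the dedicated 450 GB spare wins over the global
# 300 GB spare, because dedicated spares take priority.
print(pick_hot_spare(300, dedicated=[450], global_spares=[300, 600]))  # 450
```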
If a hot-spare drive is not an option, then it is possible to perform a hot or cold swap of the
failed drive to provide the new drive for rebuild after the drive failure. A swap is the manual
substitution of a replacement drive in a disk subsystem. If a swap is performed while the
system is running, it is a hot swap. A hot swap can only be performed if the backplane and
enclosure support it. If the system does not support hot-swap drives, then the system must be
powered down before the drive swap occurs. This is a cold swap.
In all cases (hot spare, hot swap, or cold swap), the replacement drive must be at least as large
as the drive it replaces. In all three cases, the failed drive is removed from the array. If using a
hot spare, then the failed drive can remain in the system. When a hot spare is available and an
automatic rebuild starts, the failed drive may be automatically removed from the array before
the utilities detect the failure. Only the event logs show what happened.
If the system is shut down during the rebuild, all rebuilds should automatically restart
on reboot.
Note: If running a sliced configuration (RAID 0, RAID 5, and RAID 6 on the same set of physical drives), then the rebuild of the spare will not occur until the RAID 0 array is deleted.
On the Intel® RAID Controllers RS2WC080 and RS2WC040, if a virtual drive is in degraded mode due to a failed physical drive, auto rebuild is not supported for a hot-plugged drive until the user makes a manual selection. As part of the JBOD implementation for the Intel® RAID Controllers RS2WC080 and RS2WC040, all new drives that are hot-plugged automatically become JBOD. The user needs to manually move the JBOD drive to Unconfigured Good, and auto rebuild starts after that. For more details, refer to the Hardware User's Guide (HWUG) for the above controllers.
Data Redundancy
Data redundancy is provided by mirroring or by disk striping with parity stripes.
•Disk mirroring is found only in RAID 1 and 10. With mirroring, the same data simultaneously writes to two disks. If one disk fails, the contents of the other disk can be used to run the system and reconstruct the failed array. This provides 100% data redundancy but uses the most drive capacity, since only 50% of the total capacity is available. Until a failure occurs, both mirrored disks contain the same data at all times. Either drive can act as the operational drive.
•Parity is the ability to recreate data by using a mathematical calculation derived from multiple data sets. Parity is basically a checksum of all the data, known as the “ABCsum”. When drive A fails, the controller uses the ABCsum to calculate, from what remains on drives B and C, the data that must be recreated onto the new drive A. (A minimal sketch of this reconstruction appears after this list.)
Parity can be dedicated (all parity stripes are placed on the same drive) or distributed (parity stripes are spread across multiple drives). Calculating and writing parity slows the write process but provides redundancy in a much smaller space than mirroring. Parity checking is also used to detect errors in the data during consistency checks and patrol reads.
RAID 5 uses distributed parity and RAID 6 uses dual distributed parity (two different sets of parity are calculated and written to different drives each time). RAID modes 1 and 5 can survive a single disk failure, although performance may be degraded, especially during the rebuild. RAID modes 10 and 50 can survive multiple disk failures across the spans, but only one failure per array. RAID mode 6 can survive up to two disk failures. RAID mode 60 can sustain up to two failures per array.
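As a concrete illustration of the parity idea, the sketch below uses bytewise XOR as the checksum. It is illustrative Python only; real controllers compute parity per strip in firmware, and the strip contents here are hypothetical:

```python
# A minimal sketch of XOR parity: the parity strip is the XOR of all data
# strips, so XORing the surviving strips with parity recreates a lost strip.
def xor_blocks(*blocks):
    """XOR equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

a = b"\x01\x02\x03\x04"            # strip on drive A
b = b"\x10\x20\x30\x40"            # strip on drive B
c = b"\x0a\x0b\x0c\x0d"            # strip on drive C
parity = xor_blocks(a, b, c)       # written to the parity strip (the "ABCsum")

# Drive A fails: recreate its strip from drives B, C, and the parity strip.
recovered_a = xor_blocks(b, c, parity)
assert recovered_a == a
```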
Data protection is also provided by running calculations on the drives to make sure data is
consistent and that drives are good. The controller uses consistency checks, background
initialization, and patrol reads. You should include these in regular maintenance schedules.
•The consistency check operation verifies that data in the array matches the redundancy
data (parity or checksum). This is not provided in RAID 0 in which there is no
fault tolerance.
•Background initialization is a consistency check that is forced five minutes after the
creation of a virtual disk. Background initialization also checks for media errors on
physical drives and ensures that striped data segments are the same on all physical drives
in an array.
•Patrol read checks for physical disk errors that could lead to drive failure. These checks
usually include an attempt at corrective action. Patrol read can be enabled or disabled
with automatic or manual activation. This process starts only when the RAID controller
is idle for a defined period of time and no other background tasks are active, although a
patrol read check can continue to run during heavy I/O processes.
Enclosure Management
Enclosure management is the intelligent monitoring of the disk subsystem by software or hardware, usually within a disk enclosure. It increases the user's ability to respond to a drive or power supply failure by monitoring those subsystems.
Performance
Performance improvements come from multiple areas including disk striping and disk
spanning, accessing multiple disks simultaneously, and setting the percentage of processing
capability to use for a task.
Disk Striping
Disk striping writes data across all of the physical disks in the array into fixed size partitions or
stripes. In most cases, the stripe size is user-defined. Stripes do not provide redundancy but
improve performance since striping allows multiple physical drives to be accessed at the same
time. These stripes are interleaved in a repeated sequential manner and the controller knows
where data is stored. The same stripe size should be kept across RAID arrays.
Terms used with strip sizing are listed below.
•Strip size: One disk section
•Stripe size: Total of one set of strips across all data disks, not including parity stripes
•Stripe width: The number of disks involved
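As an illustration of these terms, the following minimal Python sketch maps a logical offset to the disk and strip that hold it. The strip size and disk count are hypothetical, and the function is illustrative, not controller code:

```python
# A minimal sketch of striping arithmetic. With a 64 KB strip size and four
# data disks, the stripe size is 4 x 64 KB = 256 KB and the stripe width is 4.
STRIP_KB = 64                      # strip size: one disk section
DATA_DISKS = 4                     # stripe width: number of disks involved
STRIPE_KB = STRIP_KB * DATA_DISKS  # stripe size: one set of strips

def locate(logical_kb):
    """Return (disk index, stripe row) holding the given logical offset."""
    strip_index = logical_kb // STRIP_KB   # which strip overall
    return strip_index % DATA_DISKS, strip_index // DATA_DISKS

print(locate(0))    # (0, 0): start of the first strip, on disk 0
print(locate(100))  # (1, 0): 100 KB falls in the second strip, on disk 1
print(locate(300))  # (0, 1): wraps around to disk 0 in the next stripe
```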
Disk Spanning
Disk spanning allows more than one array to be combined into a single virtual drive. The spanned arrays must have the same stripe size and must be contiguous. Spanning alone does not provide redundancy, but RAID modes 10, 50, and 60 all have redundancy provided in their pre-spanned arrays through RAID 1, 5, or 6.
Note: Spanning two contiguous RAID 0 drives does not produce a new RAID level or add fault tolerance. It does increase the size of the virtual volume and improve performance by doubling the number of spindles. Spanning for RAID 10, RAID 50, and RAID 60 requires two to eight arrays of RAID 1, 5, or 6 with the same stripe size, and always uses the entire drive.
CPU Usage
Resource allocation provides the user with the option to set the amount of compute cycles to devote to various tasks, including the rate of rebuilds, initialization, consistency checks, and patrol reads. Setting the resource allocation to 100% gives total priority to the rebuild. Setting it to 0% means the rebuild only occurs if the system is not doing anything else. The default rebuild rate is 30%.
RAID Levels
The RAID controller supports RAID levels 0, 1E, 5, 6, 10, 50, and 60. The supported RAID
levels are summarized below. In addition, it supports independent drives (configured as RAID
0). This chapter describes the RAID levels in detail.
Summary of RAID Levels
•RAID 0: Uses striping to provide high data throughput, especially for large files in an environment that does not require fault tolerance. In Intel® IT/IR RAID, RAID 0 is also called Integrated Striping (IS), which supports striped arrays with two to ten disks.
•RAID 1: Uses mirroring so that data written to one disk drive simultaneously writes to another disk drive. This is good for small databases or other applications that require small capacity but complete data redundancy. In Intel® IT/IR RAID, RAID 1 is also called Integrated Mirroring (IM), which supports two-disk mirrored arrays and hot-spare disks.
•RAID 5: Uses disk striping and parity data across all drives (distributed parity) to provide high data throughput, especially for small random access.
•RAID 6: Uses distributed parity, with two independent parity blocks per stripe, and disk striping. A RAID 6 virtual disk can survive the loss of two disks without losing data.
•RAID IME: Integrated Mirroring Enhanced (IME) supports mirrored arrays with three to ten disks, plus hot-spare disks. This is implemented in Intel® IT/IR RAID.
•RAID 10: A combination of RAID 0 and RAID 1, consists of striped data across
mirrored spans. It provides high data throughput and complete data redundancy but uses
a larger number of spans.
•RAID 50: A combination of RAID 0 and RAID 5, uses distributed parity and disk
striping and works best with data that requires high reliability, high request rates, high
data transfers, and medium-to-large capacity.
Note: It is not recommended to have a RAID 0, RAID 5, and RAID 6 virtual disk in the
same physical array. If a drive in the physical array has to be rebuilt, the RAID 0
virtual disk will cause a failure during the rebuild.
•RAID 60: A combination of RAID 0 and RAID 6, uses distributed parity, with two
independent parity blocks per stripe in each RAID set, and disk striping. A RAID 60
virtual disk can survive the loss of two disks in each of the RAID 6 sets without losing
data. It works best with data that requires high reliability, high request rates, high data
transfers, and medium-to-large capacity.
Selecting a RAID Level
To ensure the best performance, select the optimal RAID level when the system drive is
created. The optimal RAID level for a disk array depends on a number of factors:
•The number of physical drives in the disk array
•The capacity of the physical drives in the array
•The need for data redundancy
•The disk performance requirements
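To make the capacity factor concrete, here is a minimal Python sketch that computes usable capacity from the formulas quoted in the figure captions below. The disk counts and sizes are hypothetical, and the RAID 6 line uses the standard two-parity-disk reduction, which this guide does not state as a formula:

```python
# A minimal sketch of usable-capacity arithmetic per RAID level.
# N = number of disks, C = capacity per disk (GB here).
def usable_capacity(level, n, c):
    if level == 0:
        return n * c          # striping only: no redundancy overhead
    if level in (1, 10):
        return n * c // 2     # mirroring: half the raw capacity
    if level == 5:
        return c * (n - 1)    # one disk's worth of distributed parity
    if level == 6:
        return c * (n - 2)    # two parity blocks per stripe (assumption)
    raise ValueError("level not covered by this sketch")

for level in (0, 1, 5, 6):
    print(f"RAID {level} with 4 x 1000 GB:", usable_capacity(level, 4, 1000), "GB")
```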
RAID 0 - Data Striping
RAID 0 provides disk striping across all drives in the RAID array. RAID 0 does not provide
any data redundancy, but does offer the best performance of any RAID level. RAID 0 breaks
up data into smaller segments, and then stripes the data segments across each drive in the
array. The size of each data segment is determined by the stripe size. RAID 0 offers high
bandwidth.
Note: RAID level 0 is not fault tolerant. If a drive in a RAID 0 array fails, the whole virtual disk (all
physical drives associated with the virtual disk) will fail.
By breaking up a large file into smaller segments, the RAID controller can use both SAS drives and SATA drives to read or write the file faster. RAID 0 involves no parity calculations to complicate the write operation. This makes RAID 0 ideal for applications that require high bandwidth but do not require fault tolerance.
Figure 1. RAID 0 - Data Striping (available capacity = N*C, where N = number of disks and C = disk capacity)
Table 1. RAID 0 Overview
Uses: Provides high data throughput, especially for large files. Any environment that does not require fault tolerance.
Strong Points: Provides increased data throughput for large files. No capacity loss penalty for parity.
Weak Points: Does not provide fault tolerance or high bandwidth. If any drive fails, all data is lost.
Drives: 1 to 32
RAID 1 - Disk Mirroring/Disk Duplexing
In RAID 1, the RAID controller duplicates all data from one drive to a second drive. RAID 1
provides complete data redundancy, but at the cost of doubling the required data storage
capacity. Table 2 provides an overview of RAID 1.
Table 2. RAID 1 Overview
Uses: Use RAID 1 for small databases or any other environment that requires fault tolerance but small capacity.
Strong Points: Provides complete data redundancy. RAID 1 is ideal for any application that requires fault tolerance and minimal capacity.
Weak Points: Requires twice as many disk drives. Performance is impaired during drive rebuilds.
Drives: 2 to 32 (must be an even number of drives)
Figure 2. RAID 1 - Disk Mirroring/Disk Duplexing (available capacity = (N*C)/2, where N = number of disks and C = disk capacity)
RAID 5 - Data Striping with Striped Parity
RAID 5 includes disk striping at the block level and parity. Parity is the data’s property of
being odd or even, and parity checking detects errors in the data. In RAID 5, the parity
information is written to all drives. RAID 5 is best suited for networks that perform a lot of
small I/O transactions simultaneously.
RAID 5 addresses the bottleneck issue for random I/O operations. Because each drive contains
both data and parity, numerous writes can take place concurrently.
Table 3 provides an overview of RAID 5.
Table 3. RAID 5 Overview
Uses: Provides high data throughput, especially for large files. Use RAID 5 for transaction processing applications because each drive can read and write independently. If a drive fails, the RAID controller uses the parity drive to recreate all missing information. Use also for office automation and online customer service that requires fault tolerance. Use for any application that has high read request rates but low write request rates.
Strong Points: Provides data redundancy, high read rates, and good performance in most environments. Provides redundancy with lowest loss of capacity.
Weak Points: Not well suited to tasks requiring a lot of writes. Suffers more impact if no cache is used (clustering). If a drive is being rebuilt, disk drive performance is reduced. Environments with few processes do not perform as well because the RAID overhead is not offset by the performance gains in handling simultaneous processes.
Drives: 3 to 32
Figure 3. RAID 5 - Data Striping with Striped Parity (available capacity = (N*C)*(N-1)/N, where N = number of disks and C = disk capacity)
RAID 6 - Distributed Parity and Disk Striping
RAID 6 is similar to RAID 5 (disk striping and parity), but instead of one parity block per
stripe, there are two. With two independent parity blocks, RAID 6 can survive the loss of two
disks in a virtual disk without losing data.
Table 4 provides an overview of RAID 6.
Table 4. RAID 6 Overview
Uses: Provides a high level of data protection through the use of a second parity block in each stripe. Use RAID 6 for data that requires a high level of protection from loss. In the case of a failure of one drive or two drives in a virtual disk, the RAID controller uses the parity blocks to recreate the missing information. If two drives in a RAID 6 virtual disk fail, two drive rebuilds are required, one for each drive; these rebuilds do not occur at the same time, and the controller rebuilds one failed drive at a time. Use for office automation and online customer service that requires fault tolerance, and for any application that has high read request rates but low write request rates.
Strong Points: Provides data redundancy, high read rates, and good performance in most environments. Can survive the loss of two drives, or the loss of a drive while another drive is being rebuilt. Provides the highest level of protection against drive failures of all of the RAID levels. Read performance is similar to that of RAID 5.
Weak Points: Not well suited to tasks requiring a lot of writes. A RAID 6 virtual disk has to generate two sets of parity data for each write operation, which results in a significant decrease in performance during writes. Disk drive performance is reduced during a drive rebuild. Environments with few processes do not perform as well because the RAID overhead is not offset by the performance gains in handling simultaneous processes. RAID 6 costs more because of the extra capacity required by using two parity blocks per stripe.
Drives: 3 to 32
The following figure shows a RAID 6 data layout. The second set of parity blocks is denoted by Q; the P blocks follow the RAID 5 parity scheme.
Figure 4. Example of Distributed Parity across Two Blocks in a Stripe (RAID 6)
[Figure: segments 1-20 striped across the drives, with P parity blocks (P1-P4 through P17-P20) and Q parity blocks (Q1-Q4 through Q17-Q20) distributed across all drives in the array.]
Parity is distributed across all drives in the array. When only three hard drives are available for RAID 6, P must equal Q, which must equal the original data; in other words, the original data has three copies across the three hard drives.
RAID IME
An IME volume can be configured with up to ten mirrored disks (one or two global hot spares can also be added). Figure 5 shows the logical view and physical view of an Integrated Mirroring Enhanced (IME) volume with three mirrored disks. Each mirrored stripe is written to a disk and mirrored to an adjacent disk. This type of configuration is also called RAID 1E.
Figure 5. Integrated Mirroring Enhanced with Three Disks
Table 5. RAID 1E Overview
Uses: Use RAID 1E for small databases or any other environment that requires fault tolerance but small capacity.
Strong Points: Provides complete data redundancy. RAID 1E is ideal for any application that requires fault tolerance and minimal capacity.
Weak Points: Requires twice as many disk drives. Performance is impaired during drive rebuilds.
Drives: 3 to 10
RAID 10 - Combination of RAID 1 and RAID 0
RAID 10 is a combination of RAID 0 and RAID 1. RAID 10 consists of stripes across
mirrored drives. RAID 10 breaks up data into smaller blocks and then mirrors the blocks of
data to each RAID 1 RAID set. Each RAID 1 RAID set then duplicates its data to its other
drive. The size of each block is determined by the stripe size parameter, which is set during the
creation of the RAID set. RAID 10 supports up to eight spans.
Table 6 provides an overview of RAID 10.
Table 6. RAID 10 Overview
Uses: Appropriate when used with data storage that requires 100 percent redundancy of mirrored arrays and that needs the enhanced I/O performance of RAID 0 (striped arrays). RAID 10 works well for medium-sized databases or any environment that requires a higher degree of fault tolerance and moderate to medium capacity.
Strong Points: Provides both high data transfer rates and complete data redundancy.
Weak Points: Requires twice as many drives as all other RAID levels except RAID 1.
Drives: 4 to 240
Figure 6. RAID 10 - Combination of RAID 1 and RAID 0 (available capacity = (N*C)/2, where N = number of disks and C = disk capacity)
RAID 50 - Combination of RAID 5 and RAID 0
RAID 50 provides the features of both RAID 0 and RAID 5. RAID 50 includes both parity and
disk striping across multiple arrays. RAID 50 is best implemented on two RAID 5 disk arrays
with data striped across both disk groups.
RAID 50 breaks up data into smaller blocks and then stripes the blocks of data to each RAID 5
disk set. RAID 5 breaks up data into smaller blocks, calculates parity by performing an
exclusive-or on the blocks and then writes the blocks of data and parity to each drive in the
array. The size of each block is determined by the stripe size parameter, which is set during the
creation of the RAID set.
RAID level 50 supports up to eight spans and tolerates up to eight drive failures, though less than total disk drive capacity is available. Though multiple drive failures can be tolerated, only one drive failure can be tolerated in each RAID 5 level array.
Table 7 provides an overview of RAID 50.
Table 7. RAID 50 Overview
Uses: Appropriate when used with data that requires high reliability, high request rates, high data transfer, and medium to large capacity.
Strong Points: Provides high data throughput, data redundancy, and very good performance.
Weak Points: Requires 2 to 8 times as many parity drives as RAID 5.
Drives: 6 to 32
Figure 7. RAID 50 - Combination of RAID 5 and RAID 0 (available capacity = (N*C)*(N-1)/N, where N = number of disks and C = disk capacity)
RAID 60 - Combination of RAID 0 and RAID 6
RAID 60 provides the features of both RAID 0 and RAID 6, and includes both parity and disk
striping across multiple arrays. RAID 6 supports two independent parity blocks per stripe.
A RAID 60 virtual disk can survive the loss of two disks in each of the RAID 6 sets without
losing data. RAID 60 is best implemented on two RAID 6 disk groups with data striped across
both disk groups.
RAID 60 breaks up data into smaller blocks, and then stripes the blocks of data to each RAID
6 disk set. RAID 6 breaks up data into smaller blocks, calculates parity by performing an
exclusive-or on the blocks and then writes the blocks of data and parity to each drive in the
array. The size of each block is determined by the stripe size parameter, which is set during the
creation of the RAID set.
RAID 60 supports up to eight spans and tolerates up to 16 drive failures, though less than total disk drive capacity is available. Each RAID 6 level array can tolerate two drive failures.
Table 8 provides an overview of RAID 60.