Reproduction of these materials in any manner whatsoever without the written permission of Dell Inc.
is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, Dell Precision, and
OpenManage are trademarks of Dell Inc.; MegaRAID is a registered trademark of LSI Corporation;
Microsoft, MS-DOS, Windows Server, Windows, and Windows Vista are either trademarks or registered
trademarks of Microsoft Corporation in the United States and/or other countries; Citrix XenServer is
a trademark of Citrix Systems Inc. and/or one or more of its subsidiaries, and may be registered in the
U.S. Patent and Trademark Office and in other countries; VMware is a registered trademark of VMware,
Inc. in the United States and/or other jurisdictions; Solaris is a trademark of Sun Microsystems, Inc.;
Intel is a registered trademark of Intel Corporation or its subsidiaries in the United States or other
countries; Novell and NetWare are registered trademarks, and SUSE is a registered trademark of Novell,
Inc. in the United States and other countries; Red Hat and Red Hat Enterprise Linux are registered
trademarks of Red Hat, Inc.
Other trademarks and trade names may be used in this document to refer to either the entities claiming
the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and
trade names other than its own.
Use the following safety guidelines to help ensure your own personal safety
and to help protect your system and working environment from potential
damage.
WARNING: There is a danger of a new battery exploding if it is incorrectly
installed. Replace the battery only with the same or equivalent type recommended
by the manufacturer. See "SAFETY: Battery Disposal" on page 14.
NOTE: For complete information about U.S. Terms and Conditions of Sale, Limited
Warranties, and Returns, Export Regulations, Software License Agreement, Safety,
Environmental and Ergonomic Instructions, Regulatory Notices, and Recycling
Information, see the documentation that was shipped with your system.
SAFETY: General
•Observe and follow service markings. Do not service any product except as
explained in your user documentation. Opening or removing covers that
are marked with the triangular symbol with a lightning bolt may expose
you to electrical shock. Components inside these compartments must be
serviced only by a trained service technician.
•If any of the following conditions occur, unplug the product from the
electrical outlet, and replace the part or contact your trained service
provider:
–The power cable, extension cable, or plug is damaged.
–An object has fallen in the product.
–The product has been exposed to water.
–The product has been dropped or damaged.
–The product does not operate correctly when you follow the operating
instructions.
•Use the product only with approved equipment.
•Operate the product only from the type of external power source indicated
on the electrical ratings label. If you are not sure of the type of power
source required, consult your service provider or local power company.
•Handle batteries carefully. Do not disassemble, crush, puncture, short
external contacts, dispose of in fire or water, or expose batteries to
temperatures higher than 60° Celsius (140° Fahrenheit). Do not attempt
to open or service batteries; replace batteries only with batteries designated
for the product.
SAFETY: When Working Inside Your System
Before you remove the system covers, perform the following steps in the
sequence indicated.
WARNING: Except as expressly otherwise instructed in Dell documentation, only
trained service technicians are authorized to remove the system cover and access
any of the components inside the system.
WARNING: To help avoid possible damage to the system board, wait 5 seconds
after turning off the system before removing a component from the system board or
disconnecting a peripheral device.
1 Turn off the system and any connected devices.
2 Disconnect your system and devices from their power sources. To reduce
the potential of personal injury or shock, disconnect any
telecommunication lines from the system.
3 Ground yourself by touching an unpainted metal surface on the chassis
before touching anything inside the system.
4 While you work, periodically touch an unpainted metal surface on the
chassis to dissipate any static electricity that might harm internal
components.
In addition, note these safety guidelines when appropriate:
•When you disconnect a cable, pull on its connector or on its strain-relief
loop, not on the cable itself. Some cables have a connector with locking
tabs. If you are disconnecting this type of cable, press in on the locking
tabs before disconnecting the cable. As you pull connectors apart, keep
them evenly aligned to avoid bending any connector pins. Also, when you
connect a cable, make sure both connectors are correctly oriented and
aligned.
•Handle components and cards with care. Do not touch the components or
contacts on a card. Hold a card by its edges or by its metal mounting
bracket. Hold a component such as a microprocessor chip by its edges, not
by its pins.
Protecting Against Electrostatic Discharge
Electrostatic discharge (ESD) events can harm electronic components inside
your system. Under certain conditions, ESD may build up on your body or an
object, such as a peripheral, and then discharge into another object, such as
your system. To prevent ESD damage, you must discharge static electricity from
your body before you interact with any of your system’s internal electronic
components, such as a memory module. You can protect against ESD by
touching a metal grounded object (such as an unpainted metal surface on
your system’s I/O panel) before you interact with anything electronic. When
connecting a peripheral (including handheld digital assistants) to your system,
you should always ground both yourself and the peripheral before connecting
it to the system. Additionally, as you work inside the system, periodically
touch an I/O connector to remove any static charge your body may have
accumulated.
You can also take the following steps to prevent damage from electrostatic
discharge:
•When unpacking a static-sensitive component from its shipping carton, do
not remove the component from the antistatic packing material until you
are ready to install the component. Just before unwrapping the antistatic
package, be sure to discharge static electricity from your body.
•When transporting a sensitive component, first place it in an antistatic
container or packaging.
•Handle all electrostatic sensitive components in a static-safe area. If
possible, use antistatic floor pads and work bench pads.
SAFETY: Battery Disposal
Your system may use a nickel-metal hydride (NiMH), lithium coin-cell,
and/or a lithium-ion battery. The NiMH, lithium coin-cell, and
lithium-ion batteries are long-life batteries, and it is possible that
you will never need to replace them. However, should you need to replace
them, see the instructions included in the section "Configuring and
Managing RAID" on page 77.
NOTE: Do not dispose of the battery along with household waste. Contact your
local waste disposal agency for the address of the nearest battery deposit site.
NOTE: Your system may also include circuit cards or other components that
contain batteries. These batteries too must be disposed of in a battery deposit site.
For information about such batteries, see the documentation for the specific card or
component.
Taiwan Battery Recycling Mark
Overview
The Dell™ PowerEdge™ Expandable RAID Controller (PERC) 6 family of
controllers and the Dell Cost-Effective RAID Controller (CERC) 6/i offer
redundant array of independent disks (RAID) control capabilities. The PERC 6
and CERC 6/i Serial Attached SCSI (SAS) RAID controllers only support
Dell-qualified SAS and SATA hard disk drives (HDD) and solid-state
drives (SSD). The controllers are designed to provide reliability, high
performance, and fault-tolerant disk subsystem management.
PERC 6 and CERC 6/i Controller Descriptions
The following list describes each type of controller:
•The PERC 6/E Adapter with two external x4 SAS ports and a transportable
battery backup unit (TBBU)
•The PERC 6/i Adapter with two internal x4 SAS ports, with or without a
battery backup unit, depending on the system
•The PERC 6/i Integrated controller with two internal x4 SAS ports and a
battery backup unit
•The CERC 6/i Integrated controller with one internal x4 SAS port and no
battery backup unit
Each controller supports up to 64 virtual disks.
NOTE: The number of virtual disks supported by the PERC 6/i and the CERC 6/i
cards is limited by the configuration supported by the system.
PCI Architecture
•PERC 6 controllers support a Peripheral Component Interconnect
Express (PCI-E) x8 host interface.
•CERC 6/i Modular controllers support a PCI-E x4 host interface.
NOTE: PCI-E is a high-performance input/output (I/O) bus architecture designed to
increase data transfers without slowing down the Central Processing Unit (CPU).
Operating System Support
The PERC 6 and CERC 6/i controllers support the following operating systems:
•Citrix® XenServer® Dell Edition
•Microsoft® Windows Server® 2003
•Microsoft Windows® XP
•Microsoft Windows Vista®
•Microsoft Windows Server 2008 (including Hyper-V™ virtualization)
•Novell® NetWare® 6.5
•Red Hat® Enterprise Linux® Version 4 and Red Hat Enterprise Linux Version 5
•Solaris™ 10 (64-bit)
•SUSE® Linux Enterprise Server Version 9 (64-bit), Version 10 (64-bit),
and Version 11 (64-bit)
•VMware® ESX 3.5 and 3.5i
NOTE: Windows XP and Windows Vista operating systems are supported with a
PERC 6 controller only when the controller is installed in a Dell Precision™
workstation.
NOTE: For the latest list of supported operating systems and driver installation
instructions, see the system documentation on the Dell Support website
at support.dell.com. For specific operating system service pack requirements,
see the Drivers and Downloads section on the Dell Support site at support.dell.com.
RAID Description
RAID is a group of independent physical disks that provides high performance by
increasing the number of drives used for saving and accessing data. A RAID disk
subsystem improves I/O performance and data availability. The physical disk
group appears to the host system either as a single storage unit or multiple
logical units. Data throughput improves because several disks are accessed
simultaneously. RAID systems also improve data storage availability and fault
tolerance. Data loss caused by a physical disk failure can be recovered by rebuilding
missing data from the remaining physical disks containing data or parity.
CAUTION: In the event of a physical disk failure, a RAID 0 virtual disk fails,
resulting in data loss.
Summary of RAID Levels
•RAID 0 uses disk striping to provide high data throughput, especially for
large files in an environment that requires no data redundancy.
•RAID 1 uses disk mirroring so that data written to one physical disk is
simultaneously written to another physical disk. RAID 1 is good for small
databases or other applications that require small capacity, but also require
complete data redundancy.
•RAID 5 uses disk striping and parity data across all physical disks
(distributed parity) to provide high data throughput and data redundancy,
especially for small random-access workloads.
•RAID 6 is an extension of RAID 5 and uses an additional parity block.
RAID 6 uses block-level striping with two parity blocks distributed across
all member disks. RAID 6 provides protection against double disk failures,
and failures while a single disk is rebuilding. If you are using only one array,
deploying RAID 6 is more effective than deploying a hot spare disk.
•RAID 10, a combination of RAID 0 and RAID 1, uses disk striping across
mirrored disks. It provides high data throughput and complete data
redundancy. RAID 10 can support up to eight spans, and up to 32 physical
disks per span.
•RAID 50, a combination of RAID 0 and RAID 5, uses distributed data
parity and disk striping and works best with data that requires high system
availability, high request rates, high data transfers, and medium to large
capacity.
•RAID 60, a combination of RAID 6 and RAID 0, stripes data (RAID 0)
across RAID 6 elements. RAID 60 requires at least eight disks.
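The capacity trade-offs implied by the levels above can be sketched in a few lines of Python. This is an illustrative sketch only, not Dell tooling or controller firmware; the function name, parameters, and the two-span default are assumptions made for the example.

```python
# Rough usable capacity per RAID level, assuming n identical disks of
# size_gb each and valid disk counts for the chosen level.

def usable_gb(level: int, n: int, size_gb: float, spans: int = 2) -> float:
    """Approximate usable capacity of an n-disk array in GB."""
    if level == 0:
        return n * size_gb                  # striping only, no redundancy
    if level == 1:
        return size_gb                      # two disks holding identical data
    if level == 5:
        return (n - 1) * size_gb            # one disk's worth of parity
    if level == 6:
        return (n - 2) * size_gb            # two parity blocks per stripe
    if level == 10:
        return (n // 2) * size_gb           # data striped across mirrored pairs
    if level == 50:
        return (n - spans) * size_gb        # one parity disk per RAID 5 span
    if level == 60:
        return (n - 2 * spans) * size_gb    # two parity disks per RAID 6 span
    raise ValueError(f"unsupported RAID level {level}")
```

For example, eight 100 GB disks yield 600 GB usable at RAID 6 but only 400 GB at RAID 60 with two spans, which is the price of surviving two failures per span.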
RAID Terminology
Disk Striping
Disk striping allows you to write data across multiple physical disks instead of
just one physical disk. Disk striping involves partitioning each physical disk's
storage space into stripes of the following sizes: 8 KB, 16 KB, 32 KB, 64 KB,
128 KB, 256 KB, 512 KB, and 1024 KB. These stripes are interleaved in a
repeated sequential manner. The part of the stripe on a single physical disk is
called a stripe element.
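The stripe-element bookkeeping described above can be sketched as a small address-mapping function. This is a hypothetical illustration; a real controller does this in firmware, and the names and constants here are invented for the example.

```python
# Illustrative RAID 0 address mapping: a logical offset (in KB) maps to a
# (disk index, offset on that disk) pair. STRIPE_ELEMENT_KB is one of the
# sizes listed above (8 KB through 1024 KB).

STRIPE_ELEMENT_KB = 64
NUM_DISKS = 4

def raid0_map(logical_kb: int) -> tuple[int, int]:
    """Map a logical address (KB) to a physical disk and offset on it."""
    element = logical_kb // STRIPE_ELEMENT_KB  # overall stripe element index
    disk = element % NUM_DISKS                 # elements rotate across disks
    row = element // NUM_DISKS                 # stripe (row) number on each disk
    return disk, row * STRIPE_ELEMENT_KB + logical_kb % STRIPE_ELEMENT_KB
```

With four disks and 64 KB elements, logical addresses 0, 64, 128, and 192 KB land on disks 0 through 3, which is why sequential access keeps all spindles busy at once.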
For example, in a four-disk system using only disk striping (used in RAID 0),
segment 1 is written to disk 1, segment 2 is written to disk 2, and so on. Disk
striping enhances performance because multiple physical disks are accessed
simultaneously, but disk striping does not provide data redundancy.
Figure 2-1 shows an example of disk striping.
Figure 2-1. Example of Disk Striping (RAID 0)
[The figure shows stripe elements 1–12 interleaved across four disks: disk 1
holds elements 1, 5, and 9; disk 2 holds 2, 6, and 10; disk 3 holds 3, 7,
and 11; disk 4 holds 4, 8, and 12.]
Disk Mirroring
With mirroring (used in RAID 1), data written to one disk is simultaneously
written to another disk. If one disk fails, the contents of the other disk can be
used to run the system and rebuild the failed physical disk. The primary
advantage of disk mirroring is that it provides complete data redundancy.
Because the contents of the disk are completely written to a second disk, it
does not matter if one of the disks fails. Both disks contain the same data at
all times. Either of the physical disks can act as the operational physical disk.
Disk mirroring provides complete redundancy, but is expensive because each
physical disk in the system must be duplicated.
Figure 2-2. Example of Disk Mirroring (RAID 1)
[The figure shows stripe elements 1–4 on one physical disk duplicated as
stripe elements 1–4 on a second physical disk.]
Spanned RAID Levels
Spanning is a term used to describe the way in which RAID levels 10, 50,
and 60 are constructed from multiple sets of basic, or simple RAID levels.
For example, a RAID 10 has multiple sets of RAID 1 arrays where each RAID 1
set is considered a span. Data is then striped (RAID 0) across the RAID 1
spans to create a RAID 10 virtual disk. If you are using RAID 50 or RAID 60,
you can combine multiple sets of RAID 5 and RAID 6 together with striping.
Parity Data
Parity data is redundant data that is generated to provide fault tolerance
within certain RAID levels. In the event of a drive failure the parity data can
be used by the controller to regenerate user data. Parity data is present for
RAID 5, 6, 50, and 60.
The parity data is distributed across all the physical disks in the system. If a
single physical disk fails, it can be rebuilt from the parity and the data on the
remaining physical disks. RAID level 5 combines distributed parity with disk
striping, as shown in Figure 2-3. Parity provides redundancy for one physical
disk failure without duplicating the contents of entire physical disks.
RAID 6 combines dual distributed parity with disk striping. This level of
parity allows for two disk failures without duplicating the contents of entire
physical disks.
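The single-failure recovery described for RAID 5 rests on XOR parity, which a short sketch can demonstrate. The byte strings below are toy data for illustration, not a controller implementation (and RAID 6's second parity block uses a different code, so it is not shown here).

```python
# Toy demonstration of XOR parity: any one lost stripe element can be
# recomputed by XOR-ing the surviving elements with the parity element.

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]        # elements on three disks
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))   # parity element on a fourth

# Simulate losing disk 1 and rebuilding its element from the survivors.
rebuilt = bytes(a ^ b ^ c for a, b, c in zip(data[0], data[2], parity))
assert rebuilt == data[1]
```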
Figure 2-3. Example of Distributed Parity (RAID 5)
[The figure shows stripe elements 1–30 and parity elements for blocks (1–5)
through (26–30) rotated across six physical disks, so that no single disk
holds all the parity.]
NOTE: Parity is distributed across multiple physical disks in the disk group.
Figure 2-4. Example of Dual Distributed Parity (RAID 6)
[The figure shows stripe elements 1–16 with two parity elements per stripe,
for blocks (1–4) through (13–16), rotated across six physical disks.]
NOTE: Parity is distributed across all drives in the array.
About PERC 6 and CERC 6/i
Controllers
This section describes the features of the Dell™ PowerEdge™ Expandable
RAID Controller (PERC) 6 and the Dell Cost-Effective RAID Controller
(CERC) 6/i, such as configuration options, disk array performance,
RAID management utilities, and operating system software drivers.
PERC 6 and CERC 6 Controller Features
The PERC 6 and CERC 6 family of controllers support only Dell-qualified
Serial Attached SCSI (SAS) hard disk drives (HDDs), SATA HDDs, and
solid-state drives (SSDs). Mixing SAS and SATA drives within a virtual disk is
not supported. Also, mixing HDDs and SSDs within a virtual disk is not
supported.
Table 3-1 compares the hardware configurations for the PERC 6 and CERC 6/i
controllers. Column values are listed in the order: PERC 6/E Adapter /
PERC 6/i Adapter / PERC 6/i Integrated / CERC 6/i Integrated.
Table 3-1. PERC 6 and CERC 6/i Controller Comparisons
RAID Levels: 0, 1, 5, 6, 10, 50, 60 / 0, 1, 5, 6, 10, 50, 60 / 0, 1, 5, 6, 10, 50, 60 / 0, 1, 5, 6, and 10 (a)
Enclosures per Port: Up to 3 enclosures / N/A / N/A / N/A
Ports: 2 x4 external wide port / 2 x4 internal wide port / 2 x4 internal wide port / 1 x4 internal wide port
Processor: LSI adapter SAS RAID-on-Chip, 8-port with 1078 / LSI adapter SAS RAID-on-Chip, 8-port with 1078 / LSI adapter SAS RAID-on-Chip, 8-port with 1078 / LSI adapter SAS RAID-on-Chip, 4-port with 1078
Battery Backup Unit: Yes, transportable / Yes (b) / Yes / No
Multiple Virtual Disks per Controller: Up to 64 virtual disks per controller / Up to 64 / Up to 64 / Up to 64
Virtual Disks per Disk Group: Up to 16 virtual disks per disk group for nonspanned RAID levels 0, 1, 5, and 6; one virtual disk per disk group for spanned RAID level 10
Support for x8 PCIe Host Interface: Yes / Yes / Yes / x4 PCIe
Online Capacity Expansion: Yes / Yes / Yes / Yes
Dedicated and Global Hot Spares: Yes / Yes / Yes / Yes
Hot Swap Devices Supported: Yes / Yes / Yes / Yes
Enclosure Hot-Add (c): Yes / N/A / N/A / N/A
Mixed Capacity Physical Disks Supported: Yes / Yes / Yes / Yes
Hardware Exclusive-OR (XOR) Assistance: Yes / Yes / Yes / Yes
Revertible Hot Spares Supported: Yes / Yes / Yes / Yes
Redundant Path Support: Yes / N/A / N/A / N/A
a. These RAID configurations are only supported on select Dell modular systems.
b. The PERC 6/i adapter supports a battery backup unit (BBU) on selected systems only.
For additional information, see the documentation that shipped with the system.
c. Using the enclosure Hot-Add feature, you can hot plug enclosures to the PERC 6/E adapter
without rebooting the system.
NOTE: The maximum array size is limited by the maximum number of drives per
disk group (32), the maximum number of spans per disk group (8), and the size of the
physical drives.
NOTE: The number of physical disks on a controller is limited by the number of slots
in the backplane to which the card is attached.
Using the SMART Feature
The Self-Monitoring Analysis and Reporting Technology (SMART) feature
monitors the internal performance of all motors, heads, and physical disk
electronics to detect predictable physical disk failures. The SMART feature
helps monitor physical disk performance and reliability.
SMART-compliant physical disks have attributes for which data can be
monitored to identify changes in values and determine whether the values are
within threshold limits. Many mechanical and electrical failures display some
degradation in performance before failure.
A SMART failure is also referred to as a predicted failure. There are numerous
factors that relate to predicted physical disk failures, such as a bearing failure,
a broken read/write head, and changes in spin-up rate. In addition, there are
factors related to read/write surface failure, such as seek error rate
and excessive bad sectors.
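The value-versus-threshold comparison behind a predicted failure can be sketched as follows. The attribute names and numbers are invented for illustration; they are not read from a real drive or from the controller.

```python
# Hypothetical SMART check: a predicted failure is flagged when an attribute's
# normalized value has degraded to or below its failure threshold.

def predicted_failures(attributes: dict[str, tuple[int, int]]) -> list[str]:
    """Return names of attributes whose (value, threshold) pair indicates
    a predicted failure, i.e. value <= threshold."""
    return [name for name, (value, threshold) in attributes.items()
            if value <= threshold]

sample = {
    "spin_up_time":        (95, 21),   # healthy: value well above threshold
    "seek_error_rate":     (30, 30),   # degraded to threshold: predicted failure
    "reallocated_sectors": (100, 36),  # healthy
}
```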
For information on physical disk status, see "Disk Roaming" on page 27.
NOTE: For detailed information on Small Computer System Interface (SCSI)
interface specifications, see www.t10.org. For detailed information on
Serial ATA (SATA) interface specifications, see www.t13.org.
Initializing Virtual Disks
You can initialize the virtual disks as described in the following sections.
Background Initialization
Background Initialization (BGI) is an automated process that writes the
parity or mirror data on newly created virtual disks. BGI assumes that the data
is correct on all new drives. BGI does not run on RAID 0 virtual disks.
NOTE: You cannot permanently disable BGI. If you cancel BGI, it automatically
restarts within five minutes. For information on stopping BGI, see "Stopping
Background Initialization" on page 108.
You can control the BGI rate in the Dell™ OpenManage™ storage
management application. Any change in the BGI rate does not take effect
until the next BGI run.
NOTE: Unlike full or fast initialization of virtual disks, Background Initialization does
not clear data from the physical disks.
Consistency Check (CC) and BGI perform similar functions in that they
both correct parity errors. However, CC reports data inconsistencies through
an event notification, but BGI does not (BGI assumes the data is correct, as it
is run only on a newly created disk). You can start CC manually, but not BGI.
Full Initialization of Virtual Disks
Performing a Full Initialization on a virtual disk overwrites all blocks and
destroys any data that previously existed on the virtual disk. Full Initialization
of a virtual disk eliminates the need for that virtual disk to undergo a
Background Initialization and can be performed directly after the creation of
a virtual disk.
During Full Initialization, the host is not able to access the virtual disk.
You can start a Full Initialization on a virtual disk by using the Slow Initialize
option in the Dell OpenManage storage management application. To use the
BIOS Configuration Utility to perform a Full Initialization, see "Initializing
Virtual Disks" on page 88.
NOTE: If the system reboots during a Full Initialization, the operation aborts and a
BGI begins on the virtual disk.
Fast Initialization of Virtual Disks
A fast initialization on a virtual disk overwrites the first and last 8 MB of the
virtual disk, clearing any boot records or partition information. This operation
takes only 2–3 seconds to complete and is recommended when recreating
virtual disks. To perform a fast initialization using the BIOS Configuration
Utility, see "Initializing Virtual Disks" on page 88.
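The first-and-last-8 MB behavior can be mimicked on an ordinary file standing in for a virtual disk. This is an illustrative sketch only, assuming a file-backed stand-in; the controller performs fast initialization internally and exposes no such function.

```python
import os

CLEAR_BYTES = 8 * 1024 * 1024  # fast initialization clears 8 MB at each end

def fast_init(path: str) -> None:
    """Zero the first and last 8 MB of the file, wiping boot records and
    partition tables while leaving the middle of the 'disk' untouched."""
    size = os.path.getsize(path)
    span = min(CLEAR_BYTES, size)
    with open(path, "r+b") as disk:
        disk.write(b"\x00" * span)              # clear the start
        disk.seek(max(size - CLEAR_BYTES, 0))
        disk.write(b"\x00" * span)              # clear the end
```

On a stand-in smaller than 16 MB the two spans overlap and the whole file is cleared, which matches the intent: everything that could hold partition metadata is gone.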
Consistency Checks
CC is a background operation that verifies and corrects the mirror or parity
data for fault tolerant virtual disks. It is recommended that you periodically
run a consistency check on virtual disks.
You can manually start a consistency check using the BIOS Configuration
Utility or an OpenManage storage management application. To start a CC
using the BIOS Configuration Utility, see "Checking Data Consistency" on
page 88. CCs can be scheduled to run on virtual disks using an OpenManage
storage management application.
By default, CC automatically corrects mirror or parity inconsistencies.
However, you can enable the Abort Consistency Check on Error feature on
the controller using the Dell OpenManage storage management application.
With this setting enabled, the consistency check reports any inconsistency
it finds and aborts instead of automatically correcting the error.
Disk Roaming
The PERC 6 and CERC 6/i controllers support moving physical disks from
one cable connection or backplane slot to another on the same controller.
The controller automatically recognizes the relocated physical disks and
logically places them in the proper virtual disks that are part of the disk group.
You can perform disk roaming only when the system is turned off.
CAUTION: Do not attempt disk roaming during RAID level migration (RLM) or
online capacity expansion (OCE). This causes loss of the virtual disk.
Perform the following steps to use disk roaming:
1 Turn off the power to the system, physical disks, enclosures, and system
components. Disconnect power cords from the system.
2 Move the physical disks to desired positions on the backplane or
the enclosure.
3 Perform a safety check. Make sure the physical disks are inserted properly.
4 Turn on the system. The controller detects the RAID configuration from
the configuration data on the physical disks.
Disk Migration
The PERC 6 and CERC 6/i controllers support migration of virtual disks from
one controller to another without taking the target controller offline.
However, the source controller must be offline prior to performing the
disk migration. The controller can import RAID virtual disks in optimal,
degraded, or partially degraded states. You cannot import a virtual disk that is in
an offline state.
NOTE: Disks cannot be migrated back to previous Dell PERC RAID controllers.
When a controller detects a physical disk with an existing configuration, it
flags the physical disk as foreign, and it generates an alert indicating that a
foreign disk was detected.
CAUTION: Do not attempt disk roaming during RAID level migration (RLM) or
online capacity expansion (OCE). This causes loss of the virtual disk.
Perform the following steps to use disk migration:
1 Turn off the system that contains the source controller.
2 Move the appropriate physical disks from the source controller to the
target controller. The system with the target controller can be running
while inserting the physical disks. The controller flags the inserted
disks as foreign disks.
3 Use the OpenManage storage management application to import the
detected foreign configuration.
NOTE: Ensure that all physical disks that are part of the virtual disk are migrated.
NOTE: You can also use the controller BIOS configuration utility to migrate disks.
Compatibility With Virtual Disks Created on PERC 5 Controllers
Virtual disks that were created on the PERC 5 family of controllers can be
migrated to the PERC 6 and CERC 6/i controllers without risking data or
configuration loss. Migrating virtual disks from PERC 6 and CERC 6/i controllers
to PERC 5 is not supported.
Virtual disks created on the CERC 6/i controller or the PERC 5 family of
controllers can be migrated to PERC 6.
NOTE: For more information about compatibility, contact your Dell technical
support representative.
Compatibility With Virtual Disks Created on SAS 6/iR Controllers
Virtual disks created on the SAS 6/iR family of controllers can be migrated to
PERC 6 and CERC 6/i. However, only virtual disks with boot volumes of the
following Linux operating systems successfully boot after migration:
•Red Hat® Enterprise Linux® 4
•Red Hat Enterprise Linux 5
•SUSE® Linux Enterprise Server 10 (64-bit)
NOTE: The migration of virtual disks with Microsoft® Windows® operating systems
is not supported.
CAUTION: Before migrating virtual disks, back up your data and ensure the
firmware of both controllers is the latest revision. Also ensure you use the SAS 6
firmware version 00.25.47.00.06.22.03.00 or newer.
Migrating Virtual Disks from SAS 6/iR to PERC 6 and CERC 6/i
NOTE: The supported operating systems listed in "Compatibility With Virtual Disks
Created on SAS 6/iR Controllers" on page 28 contain the driver for the PERC 6
and CERC 6/i controller family. No additional drivers are needed during the
migration process.
1 If virtual disks with one of the supported Linux operating systems listed in
"Compatibility With Virtual Disks Created on SAS 6/iR Controllers" on
page 28 are being migrated, open a command prompt and type the
following commands:
2 Move the appropriate physical disks from the SAS 6/iR controller to the
PERC 6 and CERC 6/i. If you are replacing your SAS 6/iR controller with a
PERC 6 controller, see the Hardware Owner's Manual shipped with your
system or on the Dell Support website at support.dell.com.
CAUTION: After you have imported the foreign configuration on the PERC 6 or
CERC 6/i storage controllers, migrating the storage disks back to the SAS 6/iR
controller may result in the loss of data.
4 Boot the system and import the foreign configuration that is detected.
You can do this in two ways:
•Press <F> to automatically import the foreign configuration
•Enter the BIOS Configuration Utility and navigate to the Foreign
Configuration View
NOTE: For more information on accessing the BIOS Configuration Utility,
see "Entering the BIOS Configuration Utility" on page 79.
NOTE: For more information on Foreign Configuration View, see "Foreign
Configuration View" on page 104.
5 If the migrated virtual disk is the boot volume, ensure that the virtual disk
is selected as the bootable volume for the target PERC 6 and CERC 6/i
controller. See "Controller Management Actions" on page 104.
6 Exit the BIOS Configuration Utility and reboot the system.
7 Ensure all the latest drivers for the PERC 6 or CERC 6/i controller
(available on the Dell Support website at support.dell.com) are installed.
For more information, see "Driver Installation" on page 63.
NOTE: For more information about compatibility, contact your Dell technical
support representative.
Battery Management
NOTE: Battery management is only applicable to the PERC 6 family of controllers.
The Transportable Battery Backup Unit (TBBU) is a cache memory module
with an integrated battery pack that enables you to transport the cache
module with the battery in a new controller. The TBBU protects the integrity
of the cached data on the PERC 6/E adapter by providing backup power
during a power outage.
The Battery Backup Unit (BBU) is a battery pack that protects the integrity of
the cached data on the PERC 6/i adapter and PERC 6/i Integrated controllers
by providing backup power during a power outage.
The battery, when new, provides up to 24 hours of backup power for the
cache memory.
Battery Warranty Information
The BBU offers an inexpensive way to protect the data in cache memory.
The lithium-ion battery provides a way to store more power in a smaller
form factor than previous batteries.
Your PERC 6 battery, when new, provides up to 24 hours of controller cache
memory backup power. Under the 1-year limited warranty, the battery is
warranted to provide at least 24 hours of backup coverage. To prolong
battery life, do not store or operate the BBU at temperatures
exceeding 60°C.