Version 10.0—Supports the 9750 and 9000 Series
(9750, 9690SA, and 9650SE)
PN: 45413-00, Rev. A
November 2009
Document Description
Document 45413-00, Rev. A. November 2009.
This document will remain the official reference source for all revisions and
releases of this product until rescinded by an update.
Disclaimer
It is the policy of LSI Corporation to improve products as new technology,
components, software, and firmware become available. LSI reserves the right
to make changes to any products herein at any time without notice. All
features, functions, and operations described herein may not be marketed by
LSI in all parts of the world. In some instances, photographs and figures are of
equipment prototypes. Therefore, before using this document, consult your
LSI representative for information that is applicable and current. LSI DOES
NOT ASSUME ANY RESPONSIBILITY OR LIABILITY FOR THE USE
OF ANY PRODUCTS DESCRIBED HEREIN EXCEPT AS EXPRESSLY
AGREED TO IN WRITING BY LSI.
LSI products are not intended for use in life-support appliances, devices, or
systems. Use of any LSI product in such applications without written consent
of the appropriate LSI officer is prohibited.
License Restriction
The purchase or use of an LSI Corporation product does not convey a license
under any patent, copyright, trademark, or other intellectual property right of
LSI or third parties.
LSI, the LSI logo design, 3ware®, 3DM®, StorSwitch®, and TwinStor® are all
registered trademarks of LSI Corporation. StorSave and StreamFusion are
trademarks of LSI. Linux® is a registered trademark of Linus Torvalds in the
United States, other countries, or both. SUSE® is a registered trademark of
Novell, Inc. Windows® is a registered trademark of Microsoft Corporation in
the United States and other countries. Firefox® is a registered trademark of
the Mozilla Foundation. Safari® is a trademark of Apple Inc., registered in the
U.S. and other countries. PCI Express® is a registered trademark of PCI-SIG.
All other brand and product names may be trademarks of their respective
companies.
3ware SATA+SAS RAID Controller Card CLI Guide, Version 10.0 provides
instructions for configuring and maintaining your 3ware® controller using
3ware's command line interface (CLI).
This guide assumes that you have already installed your 3ware RAID
controller in your system. If you have not yet done so, see the installation
guide that came with your 3ware RAID controller for instructions. This guide
is available in PDF format on your 3ware CD, or can be downloaded from the
LSI® website at http://www.lsi.com/channel/ChannelDownloads.
Table 1: Sections in this CLI Guide

1. Introduction to 3ware Command Line Interface: Installation, features, concepts
2. CLI Syntax Reference: Describes individual commands using the primary syntax

There are often multiple ways to accomplish the same configuration and
maintenance tasks for your 3ware controller. While this manual includes
instructions for performing tasks using the command line interface, you can
also use the following applications:
• 3ware BIOS Manager
• 3DM® 2 (3ware Disk Manager)
For details, see the user guide or the 3ware HTML Bookshelf.
1. Introduction to the 3ware Command Line Interface
The 3ware SATA+SAS Controller Card Command Line Interface (CLI)
manages multiple 9750, 9690SA, and 9650SE 3ware RAID controllers.
Note: Older 3ware RAID controllers also share the vast majority of CLI commands.
Wherever possible, commands are labeled to indicate when they are supported for
only a subset of controllers.
For example, commands that apply only to 3ware 9000 series controllers are
labeled as such and are not supported for 3ware 7000/8000 controllers.
Within the 9000 series, some commands apply only to models 9750, 9690SA, and
9650SE, some apply to 9690SA, 9650SE, 9590SE, and 9550SX(U), but not to
9500S, and are so labeled. A few commands apply only to the 9500S, and are
labeled as such.
If a command is labeled as applying to the SX controller, it is available for both
9550SX and 9550SXU.
You may need to install particular firmware and drivers for some features to take
effect. See the Release Notes for details.
Important!
For all of the functions of the 3ware CLI to work properly, you must have the proper
CLI, firmware, and driver versions installed. For the latest versions and upgrade
instructions, check http://www.lsi.com/channel/ChannelDownloads.
This chapter includes the following sections:
•“Features of the CLI” on page 2
•“Installing the 3ware CLI” on page 2
•“Working with 3ware CLI” on page 5
•“Understanding RAID Levels and Concepts” on page 8
Features of the CLI
3ware CLI is a command line interface for managing 3ware RAID
Controllers. It provides controller, logical unit, drive, enclosure, and BBU
(Battery Backup Unit) management. It can be used in both interactive and
batch mode, providing higher level API (application programming interface)
functionalities.
You can use the CLI to view unit status and version information and perform
maintenance functions such as adding or removing drives. 3ware CLI also
includes advanced features for creating and deleting RAID units online.
For a summary of what you can do using the CLI, see “Common Tasks
Mapped to CLI Commands” on page 18.
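For example, the same query can be issued either as a single batch-style
command from the operating system shell or interactively at the CLI prompt;
controller 0 below is only an illustration, so substitute your own controller ID:

tw_cli /c0 show

or, in interactive mode:

tw_cli
//localhost> /c0 show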
Supported Operating Systems
The 10.0 version of the 3ware CLI is supported under the following operating
systems:
• Windows®. Windows 7, Vista, Windows Server 2008, and Windows
Server 2003 SP2 (32-bit and 64-bit versions of each).
• Linux®. Red Hat Enterprise, openSUSE Linux, SUSE® Linux Enterprise
Server, and other versions of Linux, using the open source Linux 2.6
kernel driver sources.
Additional operating systems will be supported in later releases. For specific
operating system versions that are supported in a given release, see the
Release Notes available at http://www.lsi.com/channel/ChannelDownloads,
or the file versions.txt, available on the 3ware CD.
Installing the 3ware CLI
This section includes information on installing the 3ware CLI under
various operating systems.
Installing the 3ware CLI on Windows
3ware CLI can be installed or run directly from the 3ware software CD, or the
latest version can be downloaded from the LSI web site,
http://www.lsi.com/channel/ChannelDownloads. Online manual pages are
also available in nroff and html formats. These are located in
/packages/cli/tw_cli.8.html or tw_cli.8.nroff.
To install 3ware CLI on Windows

Do one of the following:

• Run the installer from the 3ware CD. Start the 3ware CD and at the
3ware menu, click Install Software. Step through the pages of the
installation wizard and make sure that Command Line Interface (tw_cli)
is selected.

• Copy the file from the 3ware CD. Copy the file tw_cli.exe to the
directory from which you want to run the program. CLI is located on the
3ware CD in the directory \packages\cli\windows.

Note: CLI comes in both 32-bit and 64-bit versions. If you are
copying the file directly, be sure to copy the correct version for the
version of the operating system you are using.
Permissions Required to Run CLI
To run CLI, you must be logged onto Windows with one of the following sets
of permissions:
•Administrator
•User with administrator rights
•Domain administrator
•Domain user with Domain Admin or Administrator membership
Without the correct privileges, CLI will display a message and then exit when
the application is executed.
If you are uncertain whether you have the correct permissions, contact your
network administrator.
To start CLI, do one of the following:

• Start the 3ware CD and at the 3ware menu, click Run CLI.

• Or, open a console window, change to the directory where tw_cli is
located, and at the command prompt, enter tw_cli

• Or, double-click the CLI icon in a folder.

The CLI prompt is displayed in a DOS console window.
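As an illustration only (C:\3ware is a hypothetical directory, not one created
by the installer; use whatever directory holds your copy of tw_cli.exe), a
session started from a console window might look like this:

C:\> cd C:\3ware
C:\3ware> tw_cli
//localhost> show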
Installing the 3ware CLI on Linux
3ware CLI can be installed or run directly from the 3ware software CD, or the
latest version can be downloaded from the LSI web site,
http://www.lsi.com/channel/ChannelDownloads.
To install 3ware CLI on Linux

Do one of the following:

• Copy the file. Copy the file tw_cli to the directory from which you want
to run the program. CLI is located on the 3ware CD in the following
directory: /packages/cli/linux

• Use the setup command from a command line. Navigate to one of the
following directories on the 3ware CD.

Online manual pages are also available in nroff and html formats. These
are located in /packages/cli/tw_cli.8.html or tw_cli.8.nroff.

You will need to be root or have root privileges to install the CLI to /usr/sbin
and to run the CLI.

Notes:
The installation location needs to be in the environment path for root to
execute the CLI without using complete paths (that is, if the CLI is installed
to /usr/sbin/, you can type tw_cli on the command line; otherwise you will
have to type the complete path, for example /home/user/tw_cli).
The 3ware CLI comes in both 32-bit and 64-bit versions. If you are copying the
file directly, be sure to copy the correct version for the version of the operating
system you are using.
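The following is a minimal sketch of the copy method, assuming the CD is
mounted at /media/cdrom and that the binary for your architecture sits directly
under /packages/cli/linux (the exact layout of your CD may differ):

mount /dev/cdrom /media/cdrom
cp /media/cdrom/packages/cli/linux/tw_cli /usr/sbin/
chmod 755 /usr/sbin/tw_cli
tw_cli show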
Working with 3ware CLI

You can use 3ware CLI with command line arguments, processing a single
command at a time. To do so, simply enter the command and the arguments.
Single commands can be useful when you want to perform a task such as
redirecting the output of the command to a file. It also allows you to use the
command line history to eliminate some typing.
Syntax
tw_cli <command_line_arguments>
Example
tw_cli /c0 show diag > /tmp/3w_diag.out
Using an input file to execute a script
You can run a series of 3ware CLI commands by executing a script file. The
file is a text file containing a list of CLI commands that you have entered in
advance. Each command must be on a separate line.
Syntax
tw_cli -f <filename>
Where <filename> is the name of the text file you want to execute.
Example
tw_cli -f clicommand.txt
This example executes the file clicommand.txt, and runs the CLI commands
included in that file.
Scripting examples
Following is a scripting example for a 4-port controller using a text file called
config_unit.txt, containing three commands. This example sets up a 4-port
controller with two units, each with 2 drives mirrored. It then prints the
configurations for verification. The commands included in the script file are:
/c0 add type=raid1 disk=0-1
/c0 add type=raid1 disk=2-3
/c0 show
Following is a scripting example for a 12-port controller using a text file
called config_unit.txt, containing three commands. This example sets up a
12-port controller with two units: one with the first 2 drives mirrored, and
another with the remaining drives in a RAID 5 array. It then prints the
configurations for verification. The commands included in the script file are:
/c0 add type=raid1 disk=0-1
/c0 add type=raid5 disk=2-11
/c0 show
To run either of the scripts, enter:
tw_cli -f config_unit.txt
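As an illustrative sketch, assuming a Linux shell and that tw_cli is in the path,
the 12-port script file could be created and run in one session as follows:

cat > config_unit.txt <<'EOF'
/c0 add type=raid1 disk=0-1
/c0 add type=raid5 disk=2-11
/c0 show
EOF
tw_cli -f config_unit.txt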
Outputting the CLI to a Text File
You can have the output of the 3ware CLI, including errors, sent to a text file
by adding 2>&1 to the end of the line. This could be useful, for example, if
you want to email the output to LSI Technical Support.
Examples
tw_cli /c2/p0 show >> controller2port0info.txt 2>&1
or
tw_cli /c0 show diag >> Logfile.txt 2>&1
Conventions
The following conventions are used through this guide:
• In text, monospace font is used for code and for things you type.
• In descriptions and explanations of commands, a bold font indicates the
name of commands and parameters, for example, /c0/p0 show all.
• In commands, an italic font indicates items that are variable, but that you
must specify, such as a controller ID or a unit ID, for example, /c0/p0
show attribute, and /cx/px show all.
• In commands, brackets around an item indicate that it is optional.
•In commands, ellipses (...) indicate that more than one parameter at a time
can be included, for example, /c0/p0 show attribute [attribute ...], or that
there is a range between two values from which you can pick a value, for
example, /cx set carvesize=[1024...2048].
•In commands, a vertical bar (|) indicates an 'or' situation where the user
has a choice between more than one attribute, but only one can be
specified.
Example: In the command to rescan all ports and reconstitute all units, the
syntax appears as /cx rescan [noscan]. The brackets [ ] indicate that you may
omit the noscan parameter, so that the operation will be reported to the
operating system.
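For instance, applying these conventions to the rescan command, both of the
following are valid, with controller 0 substituted for the variable cx:

//localhost> /c0 rescan
//localhost> /c0 rescan noscan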
Understanding RAID Levels and Concepts
3ware RAID controllers use RAID (Redundant Array of Independent Disks)
to increase your storage system’s performance and provide fault tolerance
(protection against data loss).
This section organizes information about RAID concepts and configuration
levels into the following topics:
•“RAID Concepts” on page 8
•“Available RAID Configurations” on page 9
•“Determining What RAID Level to Use” on page 15
RAID Concepts
The following concepts are important to understand when working with a
RAID controller:
• Arrays and Units. In the storage industry, the term “array” is used to
describe two or more disk drives that appear to the operating system as a
single unit. When working with a 3ware RAID controller, “unit” is the
term used to refer to an array of disks that is configured and managed
through the 3ware software. Single-disk units can also be configured in
the 3ware software.
• Mirroring. Mirrored arrays (RAID 1) write data to paired drives
simultaneously. If one drive fails, the data is preserved on the paired
drive. Mirroring provides data protection through redundancy. In
addition, mirroring using a 3ware RAID controller provides improved
performance because 3ware's TwinStor® technology reads from both
drives simultaneously.
• Striping. Striping across disks allows data to be written and accessed on
more than one drive, at the same time. Striping combines each drive's
capacity into one large volume. Striped disk arrays (RAID 0) achieve
highest transfer rates and performance at the expense of fault tolerance.
• Distributed Parity. Parity works in combination with striping on RAID 5,
RAID 6, and RAID 50. Parity information is written to each of the striped
drives, in rotation. Should a failure occur, the data on the failed drive can
be reconstructed from the data on the other drives.
• Hot Swap. The process of exchanging a drive without having to shut
down the system. This is useful when you need to exchange a defective
drive in a redundant unit.
• Array Roaming. The process of removing a unit from a controller and
putting it back later, either on the same controller, or a different one, and
having it recognized as a unit. The disks may be attached to different ports
than they were originally attached to, without harm to the data.
Available RAID Configurations
RAID is a method of combining several hard drives into one unit. It can offer
fault tolerance and higher throughput levels than a single hard drive or group
of independent hard drives. LSI's 3ware controllers support RAID 0, 1, 5, 6,
10, 50, and Single Disk. The information below provides a more in-depth
explanation of the different RAID levels.
RAID 0
RAID 0 provides improved performance, but no fault tolerance. Since the
data is striped across more than one disk, RAID 0 disk arrays achieve high
transfer rates because they can read and write data on more than one drive
simultaneously. The stripe size is configurable during unit creation. RAID 0
requires a minimum of two drives.
When drives are configured in a striped disk array (see Figure 1), large files
are distributed across the multiple disks using RAID 0 techniques.
Striped disk arrays give exceptional performance, particularly for data
intensive applications such as video editing, computer-aided design and
geographical information systems.
RAID 0 arrays are not fault tolerant. The loss of any drive results in the loss of
all the data in that array, and can even cause a system hang, depending on
your operating system. RAID 0 arrays are not recommended for high
availability systems unless additional precautions are taken to prevent system
hangs and data loss.
Figure 1. RAID 0 Configuration Example
RAID 1
RAID 1 provides fault tolerance and a speed advantage over non-RAID disks.
RAID 1 is also known as a mirrored array. Mirroring is done on pairs of
drives. Mirrored disk arrays write the same data to two different drives using
RAID 1 algorithms (see Figure 2). This gives your system fault tolerance by
preserving the data on one drive if the other drive fails. Fault tolerance is a
basic requirement for critical systems like web and database servers.
3ware uses a patented technology, TwinStor®, on RAID 1 arrays for improved
performance during sequential read operations. With TwinStor technology,
read performance is twice the speed of a single drive during sequential read
operation.

The adaptive algorithms in TwinStor technology boost performance by
distinguishing between random and sequential read requests. For the
sequential requests generated when accessing large files, both drives are used,
with the heads simultaneously reading alternating sections of the file. For the
smaller random transactions, the data is read from a single optimal drive head.

Figure 2. RAID 1 Configuration Example
RAID 5
RAID 5 provides performance, fault tolerance, high capacity, and storage
efficiency. It requires a minimum of three drives and combines striping data
with parity (exclusive OR) to restore data in case of a drive failure.
Performance and efficiency increase as the number of drives in a unit
increases.
Parity information is distributed across all of the drives in a unit rather than
being concentrated on a single disk (see Figure 3). This avoids throughput
loss due to contention for the parity drive.
RAID 5 is able to tolerate 1 drive failure in the unit.
Figure 3. RAID 5 Configuration Example
RAID 6
RAID 6 provides greater redundancy and fault tolerance than RAID 5. It is
similar to RAID 5, but has two blocks of parity information (P+Q) distributed
across all the drives of a unit, instead of the single block of RAID 5.
Due to the two parities, a RAID 6 unit can tolerate two hard drives failing
simultaneously. This also means that a RAID 6 unit may be in two different
states at the same time. For example, one sub-unit can be degraded, while
another may be rebuilding, or one sub-unit may be initializing, while another
is verifying.
The 3ware implementation of RAID 6 requires a minimum of five drives.
Performance and storage efficiency also increase as the number of drives
increase.
Figure 4. RAID 6 Configuration Example
RAID 10
RAID 10 is a combination of striped and mirrored arrays for fault tolerance
and high performance.
When drives are configured as a striped mirrored array, the disks are
configured using both RAID 0 and RAID 1 techniques, thus the name RAID
10 (see Figure 5). A minimum of four drives are required to use this
technique. The first two drives are mirrored as a fault tolerant array using
RAID 1. The third and fourth drives are mirrored as a second fault tolerant
array using RAID 1. The two mirrored arrays are then grouped as a striped
RAID 0 array using a two tier structure. Higher data transfer rates are
achieved by leveraging TwinStor and striping the arrays.
In addition, RAID 10 arrays offer a higher degree of fault tolerance than
RAID 1 and RAID 5, since the array can sustain multiple drive failures
without data loss. For example, in a twelve-drive RAID 10 array, up to six
drives can fail (half of each mirrored pair) and the array will continue to
function. Please note that if both halves of a mirrored pair in the RAID 10
array fail, then all of the data will be lost.
Figure 5. RAID 10 Configuration Example
RAID 50
RAID 50 is a combination of RAID 5 with RAID 0. This array type provides
fault tolerance and high performance. RAID 50 requires a minimum of six
drives.
Several combinations are available with RAID 50. For example, on a 12-port
controller, you can have a grouping of 3, 4, or 6 drives. A grouping of 3 means
that the RAID 5 arrays used have 3 disks each; four of these 3-drive RAID 5
arrays are striped together to form the 12-drive RAID 50 array. On a 16-port
controller, you can have a grouping of 4 or 8 drives.
No more than four RAID 5 subunits are allowed in a RAID 50 unit. For
example, a 24-drive RAID 50 unit may have groups of 12, 8, or 6 drives, but
not groups of 4 or 3.
In addition, RAID 50 arrays offer a higher degree of fault tolerance than
RAID 1 and RAID 5, since the array can sustain multiple drive failures
without data loss. For example, in a twelve-drive RAID 50 array, up to one
drive in each RAID 5 set can fail and the array will continue to function.
Please note that if two or more drives in a RAID 5 set fail, then all of the data
will be lost.
Figure 6. RAID 50 Configuration Example
Single Disk
A single drive can be configured as a unit through 3ware software (3BM,
3DM 2, or CLI). Like disks in other RAID configurations, single disks
contain 3ware Disk Control Block (DCB) information and are seen by the OS
as available units.
Single drives are not fault tolerant and therefore not recommended for high
availability systems unless additional precautions are taken to prevent system
hangs and data loss.
JBOD
A JBOD (acronym for “Just a Bunch of Disks”) is an unconfigured disk
attached to your 3ware RAID controller. Creation of JBOD configuration is
not supported in the 3ware 9750 series. New single disk units must be created
as “Single Disk.”
JBOD units are not fault tolerant and therefore not recommended for high
availability systems unless additional precautions are taken to prevent system
hangs and data loss.
Hot Spare
A hot spare is a single drive, available online, so that a redundant unit can be
automatically rebuilt in case of drive failure.
Determining What RAID Level to Use
Your choice of which type of RAID unit (array) to create will depend on your
needs. You may wish to maximize speed of access, total amount of storage, or
redundant protection of data. Each type of RAID unit offers a different blend
of these characteristics.
The following table provides a brief summary of RAID type characteristics.
Table 2: RAID Configuration Types

RAID Type     Description
RAID 0        Provides performance, but no fault tolerance.
RAID 1        Provides fault tolerance and a read speed advantage over
              non-RAID disks.
RAID 5        This type of unit provides performance, fault tolerance, and
              high storage efficiency. RAID 5 units can tolerate one drive
              failing before losing data.
RAID 6        Provides very high fault tolerance with the ability to protect
              against two consecutive drive failures. Performance and
              efficiency increase with higher numbers of drives.
RAID 10       A combination of striped and mirrored units for fault
              tolerance and high performance.
RAID 50       A combination of RAID 5 and RAID 0. It provides high fault
              tolerance and performance.
Single Disk   Not a RAID type, but supported as a configuration. Provides
              for maximum disk capacity with no redundancy.
You can create one or more units, depending on the number of drives you
have installed.
Table 3: Possible Configurations Based on Number of Drives

# Drives    Possible RAID Configurations
1           Single disk
2           RAID 0 or RAID 1
3           RAID 0
            RAID 1 with hot spare
            RAID 5
4           RAID 5 with hot spare
            RAID 10
            Combination of RAID 0, RAID 1, single disk
5           RAID 6
            RAID 5 with hot spare
            RAID 10 with hot spare
            Combination of RAID 0, RAID 1, hot spare, single disk
6 or more   RAID 6
            RAID 6 with hot spare
            RAID 50
            Combination of RAID 0, 1, 5, 6, 10, hot spare, single disk
Using Drive Capacity Efficiently
To make the most efficient use of drive capacity, it is advisable to use drives
of the same capacity in a unit. This is because the capacity of each drive is
limited to the capacity of the smallest drive in the unit.
The total unit capacity is defined as follows:
Table 4: Drive Capacity

RAID Level    Capacity
Single Disk   Capacity of the drive
RAID 0        (number of drives) X (capacity of the smallest drive)
RAID 1        Capacity of the smallest drive
RAID 5        (number of drives - 1) X (capacity of the smallest drive)
              Storage efficiency increases with the number of disks:
              storage efficiency = (number of drives - 1)/(number of drives)
RAID 6        (number of drives - 2) X (capacity of the smallest drive)
RAID 10       (number of drives / 2) X (capacity of the smallest drive)
RAID 50       (number of drives - number of groups of drives) X (capacity of
              the smallest drive)
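As a worked illustration of these formulas, four 1 TB drives configured as
RAID 5 provide (4 - 1) X 1 TB = 3 TB of usable capacity, for a storage
efficiency of 3/4; the same four drives configured as RAID 10 provide
(4 / 2) X 1 TB = 2 TB.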
Through drive coercion, the capacity used for each drive is rounded down so
that drives from differing manufacturers are more likely to be able to be used
as spares for each other. The capacity used for each drive is rounded down to
the nearest GB for drives under 45 GB (45,000,000,000 bytes), and rounded
down to the nearest 5 GB for drives over 45 GB. For example, a 44.3 GB
drive will be rounded down to 44 GB, and a 123 GB drive will be rounded
down to 120 GB.
Note: All drives in a unit must be of the same type, either SAS or SATA.
Support for Over 2 Terabytes
Legacy operating systems such as Windows 2000, Windows XP (32-bit),
Windows 2003 (32-bit and 64-bit without SP1), and Linux 2.4, do not
recognize unit capacity in excess of 2 TB.
If the combined capacity of the drives to be connected to a unit exceeds 2
Terabytes (TB), you can enable auto-carving when you configure your units.
Auto-carving divides the available unit capacity into multiple chunks of 2 TB
or smaller that can be addressed by the operating systems as separate
volumes. The carve size is adjustable from 1024 GB to 2048 GB (default)
prior to unit creation.
If a unit over 2 TB was created prior to enabling the auto-carve option, its
capacity visible to the operating system will still be 2 TB; no additional
capacity will be registered. To change this, the unit has to be recreated.
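For example, using the controller commands described later in this guide
(controller 0 is only an illustration), auto-carving could be enabled and
checked before the unit is created; this is a sketch, not a required sequence:

//localhost> /c0 set autocarve=on
//localhost> /c0 set carvesize=2048
//localhost> /c0 show autocarve
//localhost> /c0 show carvesize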
You may also want to refer to Knowledge Base article # 13431, at
https://selfservice.lsi.com/service/main.jsp. (Use Advanced search and enter
the KB # as a keyword.)
2. CLI Syntax Reference
This chapter provides detailed information about using the command syntax
for the 3ware CLI.
Throughout this chapter the examples reflect the interactive method of
executing 3ware CLI.
Note: The output of some commands varies somewhat for different types of
controllers, and may vary if you have an enclosure attached. For most commands
where this is the case, examples are provided to show the differences.
Common Tasks Mapped to CLI Commands
The table below lists many of the tasks people use to manage their RAID
controllers and units, and lists the primary CLI command associated with
those tasks.
Table 5: Common Tasks Mapped to CLI Commands

Controller Configuration Tasks
• View information about a controller: /cx show (page 33)
• View controller policies and other details: /cx show [attribute] [attribute]
  (page 35)
• View drive performance statistics: /cx show dpmstat [type=inst|ra|ext]
  (page 38)
• Set policies for a controller:
  • Modify staggered spinup: /cx set stagger and /cx set spinup (page 75)
  • Disable write cache on unit degrade: /cx set ondegrade (page 75)
  • Enable/disable autocarving: /cx set autocarve (page 75)
  • Enable/disable autorebuild: /cx set autorebuild (page 76)
  • Set the autocarve volume size: /cx set carvesize (page 75)
  • Enable/disable drive performance monitoring statistics (dpmstat):
    /cx set dpmstat (page 68)

Unit Configuration Tasks
• View information about a unit: /cx/ux show (page 80)
• Create a new unit: /cx add (page 56)
• Create a hot spare: /cx add (page 56)
• Enable/disable unit write cache: /cx/ux set cache or /cx/ux set wrcache (page 92)
• Enable Basic or Intelligent read cache, or disable both: /cx/ux set rdcache (page 92)
• Set the queue policy: /cx/ux set qpolicy (page 94)
• Set the rapid RAID recovery policy: /cx/ux set rapidrecovery (page 95)
• Set the storsave profile: /cx/ux set storsave (page 95)

Unit Configuration Changes
• Change RAID level: /cx/ux migrate (page 97)
• Change stripe size: /cx/ux migrate (page 97)
• Expand unit capacity: /cx/ux migrate (page 97)
• Delete a unit: /cx/ux del (page 87)
• Remove a unit (export): /cx/ux remove (page 87)
• Name a unit: /cx/ux set name (page 94)

Controller Maintenance Tasks
• Update controller with new firmware: /cx update (page 63)
• Add a time slot to a rebuild schedule: /cx add rebuild (page 64)
• Add a time slot to a verify schedule: /cx add verify (page 65)
• Add a time slot to a selftest schedule: /cx add selftest (page 67)
• Enable/disable the initialize/rebuild/migrate schedule and set the task rate:
  /cx set rebuild (page 69)
• Enable/disable the verify schedule and set the task rate: /cx set verify (page 71)
• Set the verify schedule to advanced or basic: /cx set verify=advanced|basic|1..5
  (page 72)
• Set the rebuild/migrate task rate: /cx set rebuildrate (page 70)
• Set the rebuild/migrate task mode: /cx set rebuildmode (page 70)
• Set the verify task rate: /cx set verifyrate (page 74)
• Set the verify task mode: /cx set verifymode (page 73)
• Set the basic verify start time and day: /cx set verify=basic [pref=ddd:hh]
  (page 72)
• Enable/disable the selftest schedule: /cx set selftest (page 74)
• View controller alarms: /cx show alarms, /cx show events, or /cx show AENs
  (page 46)

Unit Maintenance Tasks
• Start a rebuild: /cx/ux start rebuild (page 88)
• Start a verify: /cx/ux start verify (page 88)
• Pause/resume rebuild: /cx/ux pause rebuild and /cx/ux resume rebuild (page 90)
• Stop verify: /cx/ux stop verify (page 90)
• Enable/disable autoverify: /cx/ux set autoverify (page 90)
• Identify all drives that make up a unit by blinking associated LEDs:
  /cx/ux set identify (page 64)

Port Tasks
• Locate drive by blinking an LED: /cx/px set identify (page 113)
• Check if LED is set to on or off: /cx/px show identify (page 106)
• View information for specific drive: /cx/px show (page 104)
• View the status of specific drive: /cx/px show status (page 107)
• Show statistics for the drive on a particular port:
  /cx/px show dpmstat type=inst|ra|lct|histdata|ext (page 110)
• Clear statistics counters for a particular drive:
  /cx/px set dpmstat=clear [type=ra|lct|ext] (page 114)

PHY Tasks
• View details about link speed for a specified phy: /cx/phyx show (page 115)
• Set the link speed for a specified phy: /cx/phyx set link=auto|1.5|3.0|6.0
  (page 115)

BBU Tasks
• Check on charge and condition of battery: /cx/bbu show status (page 118)
• Start a test of the battery: /cx/bbu test [quiet] (page 120)

Enclosure Tasks
• View information about an enclosure and its components: /cx/ex show (page 123)
• Locate a drive slot in an enclosure by blinking an LED:
  /cx/ex/slotx set identify (page 128)
• Locate a fan in an enclosure by blinking an LED: /cx/ex/fanx set identify
  (page 129)
• Set the speed for a fan in an enclosure: /cx/ex/fanx set speed (page 129)
• Locate a power supply in an enclosure by blinking an LED:
  /cx/ex/pwrsx set identify (page 131)
• Locate a temperature sensor in an enclosure by blinking an LED:
  /cx/ex/tempx set identify (page 132)
• Turn off or mute an audible alarm in an enclosure: /cx/ex/almx set alarm
  (page 132)
Terminology
3ware SATA+SAS RAID Controller Card CLI Guide, Version 10.0 uses the
following terminology:
Logical Units. Usually shortened to “units.” These are block devices
presented to the operating system. A logical unit can be a one-tier, two-tier, or
three-tier arrangement. Spare and Single logical units are examples of one-tier
units. RAID 1 and RAID 5 are examples of two-tier units and as such will
have sub-units. RAID 10 and RAID 50 are examples of three-tier units and as
such will have sub-sub-units.
Port. 3ware controller models up to the 9650SE series have one or many ports
(typically 4, 8, 12, 16, or 24). Each port can be attached to a single disk drive.
On a controller such as the 9650SE with a multilane serial port connector, one
connector supports four ports. On 9750 and 9690SA series controllers,
connections are made with phys and vports (virtual port).
Phy. Phys are transceivers that transmit and receive the serial data stream that
flows between the controller and the drives. 3ware 9750 and 9690SA
controllers have 8 phys. These “controller phys” are associated with virtual
ports (vports) by 3ware software to establish up to 128 potential connections
with SAS or SATA hard drives. Each controller phy can be connected directly
to a single drive, or can be connected through an expander to additional
drives.
VPort. Connections from 3ware 9750 and 9690SA controllers to SAS or
SATA drives are referred to as virtual ports, or VPorts. A VPort indicates the
ID of a drive, whether it is directly connected to the controller, or cascaded
through one or more expanders. The VPort, in essence, is a handle in the
software to uniquely identify a drive. The VPort ID or port ID allows a drive
to be consistently identified, used in a RAID unit, and managed. For
dual-ported drives, although there are two connections to a drive, the drive is
still identified with one VPort handle.
Note: For practical purposes, port and VPort are used interchangeably in this
document in reference to a drive (or disk). Therefore, unless otherwise specified,
the mention of port implies VPort as well. For example, when “port” is used to
indicate a drive, it is implied that for the applicable controller series, the reference
also applies to VPort.
For additional information about 3ware controller concepts and terminology,
see the user guide PDF for your 3ware RAID controller or the user guide
portions of the 3ware HTML Bookshelf.
Syntax Overview
The command syntax uses the general form:
Object Command Attributes
Objects are shell commands, controllers, units, ports (drives), BBUs (battery
backup units), and enclosures.
Commands can either select (show, get, present, read) attributes or alter (add,
change, set, write) attributes.
Attributes are either Boolean Attributes or Name-Value Attributes.
•The value of a boolean attribute is deduced by presence or lack of—that
is, the attribute is either specified, or not. For example, the command
show alarms by default lists controller alarms with the oldest alarm first.
If you include the attribute reverse, as in the command show alarms reverse, alarms are listed in reverse order, with the most recent alarm
first.
•The value of name-value attributes are expressed in the format
attribute=value.
Example: When adding (creating) a unit to the controller with the following
command string,
/c1 add type=raid1 disk=0-1

c1 is the object, add is the command, type (for type of array) is an attribute
with raid1 as the value of the attribute, and disk is another attribute with
0-1 as the value (ports 0 through 1).
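A brief illustration of the two attribute styles, using commands documented
elsewhere in this guide: reverse is a Boolean attribute that is simply present or
absent, while carvesize uses the name-value form.

//localhost> show alarms reverse
//localhost> /c0 set carvesize=2048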
Information about commands is organized by the object on which the
commands act:
Shell Object Commands. Shell object commands set the focus or provide
information (such as alarms, diagnostics, rebuild schedules, and so forth)
about all controllers in the system. For details, see “Shell Object Commands”
on page 24.
Controller Object Commands. Controller object commands provide
information and perform actions related to a specific controller. For example,
you use controller object commands for such tasks as seeing alarms specific
to a controller, creating schedules during which background tasks are run, and
setting policies for the controller. You also use the controller object command
/cx add type to create RAID arrays. For details, see “Controller Object
Commands” on page 31.
Unit Object Commands. Unit object commands provide information and
perform actions related to a specific unit on a specific controller. For example,
you use unit object commands for such tasks as seeing the rebuild, verify, or
initialize status of a unit, starting, stopping, and resuming verifies, starting
and stopping rebuilds, and setting policies for the unit. You also use the
unit object command /cx/ux migrate to change the configuration of a
RAID array. For details, see “Unit Object Commands” on page 79.
Phy Object Commands. Phy object commands provide information and
perform actions related to a specific phy on a 9750 or 9690SA controller.
Port Object Commands. Port object commands provide information and
perform actions related to a drive on a specific port or vport. You can use port
object commands for such tasks as seeing the status, model, or serial number
of the drive. For details, see “Port Object Commands” on page 104.
BBU Object Commands. BBU object commands provide information and
perform actions related to a Battery Backup Unit on a specific controller. For
details, see “BBU Object Commands” on page 116.
Enclosure Object Commands. Enclosure object commands provide
information and perform actions related to a particular enclosure. For
example, you can use enclosure object commands to see information about an
enclosure and its elements (slots, fan, and temperature sensor elements).
Help Commands. Help commands allow you to display help information for
all commands and attributes. For details, see “Help Commands” on page 133.
Shell Object Commands
Shell object commands are either applicable to all the controllers in the
system (such as show, rescan, flush, commit), or redirect the focused object.
Syntax
focus object
commit
flush
rescan
show [attribute [modifier]]
alarms [reverse]
diag
rebuild
selftest
ver
verify
update fw=filename_with_path [force]
focus Object

The focus command is active in interactive mode only and is provided to
reduce typing.

The focus command will set the specified object in focus and change the
prompt to reflect this. This allows you to enter a command that applies to the
focus, instead of having to type the entire object name each time.

For example, where normally you might type:
/c0/u0 show

If you set the focus to /c0/u0, the prompt changes to reflect that, and you
only have to type show. The concept is similar to being in a particular location
in a file system and requesting a listing of the current directory.

object can have the following forms:

/cx/ux specifies the fully qualified URI (Universal Resource Identifier) of an
object on controller cx, unit ux.
.. specifies one level up (the parent object).
/ specifies the root.
./object specifies the next level of the object.
/c0/bbu specifies a relative path with respect to the current focused
hostname.

Example:
//localhost> focus /c0/u0
//localhost/c0/u0>
//localhost/c0/u0> focus ..
//localhost/c0>
//localhost> focus u0
//localhost/c0/u0>
//localhost/c0> focus /
//localhost>

The focus command is available by default. You can disable focus by setting
TW_CLI_INPUT_STYLE to old. (See “Return Code” on page 141.)

commit

This command sends a commit command to all 3ware controllers in the
system. For more information, see “/cx commit” on page 63.
flush
This command sends a flush command to all 3ware controllers in the system.
For more information, see “/cx flush” on page 63.
rescan
This command sends a rescan command to all 3ware controllers in the system.
For more information, see “/cx rescan [noscan]” on page 62.
show
This command shows a general summary of all detected controllers and
enclosures.
The output of this command will vary, depending upon your controller model
and whether there is an enclosure with an expander attached.
Note that the device drivers for the appropriate operating system should be
loaded for the list to show all controllers. The intention is to provide a global
view of the environment.
Example for controller without an enclosure and expander:
Typical output of the Show command for a controller looks like the following:

//localhost> show
Ctl   Model        Ports   Drives   Units   NotOpt   RRate   VRate   BBU
------------------------------------------------------------------------
c0    9590SE-4ME   4       4        1       0        2       5       -
The output above indicates that Controller 0 is a 9590SE-4ME model with 4
Ports, with 4 Drives detected (attached), total of 1 Unit, with no units in a
NotOpt (Not Optimal) state, RRate (Rebuild Rate) of 2, VRate (Verify Rate)
of 5, BBU of '-' (Not Applicable). Not Optimal refers to any state except OK
and VERIFYING. Other states include VERIFY-PAUSED, INITIALIZING,
INIT-PAUSED, REBUILDING, REBUILD-PAUSED, DEGRADED,
MIGRATING, MIGRATE-PAUSED, RECOVERY, INOPERABLE, and
UNKNOWN. RRate also applies to initializing, migrating, and recovery
background tasks. (Definitions of the unit statuses are available in the 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.)
Example for 9690SA-4I4E with enclosure and expander:

Typical output of the Show command for a system with an enclosure,
expander, and a 9690SA-4I4E controller looks like the following:

//localhost> show
Ctl   Model   (V)Ports   Drives   Units   NotOpt   RRate   VRate   BBU

show alarms [reverse]

This command shows the controller alarms or events, also known as AEN
(Asynchronous Event Notification) messages, of all controllers in the system.
The default is to display the most recent messages at the bottom. The reverse
attribute displays the most recent message at the top. For more information,
see “/cx show alarms [reverse]” on page 46.
show events [reverse]
This command is the same as “show alarms [reverse]”. Please see above for
details.
show AENs [reverse]
This command is the same as “show alarms [reverse]”. Please see above for
details.
show diag
This command shows the diagnostic information of all controllers in the
system. The enclosure diagnostic log may be requested by 3ware Customer
Support to help troubleshoot problems on your controller.
show rebuild
This command displays all rebuild schedules for the 9000 series controllers in
the system.
The rebuild rate is also applicable for initializing, migrating, and recovery
background tasks.
Example:
//localhost> show rebuild
Rebuild Schedule for Controller /c0
========================================================
Slot   Day   Hour   Duration   Status
For additional information about rebuild schedules, see “/cx add
rebuild=ddd:hh:duration” on page 64, and see the discussion of background
tasks and schedules in 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.
show selftest
This command displays all selftest schedules for the 9000 series controllers in
the system.
Example:
//localhost> show selftest
Selftest Schedule for Controller /c0
========================================================
Slot   Day   Hour   UDMA   SMART
For additional information about selftest schedules, see “/cx add
selftest=ddd:hh” on page 67, and see the discussion of background tasks and
schedules in 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.
show ver

This command will show the CLI and API version.

Example:
//localhost> show ver
CLI Version = 2.00.03.0xx
API Version = 2.01.00.xx

In the above example, "xx" stands for the actual version. See the Release
Notes for details.

show verify

This command displays all verify schedules for the 9000 series controllers in
the system. The output shown will be either the advanced or the basic verify
schedule, depending upon which is enabled for each controller. Basic verify is
supported on 9750 and 9690SA controllers, and on 9650SE controllers
running 9.5.1 or later.
Example:
This example shows two controllers, one with an advanced verify schedule
and one with a basic verify schedule.
//localhost> show verify
Verify Schedule for Controller /c2
========================================================
Slot   Day   Hour      Duration   AdvVerify
--------------------------------------------------------
1      Sun   12:00am   24 hr(s)   on
2      Mon   12:00am   24 hr(s)   on
3      Wed   4:00pm    24 hr(s)   on
4      Wed   12:00am   24 hr(s)   on
5      Thu   12:00am   24 hr(s)   on
6      Fri   12:00am   24 hr(s)   on
7      Sat   12:00am   24 hr(s)   on
For additional information about verify schedules, see “/cx add
verify=ddd:hh:duration” on page 65, “/cx set verify=basic [pref=ddd:hh]” on
page 72, and see the discussion of background tasks and schedules in 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.
update fw=filename_with_path [force]

This command downloads the specified firmware image to the controllers that
are compatible with it and iterates through all the controllers in the system,
updating the firmware. For more information, see “/cx update
fw=filename_with_path [force]” on page 63.
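As a sketch only, with a hypothetical image path standing in for the firmware
file supplied for your controller, the command takes the form:

//localhost> update fw=/tmp/firmware_image.img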
Controller Object Commands
Controller object commands provide information and perform actions related
to a specific controller, such as /c0. For example, you use controller object
commands to see alarms specific to a controller, to create schedules for when
background tasks are run, and to set policies for the controller. You also use
the controller object command /cx add type to create RAID arrays.
Note: Features indicated as “9690SA only,” “9000 series,” or “9000 series SE/SA
only” also apply to 9750 controllers.
Syntax
/cx show
/cx show attribute [attribute ...] where attributes are:
achip|allunitstatus|
autocarve(9000 series SX/SE/SA only)|
autorebuild(9000 series SX/SE/SA only)|bios|
carvesize(9000 series SX/SE/SA only)|
ctlbus(9000 series SX/SE/SA only)|
dpmstat[type=inst|ra|ext](9000 series SX/SE/SA only;
however type=ext is only for SE/SA)
driver|drivestatus|firmware|memory|model|monitor|
numdrives|numports|numunits|ondegrade(9500S only)|pcb|
pchip|serial|spinup|stagger|unitstatus|
/cx show all (where all means attributes and configurations)
/cx show diag
/cx show alarms [reverse]
/cx show events [reverse]
/cx show AENS [reverse]
/cx show rebuild (9000 series)
/cx show rebuildmode (9000 series SE/SA only)
/cx show rebuildrate (9000 series SE/SA only)
/cx show verify (9000 series)
/cx show verifymode (9000 series SE/SA only)
/cx show verifyrate (9000 series SE/SA only)
/cx show selftest (9000 series)
/cx show phy (9750 and 9690SA only)
/cx add type=<RaidType>
(RaidType={raid0,raid1,raid5,raid6(9650SE and higher
/cx set dpmstat=on|off (9000 series SX/SE/SA only)
/cx del rebuild=slot_id (9000 series)
/cx del verify=slot_id (9000 series)
/cx del selftest=slot_id (9000 series)
/cx set ondegrade=cacheoff|follow (9500S only)
/cx set spinup=nn (9000 series)
/cx set stagger=nn (9000 series)
/cx set autocarve=on|off (9000 series SX/SE/SA only)
/cx set carvesize=[1024...32768] (9000 series SX/SE/SA only)
/cx set rebuild=enable|disable|1..5 (9000 series)
/cx set rebuildmode=<adaptive|lowlatency> (9000 series SE/SA
only)
/cx set rebuildrate=<1..5> (9000 series SE/SA only)
/cx set autorebuild=on|off (9000 series SX/SE/SA only)
/cx set autodetect=on|off disk=<p:-p>|all
/cx set verify=enable|disable|1..5 (9000 series)
/cx set verify=advanced|basic|1..5 (9000 series SE/SA only)
/cx set verifymode=<adaptive|lowlatency> (9000 series SE/SA
only)
/cx set verifyrate=<1..5> (9000 series SE/SA only)
/cx set verify=basic [pref=ddd:hh] (9000 series SE/SA only)
/cx set selftest=enable|disable [task=UDMA|SMART](9000
series)
(9000 series SE/SA)
/cx flush
/cx update fw=filename_with_path [force] (9000 series)
/cx commit (Windows only. Also known as shutdown)
/cx start mediascan (7000/8000 only)
/cx stop mediascan (7000/8000 only)
/cx rescan [noscan]
/cx show
This command shows summary information on the specified controller /cx.
This information is organized into a report containing two to three parts:
•A Unit summary listing all present units
•A Port summary section listing of all ports (or virtual ports) and disks
attached to them.
•A BBU summary section listing, if a BBU is installed on the controller.
The Unit summary section lists all present units and specifies their unit
number, unit type (such as RAID 5), unit status (such as INITIALIZING), %R
(percent completion of rebuilding), %V/I/M (percent completion of
verifying, initializing, or migrating), stripe size, size (usable capacity) in
gigabytes, the write cache setting, the read cache setting (if supported by your
controller), and the auto-verify policy status (on/off).
Possible unit statuses include OK, RECOVERY, INOPERABLE,
UNKNOWN, DEGRADED, INITIALIZING, INIT-PAUSED, VERIFYING,
VERIFY-PAUSED, REBUILDING, REBUILD-PAUSED, MIGRATING, and
MIGRATE-PAUSED. Definitions of the unit statuses are available in the
3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.
Note: If an asterisk (*) appears next to the status of a unit, there is an error on one
of the drives in the unit. This feature provides a diagnostic capability for potential
problem drives. The error may not be a repeated error, and may be caused by an
ECC error, SMART failure, or a device error. Rescanning the controller will clear
the drive error status if the condition no longer exists.
For controllers with read cache support (9650SE and newer controllers with
release 9.5.2 or later), the 'Cache' column displays the settings of both the read
cache and the write cache.
Below is a summary of the possible settings in the Cache column:
W - only the write cache is enabled
Rb - only the read cache Basic Mode is enabled
Ri - only the read cache Intelligent Mode is enabled
RbW - the read cache Basic Mode and the write cache are both enabled
RiW - the read cache Intelligent Mode and the write cache are both enabled
OFF - all caches are disabled
Note that when the Intelligent Mode of the read cache is enabled, the Basic
Mode features are also enabled. For details, see “/cx/ux set
rdcache=basic|intelligent|off” on page 92.
For earlier controllers, the Cache column displays only the write cache setting
of ON or OFF.
For the 9750 and 9690SA controller models, and 9650SE controllers with
Release 9.5.2 or later, this section lists the ports or virtual ports present, and
for each port, specifies the port or vport number, drive status, unit affiliation,
drive type, phy number (if direct attached), the enclosure and slot (if
expander attached), and model number of the drive.
For earlier controller models, up to the 9550SX and the 9650SE with Release
9.5.1 or earlier, the Port summary section lists all present ports and for each
port specifies the port number, disk status, unit affiliation, size (in gigabytes)
and blocks (512 bytes), and the serial number assigned by the disk vendor.
Note: For 9750 and 9690SA controllers, and for 9650SE controllers with Release
9.5.2 or later, if a drive is not present, that port entry is not listed. This is different
from displays for the 9550SX and older models, which showed the port with the
status NOT-PRESENT with dashes ('-') across the columns in the summary table.
Consequently, for newer controllers, the port numbers in the list may not be
sequential. Moreover, if there are no drives present at all for the specified controller,
the output of its Port Summary will show an empty summary consisting of only the
header
The BBU summary lists details about the BBU, if one is installed. It lists the
online state, readiness, and status of the BBU unit, along with the voltage,
temperature, charge capacity expressed as time remaining in hours, and the
BBU's last test date.
Additional attributes about controllers, units, ports and disks can be obtained
by querying for them explicitly. For details, see the other show subcommands.
Example output for 9750, 9690SA, and 9650SE with Release 9.5.2 or
later:
Note that the port information is represented by VPort (virtual port) and
Cache indicates both Read Cache and Write cache.
Unit  UnitType  Status  %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    SPARE     OK      -       -       -       149.042   -      OFF
u1    Single    OK      -       -       -       149.051   RiW    OFF

VPort  Status  Unit  Size       Type  Phy  Encl-Slot  Model
------------------------------------------------------------------------------
p0     OK      -     149.05 GB  SATA  3    -          WDC WD1600JS-22NCB1
p1     OK      u0    149.05 GB  SATA  0    -          WDC WD1600JS-22NCB1
p2     OK      u1    149.05 GB  SATA  2    -          WDC WD1600JS-22NCB1
p3     OK      -     34.18 GB   SAS   6    -          SEAGATE ST936701SS
Example output for earlier controllers:
//localhost> /c2 show

Unit  UnitType  Status  %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-5    OK      -       -       64K     596.004   ON     OFF
u1    RAID-0    OK      -       -       64K     298.002   ON     OFF
u2    SPARE     OK      -       -       -       149.042   -      OFF
u3    RAID-1    OK      -       -       -       149.001   ON     OFF

Port  Status       Unit  Size       Blocks     Serial
---------------------------------------------------------------
p0    OK           u0    149.05 GB  312581808  WD-WCANM1771318
p1    OK           u0    149.05 GB  312581808  WD-WCANM1757592
p2    OK           u0    149.05 GB  312581808  WD-WCANM1782201
p3    OK           u0    149.05 GB  312581808  WD-WCANM1753998
p4    OK           u2    149.05 GB  312581808  WD-WCANM1766952
p5    OK           u3    149.05 GB  312581808  WD-WCANM1882472
p6    OK           u0    149.05 GB  312581808  WD-WCANM1883862
p7    OK           u3    149.05 GB  312581808  WD-WCANM1778008
p8    OK           -     149.05 GB  312581808  WD-WCANM1770998
p9    NOT-PRESENT  -     -          -          -
p10   OK           u1    149.05 GB  312581808  WD-WCANM1869003
p11   OK           u1    149.05 GB  312581808  WD-WCANM1762464

Name  OnlineState  BBUReady  Status  Volt  Temp  Hours  LastCapTest
---------------------------------------------------------------------
bbu   On           Yes       OK      OK    OK    241    22-Jun-2004

/cx show attribute [attribute ...]
This command shows the current setting of the specified attributes on the
specified controller. One or many attributes can be specified. Specifying an
invalid attribute will terminate the loop. Possible attributes are: achip,
allunitstatus, autocarve (9000 series SX/SE/SA only), autorebuild (9000
series SX/SE/SA only), bios, carvesize (9000 series SX/SE/SA only), driver,
drivestatus, firmware, memory, model, monitor, numdrives, numports,
numunits, ctlbus (9000 series SX/SE/SA only), ondegrade (9500S), pcb,
pchip, qpolicy, serial, spinup (9000 series), stagger (9000 series), and
unitstatus.
Example: To see the driver and firmware installed on controller 0, enter the
following:
//localhost> /c0 show driver firmware
/c0 Driver Version = 2.x
/c0 Firmware Version = FE9X 3.x
(In the sample output above, “x” will be replaced with the actual version
number.)
/cx show achip
For 9750 and 9690SA controllers, this command displays the SAS+SATA
IOC (i/o controller) version of the specified controller /cx. For older
controllers, this command reports the ACHIP (ATA Interface Chip) version of
the specified controller /cx.
Example:
//localhost> /c0 show achip
/c0 ACHIP Version = 3.x
/cx show allunitstatus
This command presents a count of total and Not Optimal units managed by
the specified controller /cx. For more about the meaning of Not Optimal, see
“Shell Object Commands” on page 24.
Example:
//localhost> /c0 show allunitstatus
/c0 Total Optimal Units = 2
/c0 Not Optimal Units = 0
/cx show autocarve
This feature only applies to 9750 controllers and 9000 series SX/SE/SA
controllers.
This command reports the Auto-Carve policy. If the policy is on, all newly
created or migrated units larger than the carvesize will be automatically
carved into multiples of carvesize volumes plus one remainder volume. Each
volume can be treated as an individual drive with its own file system. The
default carvesize is 2 TB. For more information, see "/cx show memory",
below.
For operating systems that support units larger than 2TB, there is no need to
set the policy to on unless you want the operating system to have multiple
smaller volumes.
If you use a 32-bit operating system, it is recommended that you keep the
policy on unless you know that your operating system supports disks that are
larger than 2 TB.
When the autocarve policy is off, all newly created units will consist of one
single volume.
Example:
//localhost> /c0 show autocarve
/c0 Auto-Carving Policy = on
/cx show autorebuild
This feature only applies to 9750 model controllers and 9000 series SX/SE/
SA model controllers.
This command shows the Auto-Rebuild policy. If the policy is enabled, the
firmware will select drives to use for rebuilding a degraded unit using the
following priority order. For more information, see “/cx set
autorebuild=on|off” on page 76.
1. Smallest usable spare drive.
2. Smallest usable unconfigured (available) drive.
3. Smallest usable failed drive.
If the policy is disabled, only spare drives will be used for an automatic
rebuild operation.
Example:
//localhost> /c0 show autorebuild
/c0 Auto-Rebuild Policy = on
/cx show bios
This command reports the BIOS version of controller /cx.
Example:
//localhost> /c0 show bios
/c0 BIOS Version = BG9X 2.x
/cx show carvesize
This feature only applies to 9750 model controllers and 9000 series SX/SE/
SA model controllers.
This command shows the maximum size of the volumes that will be created if
the autocarve policy is set to on. The carvesize can be set between 1024 GB
and 32768 GB (1 TB to 32 TB). Default carvesize is 2048 GB (2 TB). For
more information, see “/cx show autocarve” above.
Example:
//localhost> /c0 show carvesize
/c0 Auto-Carving Size = 2000 GB
/cx show ctlbus
This feature only applies to 9750 model controllers and 9000 series SX/SE/
SA model controllers.
This command reports the controller host bus type, bus speed, and bus width.
Example for 9690SA:
//localhost> /c2 show ctlbus
/c2 Controller Bus Type = PCIe
/c2 Controller Bus Width = 8 lanes
/c2 Controller Bus Speed = 2.5 Gbps/lane
/cx show driver
This command reports the device driver version associated with controller
/cx.
Example:
//localhost> /c0 show driver
/c0 Driver Version = 3.x
/cx show dpmstat [type=inst|ra|ext]
This feature only applies to 9750 model controllers and 9000 series SX/SE/
SA model controllers. The type=ext feature is only for SE/SA controllers.
This command shows the configuration and setting of the Drive Performance
Monitor, and a summary of statistics for drives attached to the controller.
The optional type attribute specifies which statistics will be displayed. The
available options are: inst for Instantaneous, ra for Running Average, and ext
for Extended Drive Statistics. If you do not specify a type, the display will
show the default set of drive statistics, which is the type inst.
inst (Instantaneous). This measurement provides a short duration average.
ra (Running Average). Running average is a measure of long-term averages
that smooth out the data, and results in older results fading from the average
over time.
ext (Extended Drive Statistics). The extended drive statistics refer to counts
of a drive's read commands, write commands, write commands with
FUA (Force Unit Access), and flush commands, and of the drive's sectors
read, written, and written with FUA.
Additional statistics are available for drives at specific ports. For details, see
“/cx/px show dpmstat type=inst|ra|lct|histdata|ext” on page 110.
Drive Performance Monitoring can be turned on and off using the command
“/cx set dpmstat=on|off” on page 68.
Example of inst drive statistics:
To display a summary of instantaneous data for the set of drives attached to
the controller, use the command /cx show dpmstat. (Since inst is the default,
you do not have to enter it explicitly in the command.)
Since this is a controller-level command, the output provides summary
information for the set of drives attached to the controller. For statistics about
a drive attached to a specific port, see “/cx/px show dpmstat
type=inst|ra|lct|histdata|ext” on page 110.
In the configuration information displayed below, the Performance Monitor is
shown to be On, “Version” refers to the firmware version of the Performance
Monitor, “Max commands for averaging” refers to the maximum number of
commands that can be saved and used for calculating the average, and “Max
latency commands to save” refers to the maximum number of commands with
high latency that are saved. The amount of statistics data in the buffer is
determined by these configurations and the memory constraints of the system.
These configuration settings cannot be changed at this time.
//localhost> /c0 show dpmstat
Drive Performance Monitor Configuration for /c0 ...
Performance Monitor: ON
Version: 1
Max commands for averaging: 100
Max latency commands to save: 10
Requested data: Instantaneous Drive Statistics
                   Queue        Xfer        Resp
Port Status Unit   Depth  IOPs  Rate(MB/s)  Time(ms)
Note: Depending on the amount of I/O and the rate or duration of the data
transfer, overflow of the buffers containing this data can occur. In this case,
the overflow is marked with “#######”, as shown in the example below. If
this occurs, you may want to zero out the counters by using the clear
command, “/cx/px set dpmstat=clear [type=ra|lct|ext]” on page 114.
Example of drive statistics overflow:
//localhost> /c3 show dpmstat type=ext
Extended Drive Statistics for /c3 ...
Sectors Commands
----------------------------------- -------------------------------------
Port Read Write Write-FUA Read Write Write-FUA Flush
/cx show drivestatus
This command reports a list of drives and their port assignment, status, the
unit with which they are associated, their size in gigabytes and blocks, and the
serial number assigned by the drive manufacturer. (Definitions of the drive
statuses are available in the 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.)
Example for 9650SE and earlier controllers:
//localhost> /c0 show drivestatus
Port Status Unit Size Blocks Serial
-------------------------------------------------------------
p0 OK u0 149.05 GB 312581808 3JS0TF14
p1 OK u0 149.05 GB 312581808 3JS0TETZ
p2 OK u1 149.05 GB 312581808 3JS0VG85
p3 OK u1 149.05 GB 312581808 3JS0VGCY
p4 OK u1 149.05 GB 312581808 3JS0VGGQ
p5 OK u2 149.05 GB 312581808 3JS0VH1P
p6 OK - 149.05 GB 312581808 3JS0TF0P
p7 OK - 149.05 GB 312581808 3JS0VF43
p8 OK - 149.05 GB 312581808 3JS0VG8D
p9 NOT-PRESENT - - - -
p10 NOT-PRESENT - - - -
p11 NOT-PRESENT - - - -
Example for 9750 and 9690SA controllers:
//localhost> /c2 show drivestatus
VPort Status Unit Size Type Phy Encl-Slot Model
-------------------------------------------------------------------------
p0 OK u0 34.25 GB SAS - /c2/e0/slt0 MAXTOR ATLAS15K2_36
p1 OK u0 34.25 GB SAS - /c2/e0/slt1 MAXTOR ATLAS15K2_36
p2 OK u0 34.25 GB SAS - /c2/e0/slt2 MAXTOR ATLAS15K2_36
p3 OK u0 34.18 GB SAS - /c2/e1/slt0 HITACHI HUS151436VL
p4 OK u0 34.18 GB SAS - /c2/e1/slt1 HITACHI HUS151436VL
p5 OK u0 34.18 GB SAS - /c2/e1/slt2 HITACHI HUS151436VL
p6 OK u0 34.25 GB SAS - /c2/e0/slt3 MAXTOR ATLAS15K2_36
p7 OK u0 34.25 GB SAS - /c2/e0/slt4 MAXTOR ATLAS15K2_36
p8 OK u0 34.25 GB SAS - /c2/e0/slt5 MAXTOR ATLAS15K2_36
p9 OK u0 34.25 GB SAS - /c2/e0/slt6 MAXTOR ATLAS15K2_36
p10 OK u0 34.18 GB SAS - /c2/e1/slt3 HITACHI HUS151436VL
p11 OK u0 34.18 GB SAS - /c2/e1/slt4 HITACHI HUS151436VL
p12 OK u0 34.18 GB SAS - /c2/e1/slt5 HITACHI HUS151436VL
p13 OK u0 34.18 GB SAS - /c2/e1/slt6 HITACHI HUS151436VL
p14 OK u0 34.25 GB SAS - /c2/e0/slt7 MAXTOR ATLAS15K2_36
p15 OK u0 34.25 GB SAS - /c2/e0/slt8 MAXTOR ATLAS15K2_36
p16 OK u0 34.25 GB SAS - /c2/e0/slt9 MAXTOR ATLAS15K2_36
p17 OK u0 34.25 GB SAS - /c2/e0/slt10 MAXTOR ATLAS15K2_36
p18 OK u0 34.18 GB SAS - /c2/e1/slt7 HITACHI HUS151436VL
p19 OK u0 34.18 GB SAS - /c2/e1/slt8 HITACHI HUS151436VL
p20 OK u0 34.18 GB SAS - /c2/e1/slt9 HITACHI HUS151436VL
p21 OK u0 34.18 GB SAS - /c2/e1/slt10 HITACHI HUS151436VL
p22 OK u0 34.25 GB SAS - /c2/e0/slt11 MAXTOR ATLAS15K2_36
p23 OK u0 34.25 GB SAS - /c2/e0/slt12 MAXTOR ATLAS15K2_36
p24 OK - 34.25 GB SAS - /c2/e0/slt13 MAXTOR ATLAS15K2_36
p25 OK - 34.25 GB SAS - /c2/e0/slt14 MAXTOR ATLAS15K2_36
p26 OK - 34.18 GB SAS - /c2/e1/slt11 HITACHI HUS151436VL
p27 OK - 34.18 GB SAS - /c2/e1/slt12 HITACHI HUS151436VL
p28 OK - 34.18 GB SAS - /c2/e1/slt13 HITACHI HUS151436VL
p29 OK - 34.18 GB SAS - /c2/e1/slt14 HITACHI HUS151436VL
p30 OK - 34.25 GB SAS - /c2/e0/slt15 MAXTOR ATLAS15K2_36
p31 OK - 34.18 GB SAS - /c2/e1/slt15 HITACHI HUS151436VL
/cx show firmware
This command reports the firmware version of controller /cx.
Example:
//localhost> /c0 show firmware
/c0 Firmware Version = FE9X 3.03.06.X03
/cx show memory
This command reports the available memory on the controller.
Note: Some memory is reserved for use by the controller, so the amount of
memory available will be less than the controller actually has installed. For
example, the 9690SA controller has 512MB of memory of which 448MB is
available.
Example:
//localhost> /c2 show memory
/c2 Available Memory = 448MB

/cx show model
This command reports the controller model of controller /cx.
Example:
//localhost> /c0 show model
/c0 Model = 9690SA-8E

/cx show monitor
This command reports the monitor (firmware boot-loader) version of
controller /cx.
Example:
//localhost> /c0 show monitor
/c0 Monitor Version = BLDR 2.x
/cx show numdrives
This command reports the number of drives currently managed by the
specified controller /cx. This report does not include (logically) removed or
exported drives.
On 9500S and earlier controllers, physically-removed disk(s) will still be
counted. For a workaround, see “/cx/px show smart” on page 107.
Example:
//localhost> /c0 show numdrives
/c0 Number of Drives = 5
/cx show numports
This command reports how many physical connections are made to the
controller and the total number of physical ports possible for the controller.
Example for a 9650SE-16ML with no drives attached:
//localhost> /c0 show numports
/c0 Number of Ports = 16
Example for 9690SA-8E with 8 dual-port SAS drives:
//localhost> /c3 show numports
/c3 Connections = 16 of 128
/cx show numunits
This command reports the number of units currently managed by the specified
controller /cx. This report does not include off-line units (or removed units).
Example:
//localhost> /c0 show numunits
/c0 Number of Units = 1
/cx show ondegrade
This feature only applies to 9500S controllers.
This command reports the write cache policy for degraded units. If the
ondegrade policy is “Follow Unit Policy,” a unit write cache policy stays the
same when the unit becomes degraded. If the ondegrade policy is off, a unit
write cache policy will be forced to “off” when the unit becomes degraded.
Example:
//localhost> /c0 show ondegrade
/c0 Cache on Degraded Policy = Follow Unit Policy
/cx show pcb
This command reports the PCB (Printed Circuit Board) version of the
specified controller /cx.
Example:
//localhost> /c0 show pcb
/c0 PCB Version = RevX
/cx show pchip
This command reports the PCHIP (PCI Interface Chip) version of the
specified controller /cx.
Example:
//localhost> /c0 show pchip
/c0 PCHIP Version = 1.x
/cx show serial
This command reports the serial number of the specified controller /cx.
Example:
//localhost> /c0 show serial
/c0 Serial Number = F12705A3240009

/cx show spinup
This feature only applies to 9000 series controllers.
This command reports the number of concurrent SAS and SATA disks that
will spin up when the system is powered on, after waiting for the number of
seconds specified with the set stagger command. Spinup does not work
with SAS or SATA disks attached to an expander.
Example:
//localhost> /c0 show spinup
/c0 Disk Spinup Policy = 1

/cx show stagger
This feature only applies to 9000 series controllers.
This command reports the time delay between each group of spinups at
power on. Spinup does not work with SAS or SATA disks attached to an
expander.
Example:
//localhost> /c0 show stagger
/c0 Spinup Stagger Time Policy (sec) = 2
/cx show unitstatus
This command presents a list of units currently managed by the specified
controller /cx, and shows their types, capacity, status, and unit policies.
Possible statuses include: OK, VERIFYING, VERIFY-PAUSED,
INITIALIZING, INIT-PAUSED, REBUILDING, REBUILD-PAUSED,
DEGRADED, MIGRATING, MIGRATE-PAUSED, RECOVERY,
INOPERABLE, and UNKNOWN. (Definitions of the unit statuses are
available in the 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.)
Example:
//localhost> /c2 show unitstatus
Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
------------------------------------------------------------------------
u0 RAID-5 OK - - 64K 596.004 ON OFF
u1 RAID-0 OK - - 64K 298.002 ON OFF
u2 SPARE OK - - - 149.042 - OFF
u3 RAID-1 OK - - - 149.001 ON OFF
Note: If an asterisk (*) appears next to the status of a unit, there is an error on one
of the drives in the unit. This feature provides a diagnostic capability for potential
problem drives. The error may not be a repeated error, and may be caused by an
ECC error, SMART failure, or a device error. Rescanning the controller will clear the
drive error status if the condition no longer exists.
/cx show all
This command shows the current setting of all of the following attributes on
the specified controller: achip, allunitstatus, autocarve, bios, driver,
drivestatus, firmware, memory, model, monitor, numports, numunits,
numdrives, ondegrade, pcb, pchip, serial, spinup, stagger, and unitstatus.
Example for 9650SE:
//localhost> /c2 show all
------------------------------------------------
/c2 Driver Version = 2.26.08.004-2.6.22
/c2 Model = 9650SE-16ML
/c2 Available Memory = 224MB
/c2 Firmware Version = FE9X 4.05.00.026
/c2 Bios Version = BE9X 4.05.00.013
/c2 Boot Loader Version = BL9X 3.08.00.001
/c2 Serial Number = L322623A7320106
/c2 PCB Version = Rev 032
/c2 PCHIP Version = 2.00
/c2 ACHIP Version = 1.90
/c2 Number of Ports = 16
/c2 Number of Drives = 7
/c2 Number of Units = 1
/c2 Total Optimal Units = 0
/c2 Not Optimal Units = 1
/c2 Disk Spinup Policy = 4
/c2 Spinup Stagger Time Policy (sec) = 1
/c2 Auto-Carving Policy = on
/c2 Auto-Carving Size = 4000 GB
/c2 Auto-Rebuild Policy = on
/c2 Controller Bus Type = PCIe
/c2 Controller Bus Width = 8 lanes
/c2 Controller Bus Speed = 2.5 Gbps/lane
Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
--------------------------------------------------------------------------
u0 RAID-5 REBUILD-PAUSED 0% - 64K 372.476 RiW ON
Name OnlineState BBUReady Status Volt Temp Hours LastCapTest
--------------------------------------------------------------------------
bbu On Yes OK OK OK 0 xx-xxx-xxxx
/cx show alarms [reverse]
Asynchronous event notifications (also referred to as AENs or controller
alarms) are originated by controller firmware or an SES attached enclosure
(9750, 9690SA, or 9650SE only) and captured by the 3ware device drivers.
These events reflect warnings, errors, and/or informative messages. These
events are kept in a finite queue inside the kernel, and can be listed by CLI
and 3DM 2. They are also stored in the operating system events log.
The /cx show alarms command displays all available events on a given
controller. The default is to display the events in ascending order—that is, the
oldest event messages appear at the top, and the most recent event messages
appear at the bottom. You can use the [reverse] attribute to display the most
recent event message at the top.
Events generated on 7000/8000 series controllers do not have dates, so you
will see a '-' in the Date column. This means that it is not applicable. In
addition, alarm messages on 7000/8000 controllers contain the severity in the
message text, so the Severity column also shows a '-'.
Example: Typical output looks like:
//localhost> /c1 show alarms
Ctl Date Severity AEN Message
--------------------------------------------------------------------------
c0 [Fri Mar 21 2008 14:19:00] WARNING Drive removed: port=1
c0 [Fri Mar 21 2008 14:19:00] ERROR Degraded unit: unit=1, port=1
c0 [Fri Mar 21 2008 14:19:25] INFO Drive inserted: port=1
c0 [Fri Mar 21 2008 14:19:25] INFO Unit operational: unit=1
c0 [Fri Mar 21 2008 14:28:18] INFO Migration started: unit=0
c0 [Sat Mar 22 2008 05:16:49] INFO Migration completed: unit=0
c0 [Tue Apr 01 2008 12:34:02] WARNING Drive removed: port=1
c0 [Tue Apr 01 2008 12:34:22] ERROR Unit inoperable: unit=1
c0 [Tue Apr 01 2008 12:34:23] INFO Drive inserted: port=1
c0 [Tue Apr 01 2008 12:34:23] INFO Unit operational: unit=1
/cx show events [reverse]
This command is the same as “/cx show alarms [reverse]”. See details above.
/cx show AENs [reverse]
This command is the same as “/cx show alarms [reverse]”. See details above.
/cx show diag
This command extracts controller diagnostics suitable for technical support
usage. Note that some characters might not be printable or human-readable.
It is recommended that you save the output from this command to
a file, where it can be communicated to technical support or further studied
with Linux utilities such as od(1).
In order to redirect the output you must run the following command from a
command line, not from within the tw_cli shell.
tw_cli /c0 show diag > diag.txt
/cx show phy
This command is only for 9750 and 9690SA controllers, and 9650SE with
Release 9.5.2 or higher controllers.
It reports a list of the phys with related information for the specified
controller. The 'Device Type' column indicates whether the connected device
is an enclosure, or a drive of type SATA or SAS. The 'Device' column is the
device ID or handle. There are three 'Link Speed' columns: 'Supported'
denotes the link speed capability of the phy/device, 'Enable' denotes the
current link speed setting, and 'Control' denotes the link control setting.
Example of 9690SA-8E connected to drives in an enclosure:
// localhost> /c3 show phy
                             Device           --- Link Speed (Gbps) ---
Phy  SAS Address             Type   Device    Supported  Enabled  Control
-------------------------------------------------------------------------
phy0 500050e000030232 ENCL N/A 1.5-3.0 3.0 Auto
phy1 500050e000030232 ENCL N/A 1.5-3.0 3.0 Auto
phy2 500050e000030232 ENCL N/A 1.5-3.0 3.0 Auto
phy3 500050e000030232 ENCL N/A 1.5-3.0 3.0 Auto
phy4 500050e000030236 ENCL N/A 1.5-3.0 3.0 Auto
phy5 500050e000030236 ENCL N/A 1.5-3.0 3.0 Auto
phy6 500050e000030236 ENCL N/A 1.5-3.0 3.0 Auto
phy7 500050e000030236 ENCL N/A 1.5-3.0 3.0 Auto
In the above example, for phy1, the link speeds supported are 1.5 and 3.0
Gbps. The current link speed for phy1 is 3.0 Gbps, and the link control setting
is 'Auto'. The link control setting could be either 1.5, 3.0, or Auto. 'Auto'
denotes Automatic Negotiation, where the best negotiated speed possible for
that link will be used.
(Note that if SAS 2.0 is used with a 9750 controller, the link speeds can be up
to 6.0 Gbps.)
Example of 9690SA-8I with direct attached drives:
//localhost> /c3 show phy
                             Device           --- Link Speed (Gbps) ---
Phy  SAS Address             Type   Device    Supported  Enabled  Limit
-------------------------------------------------------------------------
phy0 500050e000000002 SATA /c3/p0 1.5-3.0 3.0 Auto
phy1 500050e000000002 SATA /c3/p1 1.5-3.0 3.0 Auto
phy2 500050e000000002 SATA /c3/p2 1.5-3.0 3.0 Auto
phy3 500050e000000002 SATA /c3/p3 1.5-3.0 3.0 Auto
phy4 - - - 1.5-3.0 - Auto
phy5 - - - 1.5-3.0 - Auto
phy6 500050e000000006 SAS /c3/p6 1.5-3.0 3.0 Auto
phy7 - - - 1.5-3.0 - Auto
/cx show rebuild
9000 series controllers support background tasks and allow you to schedule a
regular time when they occur.
Rebuild is one of the supported background tasks. Migrate and initialize are
other background tasks that follow the same schedule as rebuild. Other
background tasks for which there are separate schedules are verify and
selftest. For each background task, up to 7 time periods can be registered,
known as slots 1 through 7. Each task schedule can be managed by a set of
commands including add, del, show and set a task. Background task
schedules have a slot id, start-day-time, duration, and status attributes.
For details about setting up a schedule for background rebuild tasks, see
“Setting Up a Rebuild Schedule” on page 65.
Rebuild activity attempts to (re)synchronize all members of redundant units
such as RAID-1, RAID-10, RAID-5, RAID-6, and RAID-50. Rebuild can be
started manually or automatically if a spare has been defined. Scheduled
rebuilds will take place during the scheduled time slots, if the schedules are
enabled. For in-depth information about rebuild and other
background tasks, see “About Background Tasks” in the 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.
The show rebuild command displays the current rebuild background task
schedule as illustrated below.
//localhost> /c1 show rebuild
Rebuild Schedule for Controller /c1
========================================================
Slot Day Hour Duration Status
A status of “disabled” indicates that the task schedule is disabled. In this case,
the controller will not use the defined schedule timeslots. If the rebuild
command is entered manually, rebuilding will start within 10 to 15 minutes. It
will begin automatically if a rebuild is needed and a proper spare drive is set
up.
If the rebuild schedule is enabled while a rebuild process is underway, the
rebuild will pause until a scheduled time slot.
Example for 9650SE controller:
If a unit is in the initialization state at noon on Wednesday and the rebuild
schedule shown above is in use (with schedules disabled), you would see the
following status using the show command:
$ tw_cli /c1 show
Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
-------------------------------------------------------------------------
u0 RAID-5 INITIALIZING 0 - 64K 521.466 RiW OFF
Name OnlineState BBUReady Status Volt Temp Hours LastCapTest
-------------------------------------------------------------------------
bbu On Yes OK OK OK 0 xx-xxx-xxxx
If you then enable the rebuild schedules, the unit initialization will be paused
until the next scheduled time slot, as reflected in the examples below:
//localhost> /c1 set rebuild=enable
Enabling scheduled rebuilds on controller /c1 ...Done.
//localhost> /c1 show rebuild
Rebuild Schedule for Controller /c1
========================================================
Slot Day Hour Duration Status
p6 NOT-PRESENT - - - -
p7 OK u0 76.33 GB 160086528 Y2NXM4VE
p8 OK u0 74.53 GB 156301488 3JV3WTSE
p9 OK u0 74.53 GB 156301488 3JV3WRHC
p10 OK u0 74.53 GB 156301488 3JV3WQLQ
p11 OK u0 74.53 GB 156301488 3JV3WQLZ
Name OnlineState BBUReady Status Volt Temp Hours LastCapTest
-------------------------------------------------------------------------- bbu On Yes OK OK OK 0 xx-xxx-xxxx
/cx show rebuildmode
This command is only supported on 9750, 9690SA, and 9650SE controllers.
This command shows the current rebuild mode setting of the specified
controller. The rebuild mode has two settings: Adaptive and Low latency.
Rebuild mode works in conjunction with the rebuild task rate (see “/cx show
rebuildrate” on page 51).
The Adaptive setting is the default rebuild mode. It allows the firmware to
adjust the interaction of rebuild tasks with host I/Os to maximize the speed of
both host I/O and rebuild tasks. The Low Latency setting minimizes latency
(delay) in reading data from a RAID unit by slowing down the rebuild task
process. For some applications, such as video server applications and audio
applications, it is important to minimize the latency of read commands, so that
users do not perceive a lag when viewing video or listening to audio.
For a more complete discussion of background task modes, see “Working
with the Background Task Mode” in the 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.
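For illustration, querying the current setting might look like the following.
The command is as documented above; the exact wording of the output line is
an assumed format and may differ slightly between CLI releases.
//localhost> /c0 show rebuildmode
/c0 Rebuild Mode = Adaptive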
Related commands:
/cx set rebuildmode=<adaptive|lowlatency>
/cx set rebuildrate=<1...5>
/cx show rebuildrate

/cx show rebuildrate
This command shows the current rebuild task rate of the specified controller.
The rebuild task rate sets the rebuild execution priority relative to I/O
operations.
The task rate is in the range [1..5], where 5 denotes the fastest
background task and slowest I/O, as follows:
5 = fastest rebuild; slowest I/O
4 = faster rebuild; slower I/O
3 = balanced between rebuild and I/O
2 = faster I/O; slower rebuild
1 = fastest I/O; slowest rebuild
This command applies to 7000-, 8000-, and 9000-series controllers.
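For illustration, a query might look like the following; the label in the
output line is an assumed format and may vary by release.
//localhost> /c0 show rebuildrate
/c0 Rebuild Rate = 2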
Related commands:
/cx set rebuildmode=<adaptive|lowlatency>
/cx set rebuildrate=<1...5>
/cx show rebuildmode
/cx show selftest
9000 series controllers support background tasks and allow you to schedule a
regular time when they occur.
Selftest is one of the supported background tasks. Rebuild and verify are other
background tasks for which there are separate schedules. Migrate and
initialize are additional background tasks that follow the same schedule as
rebuild. For each background task, up to 7 time periods can be registered,
known as slots 1 through 7. Each task schedule can be managed by a set of
commands including add, del, show and set a task. Background task
schedules have a slot id, start-day-time, duration, and status attributes.
For details about setting up a schedule for background selftest tasks, see
“Setting Up a Selftest Schedule” on page 67.
Selftest activity provides two types of selftests: UDMA (Ultra Direct
Memory Access) and SMART (Self-Monitoring, Analysis and Reporting Technology).
Both selftests are checked once each day by default.
Note: UDMA mode is applicable only for PATA (parallel ATA) drives on earlier
3ware controllers. It is not applicable for SATA or SAS drives.
The UDMA selftest checks the current ATA bus speed (between the
controller and the attached disk), which could have been throttled down during
previous operations, and increases the speed for best performance (usually one
level higher). Possible speeds include 33, 66, 100, and 133 MHz.
SMART activity instructs the controller to check certain SMART thresholds
supported by the disk vendor. An AEN is logged to the alarms page if a drive
reports a SMART failure.
The show selftest command displays the current selftest background task
schedule as illustrated below. Selftests do not have a time duration since they
are completed momentarily.
//localhost> /c1 show selftest
Selftest Schedule for Controller /c1
========================================================
Slot Day Hour UDMA SMART
/cx show verify
9000 series controllers support background tasks and allow you to schedule a
regular time when they occur.
Verify is one of the supported background tasks, and show verify shows you
the current verify schedule.
For 9750, 9690SA, and 9650SE RAID controllers, the Verify Task Schedule
can be either “basic” or “advanced.” (For details about the associated
commands, see “/cx set verify=advanced|basic|1..5” on page 72).
The basic Verify Task Schedule sets a weekly day and time for verification to
occur, and is designed to be used with the auto-verification of units.
The advanced Verify Task Schedule provides more control, and is equivalent
to the Verify Task Schedule available for 9550SX and earlier 9000 RAID
controllers.
For the advanced Verify Task Schedule, up to 7 time periods can be registered,
known as slots 1 through 7. This task schedule can be managed by a set of
commands including add, del, show and set a task. The task schedule has a
slot id, start-day-time, duration, and status attributes. Rebuilds, migrations,
and initializations follow similar background task schedules.
For details about setting up a schedule for verify tasks, see “Setting Up a
Verify Schedule” on page 66.
Verify activity verifies all units based on their unit type. Verifying RAID 1
involves checking that both drives contain the same data. On RAID 5 and
RAID 6, the parity information is used to verify data integrity. RAID 10 and
50 are composite types and follow their respective array types. On 9000
series, non-redundant units such as RAID 0, single, and spare, are also
verified (by reading and reporting un-readable sectors). If any parity
mismatches are found, the array will be automatically background initialized.
(For information about the initialization process, see the user guide that came
with your 3ware RAID controller.)
Example 1: Advanced Verify Schedule
For 9550SX and earlier controllers, and when verify=advanced for 9750
controllers, and for 9690SA and 9650SE controllers running 9.5.1 or later, the
show verify command displays the current verify background task schedule
as illustrated below.
//localhost> /c1 show verify
Verify Schedule for Controller /c1
========================================================
Slot Day Hour Duration Status
A status of “disabled” indicates that the controller will not use the defined
schedule timeslots and will start verifying within 10 to 15 minutes, if the
verify command is entered manually, or it will begin automatically if the
autoverify option is set. Rebuilds, migrations, and initializations will take
priority over verifies.
Example 2: Basic Verify Schedule
For 9750, 9690SA, and 9650SE controllers, if the “basic” Verify Task
Schedule is selected, the show verify command displays a schedule as
illustrated below:
/cx show verifymode
This command is only supported on 9750, 9690SA, and 9650SE controllers.
This command shows the current verify mode setting of the specified
controller. The verify mode has two settings: Adaptive and Low Latency.
Verify mode works in conjunction with the verify task rate (see “/cx show
verifyrate” on page 55).
The Adaptive setting is the default verify mode. It allows the firmware to
adjust the interaction of verify tasks with host I/Os to maximize the speed of
both host I/O and verify tasks. The Low Latency setting minimizes latency
(delay) in reading data from a RAID unit by slowing down the verify task
process. For some applications, such as video server applications and audio
applications, it is important to minimize the latency of read commands, so that
users do not perceive a lag when viewing video or listening to audio.
For a more complete discussion of background task modes, see “Working
with the Background Task Mode” in the 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.
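For illustration, a query might look like the following; the output wording is
an assumed format and may vary by release.
//localhost> /c0 show verifymode
/c0 Verify Mode = Adaptive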
Related commands:
/cx set verifymode=<adaptive|lowlatency>
/cx set verifyrate=<1..5>
/cx show verifyrate

/cx show verifyrate
This command shows the current verify task rate of the specified controller.
The verify task rate sets the verify execution priority relative to I/O
operations.
The task rate is in the range [1..5], where 5 denotes the fastest
background task and slowest I/O, as follows:
5 = fastest verify; slowest I/O
4 = faster verify; slower I/O
3 = balanced between verify and I/O
2 = faster I/O; slower verify
1 = fastest I/O; slowest verify
This command applies to 7000-, 8000-, and 9000-series controllers.
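For illustration, a query might look like the following; the output wording is
an assumed format and may vary by release.
//localhost> /c0 show verifyrate
/c0 Verify Rate = 3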
/cx add type=<RaidType> disk=<p:-p> [stripe=size] [group=3..16] [noscan]
[nocache|nowrcache] [nordcache|rdcachebasic] [autoverify] [noqpolicy]
[ignoreECC] [name=string] [storsave=protect|balance|perform]
[rapidrecovery=all|rebuild|disable] [v0=n|vol=a:b:c:d]
This command allows you to create a new unit on the specified controller. You
specify type, disks, and optional stripe size. By default the host operating
system will be informed of the new block device, write cache will be enabled,
Intelligent read cache will be enabled, a storsave policy of balance will be set,
a rapid raid recovery policy of All will be set, and the drive queuing policy is
enabled. In case of RAID 50, you can also specify the layout of the unit by
specifying the number of disks per disk group with the group attribute.
Note: By default, write cache is enabled. However, if the controller does not have a
BBU installed, a message will warn you that you could lose data in the event of a
power failure.
Enabling write cache will improve write performance greatly, but you are at risk of
losing data if a power failure occurs when data is still in the cache. You may want to
obtain a BBU and UPS to safeguard against power loss.
/cx is the controller name, for example /c0, /c1, and so forth.
type=RaidType specifies the type of RAID unit to be created. Possible unit
types include raid0, raid1, raid5, raid6 (9650SE and higher only), raid10,
raid50, single, and spare.
Example: type=raid5
When a new unit is created, it is automatically assigned a unique serial
number. In addition, users can assign the unit a name.
Note: The unit’s serial number cannot be changed.
The following table shows supported types and controller models.
disk=p:-p consists of a list of ports (disks) to be used in the construction of
the specified unit type. One or more ports can be specified. Multiple ports can
be specified using a colon (:) or a dash (-) as port index separators. A dash
indicates a range and can be mixed with colons. For example,
disk=0:1:2-5:9:12
indicates ports 0, 1, 2 through 5 (inclusive), 9, and 12.
If you have a 9750 or 9690SA controller, the syntax is the same even though
you are technically addressing vports.
stripe=size consists of the stripe size to be used. The following table
illustrates the supported and applicable stripes on unit types and controller
models. Stripe size units are in K (kilobytes). If no stripe size is specified,
256K is used by default, if applicable. If you need to change the stripe size
after the unit is created, you can do so by migrating the unit.
group=3|4|5|6|7|8|9|10|11|12|13|14|15|16 indicates the number of disks per
group for a RAID 50 type. (This attribute can only be used when
type=raid50.) Group=13-16 is only applicable to 9690SA.
Recall that a RAID 50 is a multi-tier array. At the bottom-most layer, N
number of disks per group are used to form the RAID 5 layer. These RAID 5
arrays are then integrated into a RAID 0. This attribute allows you to specify
the number of disks in the RAID 5 level. Valid values are 3 through 16.
However, no more than 4 RAID 5 subunits are allowed in a RAID 50 unit.
Note that a sufficient number of disks are required for a given pattern or disk
group. For example, given 6 disks, specifying 3 will create two RAID 5
arrays. With 12 disks, specifying 3 will create four RAID 5 arrays under the
RAID 0 level. With only 6 disks a grouping of 6 is not allowed, as you would
basically be creating a RAID 5.
The default RAID 50 grouping varies, based on number of disks. For 6 and 9
disks, default grouping is 3. For 8 disks, the default grouping is 4. For 10
disks, the default grouping is 5, and for 12 disks, the disks can be grouped into
groups of 3, 4, or 6 drives (the group of 4 drives is set by default as it provides
the best balance of net capacity and performance). For 15 disks, the disks can be
grouped into groups of 5 drives (3-drive groups would make 5 subunits, and you
can have a maximum of 4 subunits). For 16 disks, the disks can be grouped into groups
of 4 or 8 drives.
Note that the indicated group number that is supported depends on the
number of ports on the controller. group=16 is the maximum and it is
available on the 9690SA.
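As an illustration of the group attribute (using hypothetical port numbers), the
following command creates a RAID 50 unit from twelve drives arranged as three
RAID 5 subunits of four drives each; the confirmation messages resemble those in
the unit-creation examples later in this section.
//localhost> /c0 add type=raid50 disk=0-11 group=4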
noscan attribute instructs CLI not to notify the operating system of the
creation of the new unit. By default CLI will inform the operating system.
One application of this feature is to prevent the operating system from
creating block special devices such as /dev/sdb and /dev/sdc, since on some
implementations the device naming can become fragmented and behave as a moving target.
nocache or nowrcache attribute instructs CLI to disable the write cache on
the newly created unit. Enabling write cache increases write performance at
the cost of potential data loss in case of sudden power loss (unless a BBU or
UPS is installed). By default the write cache is enabled. To avoid the
possibility of data loss in the event of a sudden power loss, it is recommended
to set nocache or nowrcache unless there is a BBU (battery backup unit) or
UPS (uninterruptible power supply) installed.
nordcache attribute instructs CLI to disable the read cache on the newly
created unit. Enabling the read cache increases performance. The
rdcachebasic attribute instructs CLI to set the read cache mode on the newly
created unit to Basic. By default (if you do not set a read cache attribute), the
read cache mode is set to Intelligent. This command is supported on 9650SE
and later controllers. For more information, see “/cx/ux set
rdcache=basic|intelligent|off” on page 92.
autoverify attribute enables the autoverify attribute on the unit that is to be
created. For more details on this feature, see “/cx/ux set autoverify=on|off” on
page 90. This feature is not supported on model 7000/8000. For 9750,
9690SA, and 9650SE controllers that support basic verify, autoverify will be
set to ON by default for a new unit. For other 9000-series controllers that do
not support basic verify, autoverify is set to OFF by default for a new unit.
noqpolicy attribute instructs CLI to disable the qpolicy (drive queuing for
SATA drives only) on the newly created unit. The default is for the qpolicy to
be on (in other words, noqpolicy is not specified). For a spare unit, drive
queuing is not meaningful, so the noqpolicy cannot be set. During unit
creation, specifying noqpolicy for a spare returns an error. (If the spare unit
becomes a true unit, it will adopt the qpolicy of the “new” unit.) For more
about drive queuing, see “/cx/ux show qpolicy” on page 84 and “/cx/ux set
qpolicy=on|off” on page 94.
ignoreECC attribute enables the ignoreECC/OverwriteECC attribute on the
unit that is to be created. For more details on this feature, see “/cx/ux set
ignoreECC=on|off” on page 94. The following table illustrates the supported
Model-Unit Types. This table only applies to setting this feature at unit
creation time. IgnoreECC only applies to redundant units.
For the 7/8000
series, this setting is only applicable during rebuild; it is not applicable
during creation.
Table 8: Supported Model-Unit Types for ignoreECC

Model                       R-0  R-1  R-5  R-6  R-10  R-50  Single  Spare
7K/8K                       No   No   No   N/A  No    No    No      No
9000 (a)                    No   Yes  Yes  N/A  Yes   Yes   No      No
9750, 9690SA, and 9650SE    No   Yes  Yes  Yes  Yes   Yes   No      No

a. Models 9500S, 9550SX, and 9590SE
name=string attribute allows you to name the new unit. (This feature is for
9000 series and above controllers.) The string can be up to 21 characters and
cannot contain spaces. In order to use reserved characters (‘<‘, ‘>’, ‘!’, ‘&’,
etc.) put double quotes (" ") around the name string. The name can be changed
after the unit has been created. For more information, see “/cx/ux set name=string” on page 94 and “/cx/ux show name” on page 83.
storsave=protect|balance|perform attribute allows the user to set the storsave
policy of the new unit. This feature is only for 9000 series SX/SE/SA
controllers. For more information, see “/cx/ux set
storsave=protect|balance|perform [quiet]” on page 95.
rapidrecovery=all|rebuild|disable attribute specifies the Rapid RAID
Recovery setting for the unit being created. Rapid Raid Recovery can speed
up the rebuild process, and it can speed up initialize and verify tasks that may
occur in response to an unclean system shutdown. Setting this option to all
applies this policy to both these situations. Setting it to rebuild applies it only
to rebuild tasks. If the policy is set to disable, then none of the tasks will be
sped up.
Notes: Once the rapidrecovery policy has been disabled for a unit, it cannot be
changed again. Disabling this policy is required if you want to move a unit to a
controller that has firmware earlier than 9.5.1.
There is some system overhead from setting rapidrecovery to all. If you have a
BBU, you can set rapid recovery to rebuild, as a BBU provides protection against
data loss in the event of an unclean shutdown.
This attribute is only for redundant units created on controller models 9750 and
9690SA controllers, and 9650SE controllers with the 9.5.1 firmware or later.
Rapid RAID Recovery is not supported over migration.
v0=n or vol=a:b:c:d may be used to divide the unit up into multiple volumes.
v0=n can be used if you only want two volumes, in which case v0=n is used
to define the size of the first volume, and the second volume will use the
remaining space. One way in which this can be useful is if you want to create
a special volume to function as a boot volume, with a separate volume for
data.
vol=a:b:c:d can be used to specify sizes for up to four volumes.
The value(s) should be positive integer(s) in units of gigabytes (GB), with a
maximum of 32 TB. If you specify a size that exceeds the size of the unit, the
volume will be left “uncarved.”
Both v0=n or vol=a:b:c:d work in conjunction with auto-carving, if that
feature is enabled. When auto-carving is used, v0=n and vol=a:b:c:d are
used to specify the size of the first few volumes, after which the auto-carve
size is used for additional volumes. (For more about auto-carving, see “/cx set
autocarve=on|off” on page 75 and “/cx set carvesize=[1024..32768]” on
page 76.)
Notes: If the total size of the specified volumes (up to 4) exceeds the size of the
array, the volume(s) that exceeded the array’s size boundary will not be carved.
Example of RAID 5 unit created with first volume set to 10 GB:
//localhost> /c0 add type=raid5 disk=2-5 v0=10
Creating new unit on Controller /c0 ... Done. The new unit is /c0/u0.
Setting write cache=ON for the new unit ... Done.
Setting default Command Queuing Policy for unit /c0/u0 to [on]
... Done.
After the unit creation, a subsequent show command for the unit shows the
volume size(s):
//localhost> /c0/u0 show
Unit UnitType Status %RCmpl %V/I/M VPort Stripe Size(GB)
-------------------------------------------------------------
u0 RAID-5 OK - - - 256K 1117.56
u0-0 DISK OK - - p2 - 372.519
u0-1 DISK OK - - p3 - 372.519
u0-2 DISK OK - - p4 - 372.519
u0-3 DISK OK - - p5 - 372.519
u0/v0 Volume - - - - - 10
u0/v1 Volume - - - - - 1107.56
Example of RAID 0 unit created with volume sizes set to 2000, 500, 1024,
and 700 GB:
The example below combines auto-carving and vol=a:b:c:d. Notice that the
last volume (u0/v6) is odd-sized (247.188 GB).
Volumes 0 through 3 are carved using the first four sizes as specified.
Volumes 4 and 5 are the auto-carved volumes (1024 GB each). Volume 6 is
the remainder of the carve size.
//localhost> /c2 add type=raid0 disk=0:1:2:4:5:6:7 vol=2000:500:1024:700
Creating new unit on controller /c2 ... Done. The new unit is /c2/u0.
Setting default Command Queuing Policy for unit /c2/u0 to [on]
... Done.
Setting write cache=ON for the new unit ... Done.
After the unit creation, a subsequent show command for the unit shows the
volume sizes:
//localhost> /c2/u0 show
Unit UnitType Status %RCmpl %V/I/M Port Stripe Size(GB)
-----------------------------------------------------------------------
u0 RAID-0 OK - - - 256K 6519.19
u0-0 DISK OK - - p0 - 931.312
u0-1 DISK OK - - p1 - 931.312
u0-2 DISK OK - - p2 - 931.312
u0-3 DISK OK - - p4 - 931.312
u0-4 DISK OK - - p5 - 931.312
u0-5 DISK OK - - p6 - 931.312
u0-6 DISK OK - - p7 - 931.312
u0/v0 Volume - - - - - 2000
u0/v1 Volume - - - - - 500
u0/v2 Volume - - - - - 1024
u0/v3 Volume - - - - - 700
u0/v4 Volume - - - - - 1024
u0/v5 Volume - - - - - 1024
u0/v6 Volume - - - - - 247.188
/cx rescan [noscan]
This command instructs the controller to rescan all ports, vports, and phys and
reconstitute all units. The controller will update its list of disks, and attempts
to read every DCB (Disk Configuration Block) in order to re-assemble its
view and awareness of logical units. Any newly found unit(s) or drive(s) will
be listed.
noscan is used to not inform the operating system of the unit discovery. The
default is to inform the operating system.
Note: If you are adding new drives, add them physically before issuing the rescan
command. Hot swap bays are required unless you first power down the system, to
prevent system hangs and electrical damage.
Example:
//localhost> /c1 rescan
Rescanning controller /c1 for units and drives ...Done
Found following unit(s): [/c1/u3]
Found following drive(s): [/c1/p7, /c1/p8]
/cx commit
This command only applies to the Windows operating system. It commits all
changes if a faster shutdown method is needed when running certain database
applications. Linux and FreeBSD file systems do not require this command
since they have their own ways of notifying the controller to do clean up for
shut down.

/cx flush
This command forces the controller to write all cached data to disk for the
specified controller.
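For example, to force controller 0 to write all cached data to disk, you might
enter the following (the confirmation text printed by the CLI is not reproduced
here):
//localhost> /c0 flush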
/cx update fw=filename_with_path [force]
This command is only for 9750 and 9000 series controllers.
This command allows the downloading of the specified firmware image to the
corresponding controller.
Note: Before issuing this command, you must have already obtained the
firmware image and placed it on your system. You can obtain the firmware
image from the LSI website: http://www.lsi.com/channel/ChannelDownloads.
Important: Before you update the firmware on your controller, please follow these
recommendations:
1) Back up your data. Updating the firmware can render the device driver and/or
management tools incompatible.
2) Make sure you have a copy of the current firmware image so that you can roll
back to it, if required.
3) Close all applications before beginning the update of the firmware.
fw=filename_with_path attribute allows you to specify the firmware image
file name along with its absolute path.
Note: filename_with_path must not have spaces in the directory names of its
path (as Windows allows).
The new image specified by this filename_with_path is checked for
compatibility with the current controller, current driver, and current
application versions. A recommendation is then made as to whether an update
is needed, and you are asked to confirm whether you want to continue. If you
confirm that you want to continue, the new firmware image is downloaded to
the specified controller.
A reboot is required for the new firmware image to take effect.
Note: The prom image number will vary for different controllers. Prom0006.img is
for the 9650SE, prom0008.img is for the 9690SA, and prom0011.img is for the
9750.
Example:
//localhost> /c0 update fw=/tmp/prom0006.img
Warning: Updating the firmware can render the device driver and/or management
tools incompatible. Before you update the firmware, it is recommended that
you:
1) Back up your data.
2) Make sure you have a copy of the current firmware image so that you can
roll back, if necessary.
3) Close all applications.
Examining compatibility data from firmware image and /c0 ...
Done.
New-Firmware Current-Firmware Current-Driver Current-API
Current firmware version is the same as the new firmware.
Recommendation: No need to update.
Given the above recommendation...
Do you want to continue ? Y|N [N]: y
Downloading the firmware from file /tmp/prom0006.img ... Done.
The new image will take effect after reboot.
force attribute is optional. If you include it, the compatibility checks are
bypassed.
/cx add rebuild=ddd:hh:duration
This command adds a new task slot to the Rebuild Task Schedule on the day
ddd (where ddd is Sun, Mon, Tue, Wed, Thu, Fri, and Sat), at the hour hh
(range 0 .. 23), for a duration of duration (range 1 .. 24) hours. A maximum of
seven rebuild task slots can be included in the schedule. This command will
fail if no (empty) task slot is available.
Example:
//localhost> /c1 add rebuild=Sun:16:3
Adding scheduled rebuild to slot 7 for [Sun, 4:00PM, 3hr(s)] ... Done
In this example, a rebuild task slot is added to the schedule, so that rebuilds
can be executed on Sundays at 16 hours (4:00 PM) for a duration of 3 hours.
Setting Up a Rebuild Schedule
Setting up a rebuild schedule requires several steps, and several different CLI
commands in addition to /cx add rebuild.
To set up the rebuild schedule you want to use, follow this
process:
1. Use the /cx show rebuild command to display the current schedule for
rebuild tasks. (For details, see page 49.)
2. If any of the scheduled tasks do not match your desired schedule, use the
/cx del rebuild command to remove them. (For details, see page 67.)
3. Use the /cx add rebuild command to create the rebuild schedule slots you
want (described above).
4. Use the /cx set rebuild=enable command to enable the schedule (this
enables all rebuild schedule slots). (For details, see page 69.)
Warning: If all time slots are removed from the rebuild task schedule, be sure to
also disable the schedule. Otherwise the rebuild task will never occur.
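As a sketch of this process, the following sequence uses an assumed slot number
and time window to review the schedule, remove an unwanted slot, add a Sunday
4:00 AM slot of 4 hours, and enable the schedule:
//localhost> /c1 show rebuild
//localhost> /c1 del rebuild=1
//localhost> /c1 add rebuild=Sun:4:4
//localhost> /c1 set rebuild=enable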
/cx add verify=ddd:hh:duration
This command adds a new task slot to the Verify Task Schedule on the day
ddd (where ddd is Sun, Mon, Tue, Wed, Thu, Fri, and Sat), at hour hh
(range 0 .. 23), for a duration of duration (range 1 .. 24) hours. A maximum of
seven verify task slots can be included in the schedule. This command will
fail if no (empty) task slot is available.
Note: This Verify Task Schedule is used when /cx set verify=advanced, for 9750
controllers, and 9690SA and 9650SE controllers running firmware 9.5.1 or later,
and for earlier controllers when /cx set verify=enabled.
If you have a 9750 controller, or a 9690SA or 9650SE controller running firmware
9.5.1 or later, and would prefer a simpler verification schedule, consider using the /cx set verify=basic command to specify a weekly day and time and make sure
that the auto-verify policy is set for your RAID units. For more information, see “/cx
set verify=basic [pref=ddd:hh]” on page 72.
Example:
//localhost> /c1 add verify=Sun:16:3
Adding scheduled verify to slot 3 for [Sun, 4:00PM, 3hr(s)] ... Done.
In this example, a verify task slot is added to the schedule so that verifies can
be executed on Sundays at 16 hours (4:00 PM) for a duration of 3 hours.
Setting Up a Verify Schedule
Setting up a verify schedule requires several steps, and several different CLI
commands in addition to /cx add verify.
To set up the verify schedule you want to use, follow this
process:
1. Use the /cx show verify command to display the current schedule for
verify tasks. (For details, see page 53.)
2. If any of the scheduled tasks do not match your desired schedule, use
the /cx del verify command to remove them. (For details, see page 68.)
3. Use the /cx add verify command to create the verify schedule slots you
want (described above).
4. Use the /cx set verify=enable command or the /cx set verify=advanced
command to enable the schedule (this enables all verify schedule slots). (For
details, see page 71.)
5. Use the /cx/ux set autoverify=on command to turn on autoverify for each
unit you want to follow the schedule. (For details, see page 90.)
Note: If you do not enable autoverify for units or start a verification manually, no
verifies will run during your verify task schedule, even if the verify schedule is
enabled with the /cx set verify=enable command or the /cx set verify=advanced.
Warning: If all time slots are removed from the verify task schedule, be sure to
also disable the schedule. Otherwise verify tasks will never occur.
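As a sketch of this process, the following sequence uses assumed slot, unit, and
time values to review the schedule, remove an unwanted slot, add a Saturday
1:00 AM slot of 6 hours, enable the advanced schedule, and turn on autoverify for
unit 0:
//localhost> /c1 show verify
//localhost> /c1 del verify=2
//localhost> /c1 add verify=Sat:1:6
//localhost> /c1 set verify=advanced
//localhost> /c1/u0 set autoverify=on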
/cx add selftest=ddd:hh
This command adds a new task slot to the Selftest Task Schedule on the day
ddd (where ddd is Sun, Mon, Tue, Wed, Thu, Fri, and Sat), at hour hh
(range 0 .. 23). Notice that selftest runs to completion and as such no duration
is provided. A maximum of seven selftest task slots can be included in the
schedule. This command will fail if no (empty) task slot is available.
In order to run at the specified times, selftests must be enabled, using the
command “/cx set selftest=enable|disable [task=UDMA|SMART]” on
page 74.
Note: Adding self tests to the schedule is different from adding slots to the
rebuild and verify schedules. Adding a self-test directly schedules the test, as
well as defining a time slot during which the task can occur.
Example:
//localhost> /c1 add selftest=Sun:16
Adding scheduled verify to slot 7 for [Sun, 4:00PM] ... Done.
In this example, a selftest background task is scheduled to be executed on
Sundays at 16 hours (4:00 PM).
Setting Up a Selftest Schedule
Setting up a selftest schedule requires several steps, and several different CLI
commands in addition to /cx add selftest.
To set up the selftest schedule you want to use, follow this
process:
1. Use the /cx show selftest command to display the current schedule for
selftest tasks. (For details, see page 64.)
2. If any of the scheduled tasks do not match your desired schedule, use
the /cx del selftest command to remove them. (For details, see page 68.)
3. Use the /cx add selftest command to create the selftest schedule slots you
want (described above).
4. Use the /cx set selftest=enable command to enable the schedule (this
enables all selftest schedule slots). (For details, see page 74.)
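As a sketch of this process, the following sequence uses assumed slot and time
values to review the schedule, remove an unwanted slot, add a Monday 2:00 AM
slot, and enable selftests:
//localhost> /c1 show selftest
//localhost> /c1 del selftest=2
//localhost> /c1 add selftest=Mon:2
//localhost> /c1 set selftest=enable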
/cx del rebuild=slot_id
This command removes the rebuild background task slot slot_id from the
Rebuild Task Schedule.
Example:
//localhost> /c1 del rebuild=2
removes the rebuild background task in slot 2.
Warning: If all time slots are removed, be sure to also disable the schedule.
Otherwise rebuilds will never occur.
/cx del verify=slot_id
This command removes the verify background task slot slot_id from the
Verify Task Schedule.
Example:
//localhost> /c1 del verify=3
removes verify background task in slot 3.
Warning: If all time slots are removed, be sure to also disable the schedule.
Otherwise the verification tasks will never occur.
/cx del selftest=slot_id
This command removes (or unregisters) the selftest background task slot
slot_id from the Self Test Task Schedule.
Example:
//localhost> /c1 del selftest=3
removes the selftest background task in slot 3.
Warning: If all time slots are removed, be sure to also disable the schedule.
Otherwise the selftest background task will never occur.
/cx set dpmstat=on|off
This command applies only to 9000 series SX/SE/SA controllers.
This command allows you to enable or disable the Drive Performance
Monitor (DPM).
By setting dpmstat to on you can enable the gathering of statistics for drives
when I/O is running. These statistics can be helpful when troubleshooting
performance problems.
You can see whether the Performance Monitor is currently running and
display a statistics summary by using the command “/cx show dpmstat
[type=inst|ra|ext]” on page 38.
For a description of each of the statistics that can be gathered and viewed, see
“/cx/px show dpmstat type=inst|ra|lct|histdata|ext” on page 110.
DPM is disabled by default since there is overhead in maintaining the
statistics. DPM is also disabled following a reboot or power-on.
Note that turning off DPM does not clear the statistical data that has been
recorded. To clear the data, use the command “/cx/px set dpmstat=clear
[type=ra|lct|ext]” on page 114.
For more information, see “Drive Performance Monitoring” on page 241 of
the 3ware SATA+SAS RAID Controller Card Software User Guide, Version
10.0.
Example:
//localhost> /c0 set dpmstat=off
Setting Drive Performance Monitoring on /c0 to [off]... Done.
/cx set rebuild=enable|disable|1..5
This command enables or disables the Rebuild Task Schedule defined for
controller /cx and sets the priority of rebuild versus I/O operations. When
enabled, rebuild tasks will only be run during the time slots scheduled for
rebuilds. If a rebuild is taking place when the schedule is enabled, it will be
paused until the next scheduled time.
The priority of rebuild versus I/O operations is specified with 1..5, where 5
assigns the most resources to rebuilds and 1 the least. Setting the value to 5
gives maximum processing time to rebuilds rather than I/O; setting the value
to 1 gives maximum processing time to I/O rather than rebuilds.
5 = fastest rebuild; slowest I/O
4 = faster rebuild; slower I/O
3 = balanced between rebuild and I/O
2 = faster I/O; slower rebuild
1 = fastest I/O; slowest rebuild
Enabling and disabling rebuild schedules applies only to 9000 series
controllers; however, the rebuild rate (1..5) applies to all controllers.
7000- and 8000-series controllers have only one setting for Task Rate; it
applies to both rebuild and verify rates. This rate is not persistent following a
reboot for 7000- and 8000-series controllers.
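For instance, enabling the rebuild schedule might look like the following. (This example is illustrative; the confirmation message shown is an approximation and may differ from the actual CLI output on your system.)
Example:
//localhost> /c1 set rebuild=enable
Setting Scheduled Rebuilds on /c1 to [enable] ... Done.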
/cx set rebuildmode=<adaptive|lowlatency>
This command is only supported on 9750, 9690SA, and 9650SE controllers.
This command sets the rebuild mode. The rebuild mode has two settings:
Adaptive and Low Latency.
Rebuild mode works in conjunction with the rebuild task rate (see “/cx set
rebuildrate=<1...5>” on page 70).
The Adaptive setting is the default rebuild mode. It allows the firmware to
adjust the interaction of rebuild tasks with host I/Os to maximize the speed of
both host I/O and rebuild tasks. When a rebuild background task is active, if
the task rate is set to a fast rebuild rate (that is, a low I/O rate), system latency
increases and performance may be negatively affected, especially for
video-server and audio applications. The Low Latency setting minimizes the
latency (delay) in reading data from the RAID unit by slowing down the
rebuild task, which allows host reads to complete, thus improving
performance.
For a more complete discussion of background task modes, see “Working
with the Background Task Mode” in the 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.
Important: Setting rebuildmode to 'low latency' and rebuildrate to '5' is not
recommended when I/O is active, because in that case, the rebuild as a
background task may never complete. Thus, this setting should be used with care.
Example:
//localhost> /c1 set rebuildmode=lowlatency
Setting Rebuild background task mode of /c1 to [lowlatency] ... Done.
Related commands:
/cx show rebuildmode
/cx set rebuildrate=<1...5>
/cx show rebuildrate
/cx set rebuildrate=<1...5>
This command sets the rebuild task rate of the specified controller. The
rebuild task rate sets the rebuild execution priority relative to I/O operations.
This task rate is of the range [1..5], where 5 denotes the setting of fastest
background task and slowest I/O, as follows:
5 = fastest rebuild; slowest I/O
4 = faster rebuild; slower I/O
3 = balanced between rebuild and I/O
2 = faster I/O; slower rebuild
1 = fastest I/O; slowest rebuild
This command applies to 7000, 8000, and 9000 series controllers.
Example:
//localhost> /c1 set rebuildrate=2
Setting Rebuild background task rate on /c1 to [2] (faster I/O) ... Done.
Related Commands
/cx show rebuildmode
/cx set rebuildmode=<adaptive|lowlatency>
/cx show rebuildmode
/cx set verify=enable|disable|1..5
Enabling and disabling verify schedules is only for 9000 series controllers.
This command enables or disables the advanced Verify Task Schedule defined
for controller /cx and (when enabled) sets the priority of verification versus
I/O operations. When enabled, verify tasks will only be run during the time
slots identified in the verify task schedule. If a verify is taking place when the
schedule is enabled, it will be paused until the next scheduled time.
The priority of verify versus I/O operations is specified with 1..5, where 5
assigns the most resources to verification and 1 the least. Setting this value to
5 implies fastest verify, and 1 implies fastest I/O.
5 = fastest verify; slowest I/O
4 = faster verify; slower I/O
3 = balanced between verify and I/O
2 = faster I/O; slower verify
1 = fastest I/O; slowest verify
For 9550SX(U) and earlier controllers, and for SE/SA controllers running
firmware 9.5 and 9.5.0.1, disabling verify with this command turns off the
verify schedule. In this case, if a verify is manually started, it should begin
right away.
For 9750 controllers, and 9690SA and 9650SE controllers running firmware
9.5.1 or later, enabling verify with this command is equivalent to using the
/cx set verify=advanced command, while disabling verify with this command is
equivalent to using the /cx set verify=basic command without specifying a
preferred start day and time (the default of Friday at midnight is used). For
more information, see “/cx set verify=advanced|basic|1..5” on page 72.
Note: If you want verifications to occur automatically, when enabling the verify
schedule you must also remember to enable the autoverify setting for the units to be
verified. For more information see “/cx/ux set autoverify=on|off” on page 90.
You can view the verify schedule to be enabled or disabled with the command
“/cx show verify” on page 53. You can add verify task slots to the schedule
using the command “/cx add verify=ddd:hh:duration” on page 65. You can
remove verify task slots from the schedule with the command
“/cx del verify=slot_id” on page 68.
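For instance, enabling the verify schedule on controller 1 might look like the following. (Illustrative example; the confirmation message is approximate and may vary by firmware version.)
Example:
//localhost> /c1 set verify=enable
Setting Verify Schedule on /c1 to [enable] ... Done.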
/cx set verify=advanced|basic|1..5
This command only applies to 9750 controllers, and 9690SA and 9650SE
RAID controllers running 9.5.1 or later.
This command is effectively the same as the /cx set verify=enable|disable|1..5 command described above. Setting
verify to advanced enables the advanced Verify Task Schedule, which can
include a series of up to 7 days and times. Setting verify to basic creates a
weekly schedule with one specific day and time, and disables the series of
scheduling slots associated with the advanced Verify Task Schedule. For more
about the basic schedule, see “/cx set verify=basic [pref=ddd:hh]”, below.
The priority of verify versus I/O operations is specified with 1..5, where 5 is
more resources and 1 the least. Setting this value to 1 implies fastest I/O, and
5 implies fastest verify.
For information on the verify schedule, see “/cx add verify=ddd:hh:duration”
on page 65.
/cx set verify=basic [pref=ddd:hh]
This command only applies to 9750 controllers, and 9690SA and 9650SE
RAID controllers running 9.5.1 or later.
Using the verify=basic option allows you to set a basic verify schedule that
starts each week on the same day and at the same time. With verify=basic, you can
specify your preferred day and time, or you can omit the day and time and use
the default of Friday at midnight.
When you set verify=basic, the series of scheduled days and times associated
with the advanced Verify Task Schedule is ignored.
Verify=basic is intended to be used with the auto-verify policy for RAID
units, to ensure that a verification of the unit occurs on a regular basis.
Note: When verify=basic, if you start a manual verify, it will start immediately.
When verify=advanced, if you start a manual verify, it will follow the advanced
Verify Task Schedule. For more information, see “/cx/ux start verify” on page 88.
Example:
//localhost> /c3 set verify=basic pref=Fri:23
Setting /c3 basic verify preferred start time to [Fri, 11:00PM]
... Done.
/cx set verifymode=<adaptive|lowlatency>
This command is only supported on 9750, 9690SA, and 9650SE controllers.
This command sets the verify mode. The verify mode has two settings:
Adaptive and Low Latency.
Verify mode works in conjunction with the verify task rate (see “/cx set
verifyrate=<1..5>” on page 74).
The Adaptive setting is the default verify mode. It allows the firmware to
adjust the interaction of verify tasks with host I/Os to maximize the speed of
both host I/O and verify tasks.
When a verify background task is active, if the task rate is set to a fast verify
rate (that is, a low I/O rate), system latency increases and performance may be
negatively affected, especially for video-server and audio applications. The
Low Latency setting minimizes the latency (delay) in reading data from the
RAID unit by slowing down the verify task, which allows host Reads to
complete, thus improving performance.
For a more complete discussion of background task modes, see “Working
with the Background Task Mode” in the 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.
Important: Setting verifymode to 'low latency' and verifyrate to '5' is not
recommended when I/O is active, because in that case, the verify as a background
task may never complete. Thus, this setting should be used with care.
Example:
//localhost> /c1 set verifymode=lowlatency
Setting Verify background task mode of /c1 to [lowlatency] ... Done.
Related commands:
/cx show verifymode
/cx set verifyrate=<1..5>
/cx show verifyrate
/cx set verifyrate=<1..5>
This command sets the verify task rate of the specified controller. The verify
task rate sets the verify execution priority relative to I/O operations.
This task rate is of the range [1..5], where 5 denotes the setting of fastest
background task and slowest I/O, as follows:
5 = fastest verify; slowest I/O
4 = faster verify; slower I/O
3 = balanced between verify and I/O
2 = faster I/O; slower verify
1 = fastest I/O; slowest verify
This command applies to 7000, 8000, and 9000 series controllers.
Example:
//localhost> /c1 set verifyrate=2
Setting Verify background task rate on /c1 to [2] (faster I/O) ... Done.
Related commands:
/cx show verifyrate
/cx set verifymode=<adaptive|lowlatency>
/cx show verifymode
/cx set selftest=enable|disable
[task=UDMA|SMART]
This command enables or disables all selftest tasks or a particular
selftest_task (UDMA or SMART).
The selftest schedule is always enabled.
For 3ware RAID controllers older than the 9690SA, two self-tests can be set:
one to check whether UDMA Mode can be upgraded (applies to PATA drives
only), and another to check whether SMART thresholds have been exceeded.
For 9750 and 9690SA controllers, you can only check the SMART thresholds
for drives. 7000/8000 series controllers have the same internal schedule, but it
is not viewable or changeable.
Example:
//localhost> /c0 set selftest=enable task=UDMA
enables the UDMA selftest on controller c0.
/cx set ondegrade=cacheoff|follow
This command is only for 9500S controllers.
This command allows you to set a controller-based write cache policy. If the
policy is set to cacheoff and a unit degrades, the firmware will disable the
write-cache on the degraded unit, regardless of what the unit-based write
cache policy is. If the policy is set to follow and a unit degrades, firmware will
follow whatever cache policy has been set for that unit. (For details about the
unit-based policy, see “/cx/ux set cache=on|off [quiet]” on page 92.)
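For instance, to have the firmware disable the write cache whenever a unit degrades (illustrative example; the confirmation message is approximate and may differ):
Example:
//localhost> /c0 set ondegrade=cacheoff
Setting Cache on Degrade Policy on /c0 to [cacheoff] ... Done.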
/cx set spinup=nn
This command is only for 9750 and 9000 series controllers.
This command allows you to set a controller-based Disk Spinup Policy that
specifies how many drives can spin up at one time. The value must be a
positive integer between 1 and the number of disks/ports supported on the
controller (4, 8, or 12). The default is 1.
This policy is used to stagger spinups of disks at boot time in order to spread
the power consumption on the power supply. For example, given a spinup
policy of 2, the controller will spin up two disks at a time, pause, and then spin
up another 2 disks. The amount of time to pause can be specified with the
Spinup Stagger Time Policy (/cx set stagger=nn).
Not all drives support staggered spinup. If you enable staggered spinup and
have drives that do not support it, the setting will be ignored.
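For example, to allow two drives to spin up at a time (illustrative example; the confirmation message shown is approximate):
Example:
//localhost> /c0 set spinup=2
Setting Disk Spinup Policy on /c0 to [2] ... Done.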
/cx set stagger=nn
This command is only for 9750 and 9000 series controllers.
This command allows you to set a controller-based Disk Spinup Stagger Time
Policy that specifies the delay between spin-ups. The value must be a positive
integer between 0 and 60 seconds. This policy, in conjunction with the Disk
Spinup Policy, specifies how the controller should spin up disks at boot time.
The default is 6 seconds.
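For example, to pause four seconds between each group of spinups (illustrative example; the confirmation message shown is approximate):
Example:
//localhost> /c0 set stagger=4
Setting Disk Spinup Stagger Time Policy on /c0 to [4] second(s) ... Done.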
/cx set autocarve=on|off
This feature only applies to 9750 model controllers and 9000 series SX/SE/
SA model controllers.
This command allows you to set the auto-carve policy to on or off. By default,
autocarve is off.
When the auto-carve policy is set to on, any unit larger than the carvesize is
created or migrated into one or more carvesize volumes and a remaining
volume. Each volume can then be treated as an individual disk with its own
file system. The default carvesize is 2 TB.
This feature is useful for operating systems limited to 2 TB file systems.
For example, using the 2 TB default carvesize, a 3 TB unit will be configured
into one 2 TB volume and one 1 TB volume. A 5 TB unit will be configured
into two 2 TB volumes and one 1 TB volume.
When auto-carve policy is set to off, all new units are created as a single large
volume. If the operating system can only recognize up to 2 TBs, space over 2
TB will not be available.
Example:
//localhost> /c0 set autocarve=on
Setting Auto-Carving Policy on /c0 to on ... Done.
/cx set carvesize=[1024..32768]
This feature only applies to 9750 model controllers and 9000 series SX/SE/
SA model controllers.
This command allows you to set the carve size in GB. This feature works
together with autocarve. See “/cx set autocarve=on|off” above for details.
Note that some operating systems are limited to 2 TB file systems. (For details, see
“Support for Over 2 Terabytes” on page 10.)
Example:
//localhost> /c0 set carvesize=2000
Setting Auto-Carving Size on /c0 to 2000 GB ... Done.
/cx set autorebuild=on|off
This feature only applies to 9750 model controllers and 9000 series SX/SE/
SA model controllers.
This command turns the Auto-Rebuild policy on or off. By default,
autorebuild is on.
If the policy is on, the firmware will select drives to use for rebuilding a
degraded unit using the following priority order.
Note: Failed drives can be drives that have mechanically failed, or they can be
drives that have been disconnected from the controller long enough to cause a
drive timeout error and for the controller to classify them as failed.
Enabling Auto-Rebuild allows you to add a drive to the controller and have it
be available for a rebuild as soon as you tell the controller to rescan, without
having to specify it as a spare. It also means that if you accidentally
disconnect a drive (causing the controller to see it as a failed drive) and then
reconnect it, the controller will automatically try to use it again.
If the policy is off, spares are the only candidates for rebuild operations.
Example:
//localhost> /c0 set autorebuild=on
Setting Auto-Rebuild Policy on /c0 to on ... Done.
/cx set autodetect=on|off disk=<p:-p>|all
This command is only for 9750 and 9000 series controllers.
This command is associated with the staggered spin-up feature when
hot-swapping drives. When staggered spin-up is enabled (see the commands
/cx set spinup and /cx set stagger), during a reset or power-on the controller
spins up all detected drives with a delay between each spinup, spreading the
power consumption on the power supply. When a drive is hot-swapped (as
opposed to being powered on or reset), the default behavior of the system is to
automatically detect and immediately spin up the drives. This command can
change the default behavior and set the controller to do a staggered spinup for
hot-swapped drives.
Note: The autodetect setting cannot be shown in CLI or displayed in 3DM 2 or
3BM. This feature may be added in a future release.
autodetect=on|off enables or disables automatic detection of drives on the
controller’s ports for staggered spin-up.
disk=<p:-p>|all specifies one or many disks (that is, drives, ports, or vports).
If a port is empty (no drive is inserted), the echo message of the command
refers to a port. If there is already a drive inserted, the message refers to a
disk. The example below shows that autodetect has been set to off to initiate
staggered spin-up during hot-swapping, where port 3 was empty and ports 5
and 6 had drives inserted.
Example:
//localhost> /c0 set autodetect=off disk=3:5-6
Setting Auto-Detect on /c0 to [off] for port [3] and for disk
[5,6]... Done
If disk=all, then all of the drives or ports on that controller are specified.
For a 9750 or 9690SA controller, this would spin up all directly attached SAS
and SATA drives, but not any drives attached to an expander.
Example:
//localhost> /c0 set autodetect=off disk=all
Setting Auto-Detect on /c0 to [off] for all disks/ports... Done.
Usage Scenario:
If you are hot-plugging a large number of drives at the same time and are
concerned that you might overload the power supply, you might use this
command as follows:
1. Issue the command (set autodetect=off) to disable automatic detection of the ports for staggered spin-up.
2. If the ports are not empty, pull the drives out of the specified ports.
3. Insert (or replace) the drives at the ports specified.
4. Issue the command (set autodetect=on) to enable auto-detection of the ports with the newly inserted drives.
The preceding steps would spin up the newly inserted drives in a staggered
manner. Please note that the command takes longer for ports that do not have
drives inserted, since the controller allows time for the empty ports to
respond.
/cx start mediascan
This command applies only to 7000/8000 controllers. For 9000 series
controllers, use the verify command.
This command provides media scrubbing for validating the functionality of a
disk, including bad block detection, remapping, and so forth. The command
starts a media scan operation on the specified controller /cx.
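For example (illustrative; the exact confirmation message may differ):
Example:
//localhost> /c0 start mediascan
Sending start mediascan message to /c0 ... Done.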
/cx stop mediascan
This command applies only to 7000/8000 controllers.
This commands stops a media scan operation on the specified controller /cx.
(Media scans are started using /cx start mediascan.)
Unit Object Commands
Unit Object commands provide information and perform actions related to a
specific unit, such as /c0/u1 (unit 1 on controller 0). For example, you use
logical disk object commands for such tasks as seeing the rebuild, verify, or
initialize status of a unit; starting, stopping, and resuming rebuilds and
verifies; and setting policies for the unit.
Syntax
Note: Features indicated as “9690SA only,” “9000 series,” or “9000 series SE/SA
only” also apply to 9750 controllers.
/cx/ux show
/cx/ux show attribute [attribute ...] where attributes are:
autoverify (9000 series)| initializestatus|cache|
wrcache|rdcache|name(9000 series) |qpolicy(9000 series
SX/SE/SA only)|rebuildstatus |serial(9000 series)|status
|verifystatus|storsave(9000 series SX/SE/SA only)
|rapidrecovery (9000 series SE/SA)
|volumes(9000 series)|ignoreECC (9000 series)|
identify (9000 series SX/SE/SA only)
/cx/ux show all
/cx/ux start rebuild disk=<p:-p...> [ignoreECC]
/cx/ux start verify
/cx/ux pause rebuild (7000/8000 only)
/cx/ux resume rebuild (7000/8000 only)
/cx/ux stop verify
/cx/ux flush
/cx/ux del [noscan] [quiet]
/cx/ux set autoverify=on|off
/cx/ux set cache=on|off [quiet]
/cx/ux set wrcache=on|off [quiet]
/cx/ux set rdcache=basic|intelligent|off
/cx/ux set identify=on|off (9000 series SX/SE/SA only)
/cx/ux set ignoreECC=on|off
/cx/ux set qpolicy=on|off (9000 series SX/SE/SA only)
/cx/ux set name=string (9000 series)
/cx/ux set rapidrecovery=all|rebuild|disable [quiet](9000
series SE/SA only)
/cx/ux set storsave=protect|balance|perform [quiet](9000
series SX/SE/SA only)
/cx/ux migrate type=RaidType [disk=p:-p]
[group=3|4|5|6|7|8|9|10|11|12|13|14|15|16]
[stripe=size] [noscan] [nocache] [autoverify]
(9000 series) RaidType = {raid0, raid1, raid5,
raid6(9650SE and later only), raid10, raid50, single}
/cx/ux remove [noscan] [quiet]
/cx/ux show
This command shows summary information about the specified unit /cx/ux. If
the unit consists of sub-units, as in the case of RAID-10 and RAID-50, then
each sub-unit is further presented. If the Auto-Carving policy was on at the
time the unit was created and the unit is over the carve size, multiple volumes
were created and are displayed at the end of the summary information.
Similarly, if the unit was created using the 3ware BIOS utility 3BM and a size
was entered in the Boot Volume Size field, multiple volumes were created and
will be displayed. Note that a volume created using the Boot Volume Size
feature does not have to be used as a boot volume.
Note: In the output of unit information tables that follows, the column “Port”
may be “VPort” depending on the applicable controller.
Example for 9750 and 9690SA controllers:
//localhost> /c0/u1 show
Unit UnitType Status %RCmpl %V/I/M VPort Stripe Size(GB)
-----------------------------------------------------------------------
u1 RAID-0 OK - - - 64K 3576.06
u1-0 DISK OK - - p0 - 298.01
u1-1 DISK OK - - p1 - 298.01
u1-2 DISK OK - - p2 - 298.01
u1-3 DISK OK - - p3 - 298.01
u1-4 DISK OK - - p4 - 298.01
u1-5 DISK OK - - p5 - 298.01
u1-6 DISK OK - - p6 - 298.01
u1-7 DISK OK - - p7 - 298.01
u1-8 DISK OK - - p8 - 298.01
u1-9 DISK OK - - p9 - 298.01
u1-10 DISK OK - - p10 - 298.01
u1-11 DISK OK - - p11 - 298.01
u1/v0 Volume - - - - - 2047.00
u1/v1 Volume - - - - - 1529.06
Example for 9650SE and earlier controllers:
//localhost> /c0/u0 show
Unit UnitType Status %RCmpl %V/I/M Port Stripe Size(GB)
-----------------------------------------------------------------------
u0 RAID-50 OK - - - 64K 596.05
u0-0 RAID-5 OK - - - 64K -
u0-0-0 DISK OK - - p0 - 149.10
u0-0-1 DISK OK - - p2 - 149.10
u0-0-2 DISK OK - - p3 - 149.10
u0-1 RAID-5 OK - - - 64K -
u0-1-0 DISK OK - - p4 - 149.10
u0-1-1 DISK OK - - p5 - 149.10
u0-1-2 DISK OK - - p6 - 149.10
//localhost> /c0/u1 show
Unit UnitType Status %RCmpl %V/I/M Port Stripe Size(GB)
-----------------------------------------------------------------------
u1 RAID-0 OK - - - 64K 3576.06
u1-0 DISK OK - - p0 - 298.01
u1-1 DISK OK - - p1 - 298.01
u1-2 DISK OK - - p2 - 298.01
u1-3 DISK OK - - p3 - 298.01
u1-4 DISK OK - - p4 - 298.01
u1-5 DISK OK - - p5 - 298.01
u1-6 DISK OK - - p6 - 298.01
u1-7 DISK OK - - p7 - 298.01
u1-8 DISK OK - - p8 - 298.01
u1-9 DISK OK - - p9 - 298.01
u1-10 DISK OK - - p10 - 298.01
u1-11 DISK OK - - p11 - 298.01
u1/v0 Volume - - - - - 2047.00
u1/v1 Volume - - - - - 1529.06
One application of the /cx/ux show command is to see which sub-unit of a
degraded unit has caused the unit to degrade, and which disk within that
sub-unit is the source of degradation. Another application is to see the source
and destination units during a migration.
The unit information shows the percentage completion of the processes
associated with the unit with %RCmpl (percent Rebuild completion) and
%V/I/M (percent Verifying, Initializing, or Migrating).
Unlike other unit types, RAID-6 has two parity drives and can tolerate two
drive failures within a unit. As a result, an added notation is used to describe
%RCmpl and %V/I/M: (A) and (P). (A) denotes that the percentage completion
shown is for the currently active process, and (P) denotes that the percentage
completion shown is for a currently paused process.
Example:
/localhost> /c0 show unitstatus
Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
--------------------------------------------------------------------------
u0 RAID-6 REBUILD-VERIFY 50%(A) 70%(P) 256k 298.22 RiW OFF
Here, the RAID-6 unit u0 is in the REBUILD-VERIFY state. The rebuild is
50% complete and is the currently active process; the verify, initialize, or
migrate process is at 70% and is currently paused.
For the unit display:
//localhost> /c0/u0 show
Unit UnitType Status %RCmpl %V/I/M Port Stripe Size(GB)
-----------------------------------------------------------------------
u0 RAID-6 REBUILD-VERIFY 50%(A) 70%(P) - 64K 2683.80
u0-0 DISK OK - - p0 - 298.20
u0-1 DISK OK - - p1 - 298.20
u0-2 DISK OK - - p2 - 298.20
u0-3 DISK REBUILDING 80% - p3 - 298.20
u0-4 DISK OK - - p4 - 298.20
u0-5 DISK OK - - p5 - 298.20
u0-6 DISK OK - - p6 - 298.20
u0-7 DISK OK - - p7 - 298.20
u0-8 DISK REBUILD-PAUSE 20% - p8 - 298.20
u0-9 DISK OK - - p9 - 298.20
u0-10 DISK OK - - p10 - 298.20
u0-11 DISK OK - - p11 - 298.20
In the above example, the RAID-6 unit u0 has two parity drives. Currently, it
has two rebuilding drives; one is in the active rebuilding state and another is
in the paused rebuild state. The unit is also in the paused verify state. Like the
output of the /cx show unitstatus command, the top-level unit status and
percentage show the composite unit status and composite rebuild percentage.
/cx/ux show attribute [attribute ...]
This command shows the current setting of the specified attributes. One or
many attributes can be requested. Specifying an invalid attribute will
terminate the loop. Possible attributes are: initializestatus, name (9000 series),
autoverify (9000 series), cache, ignoreECC (9000 series), identify (9000
series SX/SE/SA only), qpolicy (9000 series SX/SE/SA only), rapidrecovery
(9000 series SE/SA only), rebuildstatus, serial (9000 series), status, storsave
(9000 series SX/SE/SA only), verifystatus, and volumes (9000 series).
/cx/ux show autoverify
This feature only applies to 9000 series controllers.
This command shows the current autoverify setting of the specified unit.
Example:
//localhost> /c0/u0 show autoverify
/c0/u0 Auto Verify Policy = off
/cx/ux show cache
This command shows the current write cache state of the specified unit. (It
provides the same information as the command /cx/ux show wrcache.)
Example:
//localhost> /c0/u0 show cache
/c0/u0 Write Cache State = on
/cx/ux show wrcache
This command shows the current write cache state of the specified unit. (It
provides the same information as the command /cx/ux show cache.)
Example:
//localhost> /c0/u0 show wrcache
/c0/u0 Write Cache State = on
/cx/ux show rdcache
This command shows the current read cache state of the specified unit.
The state of the read cache can be Basic, Intelligent, or Off. This feature is
supported on 9650SE and later controllers, and with all RAID unit types. For
more information, see “/cx/ux set rdcache=basic|intelligent|off” on page 92.
Example:
//localhost> /c0/u0 show rdcache
/c0/u0 Read Cache = Intelligent
/cx/ux show identify
This feature only applies to 9750 model controllers and 9000 series SX/SE/
SA model controllers. This feature requires a supported enclosure.
This command is related to the /cx/ux set identify=on|off command. It shows
the identify status of the specified unit (either on or off).
Example:
//localhost> /c0/u0 show identify
/c0/u0 Identify status = on
/cx/ux show ignoreECC
This feature only applies to 9000 series controllers.
This command shows the current setting of the ignoreECC policy for the
specified unit.
Example:
//localhost> /c0/u0 show ignoreECC
/c0/u0 Ignore ECC policy = off
/cx/ux show initializestatus
This command reports the initializestatus (if any) of the specified unit.
Example:
//localhost> /c0/u5 show initializestatus
/c0/u5 is not initializing, its current state is OK
/cx/ux show name
This feature only applies to 9000 series controllers.
This command reports the name (if any) of the specified unit.
Example:
//localhost> /c0/u5 show name
/c0/u5 name = Joe
/cx/ux show qpolicy
This feature only applies to 9750 model controllers and 9000 series SX/SE/
SA model controllers.
This command reports the queue policy of the firmware for SATA drives.
Qpolicy is not applicable to SAS drives. If the queue policy is on, the
firmware utilizes the drive’s queueing policy. If any drives do not support a
queueing policy, this policy will have no effect on those drives.
For a spare unit, drive queuing is not meaningful or applicable. When a spare
becomes part of a true unit during a rebuild, it will adopt the queue policy of
the “new” parent unit. Thus, this command does not show the queue policy for
the spare unit type.
Note that currently only NCQ will be enabled, not tag-queueing.
Note that queuing information is not available for SAS drives.
Example:
//localhost> /c0/u5 show qpolicy
/c0/u5 Command Queuing Policy = on
/cx/ux show rapidrecovery
This command only applies to 9750 model controllers and 9000 series
controllers, models SE and SA, and only for redundant units. Firmware 9.5.1
or later is required for 9000 series SE/SA models. Firmware 10.0 or later is
required for 9750 model controllers.
This command shows the Rapid RAID Recovery policy for the specified unit.
This policy can be all, rebuild, or disable.
For information about the policy settings, please see the description about the
rapidrecovery attribute for the /cx add command on page 56.
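For example (illustrative; the output line mirrors the “Rapid RAID Recovery setting” field shown by /cx/ux show all and may differ slightly in practice):
Example:
//localhost> /c0/u0 show rapidrecovery
/c0/u0 Rapid RAID Recovery setting = all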
/cx/ux show rebuildstatus
This command reports the rebuildstatus (if any) of the specified unit.
Example:
//localhost> /c0/u5 show rebuildstatus
/c0/u5 is not rebuilding, its current state is OK
If the unit is in the process of migrating, the command will return the
following:
//localhost> /c0/u5 show rebuildstatus
/c0/u5 is not rebuilding, its current state is MIGRATING
/cx/ux show serial
This feature only applies to 9000 series controllers.
This command reports the unique serial number of the specified unit.
Example:
//localhost> /c0/u5 show serial
/c0/u5 Serial Number = 12345678901234567890
/cx/ux show status
This command reports the status of the specified unit.
Possible statuses include: OK, VERIFYING, VERIFY-PAUSED,
INITIALIZING, INIT-PAUSED, REBUILDING, REBUILD-PAUSED,
DEGRADED, MIGRATING, MIGRATE-PAUSED, RECOVERY,
INOPERABLE, and UNKNOWN. (Definitions of the unit statuses are
available in the 3ware SATA+SAS RAID Controller Card Software User Guide, Version 10.0.)
Example:
//localhost> /c0/u0 show status
/c0/u0 status = OK
/cx/ux show storsave
This feature only applies to 9750 model controllers and 9000 series SX/SE/
SA model controllers.
This command reports the storsave policy on the unit.
For more information, see “/cx/ux set storsave=protect|balance|perform
[quiet]” on page 95.
Example:
//localhost> /c0/u5 show storsave
/c0/u5 Command Storsave Policy = protect
/cx/ux show verifystatus
This command reports the verifystatus (if any) of the specified unit.
Example:
//localhost> /c0/u5 show verifystatus
/c0/u5 is not verifying, its current state is OK
/cx/ux show volumes
This feature only applies to 9000 series controllers.
This command reports the number of volumes in the specified unit. The
number of volumes will normally be “1” unless auto-carving is enabled
and/or a boot LUN was specified.
Example:
//localhost> /c0/u0 show volumes
/c0/u0 volume(s) = 1
/cx/ux show all
This command shows the current setting of all above attributes.
If the auto-carve policy was on at the time the unit was created and the unit is
over the carve size, multiple volumes were created and are displayed at the
end of the summary information. Similarly, if the unit was created using the
3ware BIOS utility 3BM and a size was entered in the Boot Volume Size field,
multiple volumes were created and will be displayed. Note that a volume
created using the Boot Volume Size feature does not have to be used as a boot
volume.
Example:
//localhost> /c0/u1 show all
/c0/u1 status = OK
/c0/u1 is not rebuilding, its current state is OK
/c0/u1 is not verifying, its current state is OK
/c0/u1 is not initializing, its current state is OK
/c0/u1 Write Cache = on
/c0/u1 Read Cache = Intelligent
/c0/u1 volume(s) = 2
/c0/u1 name = myarray
/c0/u1 serial number = C6CPR7JMF98DA8001DF0
/c0/u1 Ignore ECC policy = on
/c0/u1 Auto Verify Policy = on
/c0/u1 Storsave policy = protection
/c0/u1 Command Queuing Policy = on
/c0/u1 Rapid RAID Recovery setting = all
Unit UnitType Status %RCmpl %V/I/M VPort Stripe Size(GB)
-----------------------------------------------------------------------
u1 RAID-0 OK - - - 64K 3576.06
u1-0 DISK OK - - p0 - 298.01
u1-1 DISK OK - - p1 - 298.01
u1-2 DISK OK - - p2 - 298.01
u1-3 DISK OK - - p3 - 298.01
u1-4 DISK OK - - p4 - 298.01
u1-5 DISK OK - - p5 - 298.01
u1-6 DISK OK - - p6 - 298.01
u1-7 DISK OK - - p7 - 298.01
u1-8 DISK OK - - p8 - 298.01
u1-9 DISK OK - - p9 - 298.01
u1-10 DISK OK - - p10 - 298.01
u1-11 DISK OK - - p11 - 298.01
u1/v0 Volume - - - - - 2047.00
u1/v1 Volume - - - - - 1529.06
/cx/ux remove [noscan] [quiet]
This command allows you to remove (previously called “export”) a unit.
Removing a unit instructs the firmware to remove the specified unit from its
pool of managed units, but retains the DCB (Disk Configuration Block)
metadata. A removed unit can be moved to a different controller.
noscan is used to not inform the operating system of this change. The default
is to inform the operating system.
quiet is used for non-interactive mode. No confirmation is given and the
command is executed immediately. This is useful for scripting purposes.
Example of interactive mode:
//localhost> /c0/u0 remove
Removing /c0/u0 will take the unit offline.
Do you want to continue?
Y|N [N]:
Note: After the unit is removed through the CLI, the unit can be physically
removed. Hot swap bays are required to do this while the system is online.
Otherwise you must power down the system to prevent system hangs and damage.
/cx/ux del [noscan] [quiet]
This command allows you to delete a unit. Deleting a unit not only removes
the specified unit from the controller's list of managed units, but also destroys
the DCB (Disk Configuration Block) metadata. After deleting a unit, ports (or
disks) associated with the unit will be part of the free pool of managed disks.
Warning: This is a destructive command and should be used with care. All data on
the specified unit will be lost after executing this command.
noscan is used to not inform the operating system of this change. The default
is to inform the operating system.
quiet is used for non-interactive mode. No confirmation is given and the
command is executed immediately. This is useful for scripting purposes.
Example of interactive mode:
//localhost> /c0/u0 del
Deleting /c0/u0 will cause the data on the unit to be
permanently lost.
Do you want to continue ? Y|N [N]:
/cx/ux start rebuild disk=<p:-p...> [ignoreECC]
This command allows you to rebuild a degraded unit using the specified
disk=p. Rebuild only applies to redundant arrays such as RAID 1, RAID 5,
RAID 6, RAID 10, and RAID 50.
During a rebuild, bad sectors on the source disk can cause the rebuild to fail.
RAID 6 units are less susceptible to such failures because they have a second
level of redundancy. You can allow the operation to continue by using the
ignoreECC option.
The rebuild process is a background task and will change the state of a unit to
REBUILDING. Various show commands also show the percent completion as
the rebuild progresses.
Note that the disk used to rebuild a unit (specified with disk=p) must be a
SPARE or an unconfigured disk. You must first remove the degraded drive(s)
before starting the rebuild. Refer to the command “/cx/px remove [noscan]
[quiet]” on page 113 for details. Also refer to the command “/cx rescan
[noscan]” on page 62 to add new drives or to retry the original drive.
If you are rebuilding a RAID 50, RAID 6, or RAID 10 unit, multiple drives
can be specified if more than one sub-array is degraded.
When you issue this command, the specified rebuild will begin if schedules
are disabled; otherwise it will pause until the next scheduled rebuild. A file
system check is recommended following rebuild when using the ignoreECC
option.
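For example, to rebuild a degraded unit using the drive on port 4 (illustrative example; the confirmation message shown is an approximation and may differ from actual CLI output):
Example:
//localhost> /c0/u0 start rebuild disk=4
Sending rebuild start request to /c0/u0 on 1 disk(s) [4] ... Done.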
/cx/ux start verify
Also referred to as a ‘manual verify’, this command starts a background
verification process on the specified unit /cx/ux. The following table shows
the relationship between the controller model and logical unit type.
N/A (Not Applicable) refers to cases where the RAID type is not supported on
that controller model.
Table 9: Supported RAID (Logical Unit) Types for Verification

Model               R0   R1   R5   R6   R10  R50  Single  Spare
7K/8K               No   Yes  Yes  N/A  Yes  N/A  N/A     No
9000 (a)            Yes  Yes  Yes  N/A  Yes  Yes  Yes     Yes
9690SA and 9650SE   Yes  Yes  Yes  Yes  Yes  Yes  Yes     Yes
9750                Yes  Yes  Yes  Yes  Yes  Yes  Yes     Yes

a. Models 9500S, 9550SX, and 9590SE
For 9550SX and earlier controllers, and for 9650SE and 9690SA controllers
running pre-9.5.1 firmware, when you issue this command the specified verify
will begin if the verify schedule is disabled; otherwise it will pause until the
next scheduled verify. If, after starting a verify, you enable the Verify Task
Schedule, this on-demand task will be paused until the next scheduled time slot.
For 9750 controllers, and for 9650SE and 9690SA controllers running
firmware 9.5.1 or later, if verify=basic, the verify will begin immediately. If
verify=advanced, the verify will pause until the next scheduled verify. For
more information, see “/cx set verify=advanced|basic|1..5” on page 72.
Verify will pause if a rebuild, migration, or initialization is currently in
progress.
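For example (illustrative; the confirmation message is approximate and may differ on your system):
Example:
//localhost> /c0/u0 start verify
Sending verify start request to /c0/u0 ... Done.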
/cx/ux pause rebuild
This command allows you to pause the rebuild operation on the specified unit
/cx/ux.
This feature is only supported on the 7000/8000 series controllers. 9000 series
controllers have an on-board scheduler where rebuild operations can be
scheduled to take place at specified start and stop times. The /cx/ux pause
rebuild command is provided to enable 7000/8000 users to achieve similar
functionality with Linux-provided schedulers such as cron(8) or at(1),
or user-supplied programs.
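For example, a scheduled job on a 7000/8000-series system might pause a rebuild during business hours (illustrative example; the confirmation message shown is approximate):
Example:
//localhost> /c0/u0 pause rebuild
Sending pause rebuild request to /c0/u0 ... Done.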
/cx/ux resume rebuild
This command allows you to resume the rebuild operation on the specified
unit /cx/ux.
This feature is intended only for 7000/8000 series controllers. 9000 series
controllers have an on-board scheduler where rebuild operations can be
scheduled to take place at specified start and stop times. The /cx/ux resume rebuild function is provided to enable 7000/8000 users to achieve similar
functionality with use of Linux-provided schedulers such as cron(8) or at(1),
or user supplied programs.
/cx/ux stop verify
This command stops a background verification process on the specified unit
/cx/ux. Table 9 on page 88 shows the supported matrix as a function of the
controller model and logical unit type.
/cx/ux flush
This command allows you to flush the write cache on the specified unit /ux
associated with controller /cx. Note that this command does not apply to spare
unit types.
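For example (illustrative; the confirmation message is approximate and may differ):
Example:
//localhost> /c0/u0 flush
Sending flush command to /c0/u0 ... Done.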
/cx/ux set autoverify=on|off
This feature only applies to 9000 series controllers.
This command allows you to turn on and off the autoverify operation on a
specified unit /cx/ux.
By default, autoverify is on for 9750 controllers and for 9650SE and 9690SA
controllers running firmware 9.5.1 or later, and off for all earlier controller
models.
For 9750 controllers, and 9650SE and 9690SA controllers running firmware
9.5.1 or later, auto-verify works in conjunction with the basic verify schedule.
When autoverify is on and the basic verify schedule is used (verify=basic), a
verify will automatically run at the basic verify time (Friday at midnight, by
default). If the system is not on at that time, verification will start the next
time the system is powered on. When autoverify is on and the advanced
verify schedule is used (verify=advanced), autoverify will run during the
times specified with the advanced schedule. You can use the show verify
command to display the existing schedule windows. For more information
about using basic or advanced verify, see “/cx set verify=advanced|basic|1..5”
on page 72.
For all 9000 series controllers running pre-9.5.1 firmware, auto-verify allows
the controller to run the verify function once every 24 hours. If verify
schedule windows are set up and enabled, then the controller will only start an
automatic verify task during the scheduled time slots. If the verify takes
longer than the schedule window, the verify process will be paused and
restarted during the next verify schedule window.
Table 11: Autoverify Behavior (when enabled)

9750, and 9650SE and 9690SA with firmware 9.5.1 or later:
•Basic Verify Schedule (verify=basic): runs at the weekly day and time
•Advanced Verify Schedule (verify=advanced): follows the Advanced Verify Schedule
•Verify Schedule is Disabled (verify=disable): runs at the weekly day and time (same as Basic)
•Verify Schedule is Enabled (verify=enable): follows the Advanced Verify Schedule

9650SE and 9690SA with firmware 9.5 or 9.5.0.1:
•Basic Verify Schedule (verify=basic): N/A
•Advanced Verify Schedule (verify=advanced): N/A
•Verify Schedule is Disabled (verify=disable): runs at any time, as determined by firmware
•Verify Schedule is Enabled (verify=enable): follows the verify schedule

9550SX and earlier:
•Basic Verify Schedule (verify=basic): N/A
•Advanced Verify Schedule (verify=advanced): N/A
•Verify Schedule is Disabled (verify=disable): runs at any time, as determined by firmware
•Verify Schedule is Enabled (verify=enable): follows the verify schedule

For more about setting up verify schedules, see “Setting Up a Verify
Schedule” on page 66.
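For example, to turn autoverify on for unit 0 (illustrative example; the confirmation message shown is approximate):
Example:
//localhost> /c0/u0 set autoverify=on
Setting Auto-Verify Policy on /c0/u0 to [on] ... Done.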
/cx/ux set cache=on|off [quiet]
This command is the same as “/cx/ux set wrcache=on|off [quiet]”. Please see
below for details.
/cx/ux set wrcache=on|off [quiet]
This command allows you to turn on or off the write cache for a specified unit
/cx/ux. This feature is supported on all controllers.
By default, write cache is on.
Write cache includes the disk drive cache and controller cache.
When write cache is on, data will be stored in 3ware controller cache and
drive cache before the data is committed to disk. This allows the system to
process multiple write commands at the same time, thus improving
performance. However when data is stored in cache, it could be lost if a power
failure occurs. With a Battery Backup Unit (BBU) installed, the data stored on
the 3ware controller can be restored.
The following table shows the supported RAID types for write caching as a
function of controller model and logical unit type. N/A (Not Applicable)
refers to cases where the given logical unit type is not supported on a
particular controller model.
Table 12: Supported RAID Types for Write Caching

Model                      R0   R1   R5   R6   R10  R50  Single  Spare
7K/8K                      Yes  Yes  Yes  N/A  Yes  N/A  N/A     No
9000 (a)                   Yes  Yes  Yes  N/A  Yes  Yes  Yes     No
9750, 9690SA, and 9650SE   Yes  Yes  Yes  Yes  Yes  Yes  Yes     No

a. Models 9500S, 9550SX, and 9590SE

The quiet attribute turns off interactive mode, so that no confirmation is
requested before proceeding.
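For example, to enable the write cache on unit 0 without being prompted for confirmation (illustrative example; the confirmation message shown is approximate):
Example:
//localhost> /c0/u0 set wrcache=on quiet
Setting Write Cache Policy on /c0/u0 to [on] ... Done.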
/cx/ux set rdcache=basic|intelligent|off
This command allows you to set the read cache to either Basic, Intelligent, or
Off on a specified unit. Setting this to Intelligent enables both Intelligent
Mode features and Basic Mode features. Setting it to Off disables both.
This command is supported on the 9750, 9690SA, and 9650SE controllers.
This feature is supported in all types of RAID units.
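For example, to set the read cache of unit 0 to Intelligent mode (illustrative example; the confirmation message shown is approximate):
Example:
//localhost> /c0/u0 set rdcache=intelligent
Setting Read Cache Policy on /c0/u0 to [Intelligent] ... Done.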