(4/8-port Infiniband connector and 12/16-port Multi-lane
connector PCI-X SATA RAID Controllers)
ARC-1210/1220/1230/1260/1280
( 4/8/12/16/24-port PCI-Express SATA RAID Controllers )
ARC-1231ML/1261ML/1280ML
(12/16/24-port PCI-Express SATA RAID Controllers)
User Manual
Version: 3.3
Issue Date: November, 2006
Microsoft WHQL Windows Hardware Compatibility
Test
ARECA is committed to submitting products to the Microsoft Windows
Hardware Quality Labs (WHQL), which is required for participation in the
Windows Logo Program. Successful passage of the WHQL tests results
in both the “Designed for Windows” logo for qualifying ARECA PCI-X and
PCI-Express SATA RAID controllers and a listing on the Microsoft Hardware Compatibility List (HCL).
Copyright and Trademarks
The information on the products in this manual is subject to change
without prior notice and does not represent a commitment on the part
of the vendor, who assumes no liability or responsibility for any errors
that may appear in this manual. All brands and trademarks are the
properties of their respective owners. This manual contains materials
protected under International Copyright Conventions. All rights
reserved. No part of this manual may be reproduced in any form or by
any means, electronic or mechanical, including photocopying, without
the written permission of the manufacturer and the author. All inquiries
should be addressed to ARECA Technology Corp.
FCC STATEMENT
This equipment has been tested and found to comply with the limits for
a Class B digital device, pursuant to part 15 of the FCC Rules. These
limits are designed to provide reasonable protection against interference in a residential installation. This equipment generates, uses, and
can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio
communications. However, there is no guarantee that interference will
not occur in a particular installation.
This section presents a brief overview of the SATA RAID series controllers: the ARC-1110/1110ML/1120/1120ML/1130/1130ML/1160/1160ML/1170 (4/8/12/16/24-port PCI-X SATA RAID controllers) and the ARC-1210/1220/1230/1231ML/1260/1261ML/1280/1280ML (4/8/12/16/24-port PCI-Express SATA RAID controllers).
1.1 Overview
The ARC-11xx and ARC-12xx Series of high-performance Serial ATA
RAID controllers support a maximum of 4, 8, 12, 16, or 24 SATA
II peripheral devices (depending on model) on a single controller.
The ARC-11xx series is designed for the PCI-X bus and the ARC-12xx
series for the PCI-Express bus. When properly configured, these SATA
controllers provide non-stop service with a high degree of fault
tolerance through the use of RAID technology, and they also provide
advanced array management features.
The 4- and 8-port SATA RAID controllers are low-profile PCI cards,
ideal for 1U and 2U rack-mount systems. These controllers utilize
the same RAID kernel that has been field-proven in Areca's existing
external RAID controllers, allowing Areca to quickly bring stable
and reliable RAID controllers to the market.
Unparalleled Performance
The SATA RAID controllers provide reliable data protection for
desktops, workstations, and servers. These cards set the standard with enhancements that include a high-performance Intel I/O
Processor, a new memory architecture, and a high performance PCI
bus interconnection. The 8/12/16/24-port controllers, with a built-in
RAID 6 engine, offer extreme-availability RAID 6 functionality. This
engine can concurrently compute two parity blocks with performance
very similar to RAID 5. The controllers support 256MB of ECC SDRAM
memory by default. The 12/16/24-port PCI-X controllers provide one
DDR333 SODIMM socket that allows upgrading the memory to 1GB, and the
12/16/24-port PCI-Express controllers provide one DDR2-533 DIMM socket
that allows upgrading the memory to 2GB.
The controllers use Marvell 4/8-channel SATA PCI-X controller
chips, which can simultaneously communicate with the I/O processor
and read or write data on multiple drives.
Unsurpassed Data Availability
As storage capacity requirements continue to increase rapidly, users
require greater levels of disk drive fault tolerance that can be
implemented without doubling the investment in disk drives. RAID 1
(mirroring) provides high fault tolerance, but half of the drive
capacity of the array is lost to mirroring, making it too costly
for most users to implement on large volume sets because it doubles
the number of drives required. Users want the protection of RAID 1
or better with an implementation cost comparable to RAID 5. RAID 6
can offer fault tolerance greater than RAID 1 or RAID 5 but consumes
only the capacity of two disk drives for distributed parity data.
The 8/12/16/24-port RAID controllers provide RAID 6 functionality
to meet these demanding requirements.
The SATA RAID controllers also provide RAID levels 0, 1, 1E, 3, 5
and JBOD configurations. Their high data availability and protection
are derived from the following capabilities: Online RAID Capacity
Expansion, Array Roaming, Online RAID Level / Stripe Size Migration,
Dynamic Volume Set Expansion, Global Online Spare, Automatic
Drive Failure Detection, Automatic Failed Drive Rebuilding, Disk
Hot-Swap, Online Background Rebuilding, and Instant Availability/
Background Initialization. During the controller firmware flash
upgrade process, it is possible that an error results in corruption of
the controller firmware, which could render the device non-functional.
However, with the Redundant Flash Image feature, the controller will
revert to the last known good version of firmware and continue
operating. This reduces the risk of system failure due to firmware
crashes.
Easy RAID Management
The SATA RAID controllers utilize built-in firmware with an embedded
terminal emulation that can be accessed via a hot key at the BIOS
boot-up screen. This pre-boot manager utility can be used to simplify
the setup and management of the RAID controller. The controller
firmware also contains a browser-based program that can be accessed
through the ArcHttp proxy server function in Windows, Linux, FreeBSD
and other environments. This web browser-based RAID management
utility allows both local and remote creation and modification of
RAID sets and volume sets, as well as monitoring of RAID status from
standard web browsers.
Cache Memory: one DDR2 DIMM socket (default 256MB, upgradeable to 2GB)
Drive Support: 12 / 16 / 24 / 24 x SATA II
Disk Connector: 3 x Mini-SAS 4i / 4 x Mini-SAS 4i / 6 x Mini-SAS 4i / 24 x SATA
1.3 RAID Concept
1.3.1 RAID Set
A RAID set is a group of disks connected to a RAID controller. A
RAID set contains one or more volume sets. The RAID set itself
does not define the RAID level (0, 1, 1E, 3, 5, 6, etc.); the RAID
level is defined within each volume set. Therefore, volume sets are
contained within RAID sets, and the RAID level is defined within the
volume set. If physical disks of different capacities are grouped
together in a RAID set, then the capacity of the smallest disk will
become the effective capacity of every disk in the RAID set.
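The capacity rule above can be sketched in a few lines of Python. This is an illustrative model, not controller firmware logic; the function name and sizes are hypothetical.

```python
# Illustrative sketch: the usable capacity of a RAID set is governed by
# its smallest member disk; larger disks contribute only that much.
def raid_set_capacity(disk_sizes_gb):
    smallest = min(disk_sizes_gb)          # effective per-disk capacity
    return smallest * len(disk_sizes_gb)   # total raw capacity of the set

# A 4-disk RAID set of 500/500/750/1000 GB drives behaves as 4 x 500 GB.
print(raid_set_capacity([500, 500, 750, 1000]))  # 2000
```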
1.3.2 Volume Set
Each volume set is seen by the host system as a single logical device
(in other words, a single large virtual hard disk). A volume set
uses a specific RAID level, which requires one or more physical
disks (depending on the RAID level used). RAID level refers to
the level of performance and data protection of a volume set. The
capacity of a volume set can consume all or a portion of the
available disk capacity in a RAID set. Multiple volume sets can exist
in a RAID set.

For the SATA RAID controller, a volume set must be created either
on an existing RAID set or on a group of available individual disks
(disks that are about to become part of a RAID set). If there are
pre-existing RAID sets with available capacity and enough disks for
the desired RAID level, then the volume set can be created in the
existing RAID set of the user's choice.
In the illustration, volume 1 can be assigned RAID level 5
operation while volume 0 might be assigned RAID level 1E
operation. Alternatively, the free space can be used to create
volume 2, which could then be set to use RAID level 5.

RAID 0 and RAID 1 volume sets can be used immediately after
creation because they do not create parity data. However,
RAID 3, 5 and 6 volume sets must be initialized to generate
parity information. With Background Initialization, the
initialization proceeds as a background task, and the volume set is
fully accessible for system reads and writes. The operating system
can instantly access the newly created arrays without requiring a
reboot and without waiting for initialization to complete.
Furthermore, the volume set is protected against disk failures
while initializing. With Foreground Initialization, the
initialization process must be completed before the volume set is
ready for system accesses.
1.3.3.2 Array Roaming
The SATA RAID controllers store RAID configuration information
on the disk drives. The controllers therefore protect the
configuration settings in the event of controller failure. Array
roaming gives administrators the ability to move a complete RAID
set to another system without losing the RAID configuration
information or the data on that RAID set. Therefore, if a server
fails, the RAID set disk drives can be moved to another server with
an Areca RAID controller, and the disks can be inserted in any order.
1.3.3.3 Online Capacity Expansion
Online Capacity Expansion makes it possible to add one or more
physical drives to a volume set without interrupting server
operation, eliminating the need to back up and restore after
reconfiguration of the RAID set. When disks are added to a RAID set,
unused capacity is added to the end of the RAID set. Then, data
on the existing volume sets (residing on the newly expanded
RAID set) is redistributed evenly across all the disks. A
contiguous block of unused capacity is made available on the RAID
set. The unused capacity can be used to create additional volume
sets.

A disk to be added to a RAID set must be in normal mode (not
failed), must be free (not a spare, not in a RAID set, and not
passed through to the host), and must have at least the same
capacity as the smallest disk already in the RAID set.
Capacity expansion is only permitted to proceed if all volumes
on the RAID set are in the normal status. During the expansion
process, the volume sets being expanded can be accessed by
the host system. In addition, volume sets with RAID level 1,
1E, 3, 5 or 6 are protected against data loss in the event of disk
failure(s). In the case of a disk failure, the volume set transitions
from the "migrating" state to the "migrating+degraded" state. When
the expansion is completed, the volume set then transitions to the
"degraded" state. If a global hot spare is present, it further
transitions to the "rebuilding" state.
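The volume-state transitions described above can be modeled as a small lookup table. The state and event names mirror the manual's wording; the sketch itself is illustrative, not controller code.

```python
# Hypothetical model of the volume-state transitions during expansion.
TRANSITIONS = {
    ("migrating", "disk_failure"): "migrating+degraded",
    ("migrating+degraded", "expansion_complete"): "degraded",
    ("degraded", "hot_spare_available"): "rebuilding",
}

def next_state(state, event):
    # Unknown events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "migrating"
for event in ("disk_failure", "expansion_complete", "hot_spare_available"):
    state = next_state(state, event)
print(state)  # rebuilding
```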
The expansion process is illustrated in the following figure.
The SATA RAID controller redistributes the original volume set
over the original and newly added disks, using the same
fault-tolerance configuration. The unused capacity on the expanded
RAID set can then be used to create an additional volume set,
with a different fault-tolerance setting if required by the user.
1.3.3.4 Online RAID Level and Stripe Size Migration
For those who wish to upgrade their RAID capabilities later, a
system with Areca online RAID level/stripe size migration allows
a simplified upgrade to any supported RAID level without having
to reinstall the operating system.
The SATA RAID controllers can migrate both the RAID level and
stripe size of an existing volume set while the server is online
and the volume set is in use. Online RAID level/stripe size
migration can prove helpful during performance tuning activities,
as well as when additional physical disks are added to the SATA
RAID controller. For example, in a system using two drives in
RAID level 1, it is possible to add a single drive, add capacity,
and retain fault tolerance. (Normally, expanding a RAID level
1 array would require the addition of two disks.) A third disk
can be added to the existing RAID logical drive, and the volume
set can then be migrated from RAID level 1 to 5. The result is
parity fault tolerance and double the available capacity, without
taking the system down. A fourth disk could be added to migrate to
RAID level 6. It is only possible to migrate to a higher RAID level
by adding a disk; disks in an existing array cannot be reconfigured
for a higher RAID level without adding a disk.

Online migration is only permitted to begin if all volumes to be
migrated are in the normal mode. During the migration process,
the volume sets being migrated can still be accessed by the host
system. In addition, volume sets with RAID level 1, 1E, 3, 5 or
6 are protected against data loss in the event of disk failure(s).
In the case of a disk failure, the volume set transitions from the
"migrating" state to the "migrating+degraded" state. When the
migration is completed, the volume set transitions to the "degraded"
state. If a global hot spare is present, it further transitions to
the "rebuilding" state.
1.3.3.5 Online Volume Expansion
Performing a volume expansion on the controller is the process
of growing only the size of the latest volume. A more flexible
option is for the array to concatenate an additional drive into the
RAID set and then expand the volumes on the fly. This happens
transparently while the volumes are online, but, at the end of
the process, the operating system will detect free space after
the existing volume.
Windows, NetWare and other advanced operating systems support
volume expansion, which enables you to incorporate the additional
free space within the volume into the operating system partition.
The operating system partition is extended to incorporate the free
space so it can be used by the operating system without creating a
new operating system partition.

You can use the Diskpart.exe command-line utility, included with
Windows Server 2003 or the Windows 2000 Resource Kit, to extend an
existing partition into free space on a dynamic disk. Third-party
software vendors have created utilities that can be used to
repartition disks without data loss. Most of these utilities work
offline; Partition Magic is one such utility.
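A typical Diskpart session for the step described above looks like the transcript below. The volume number is only an example; select the volume that sits immediately before the newly detected free space.

```
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend
```

The `extend` command grows the selected volume into the contiguous unallocated space that follows it.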
1.4 High availability
1.4.1 Global Hot Spares
A Global Hot Spare is an unused, online, available drive that is
ready to replace a failed disk. The Global Hot Spare is one
of the most important features that the SATA RAID controllers
provide to deliver a high degree of fault tolerance. A Global Hot
Spare is a spare physical drive that has been marked as a global hot
spare and therefore is not a member of any RAID set. If a disk
drive used in a volume set fails, the Global Hot Spare will
automatically take its place, and the data previously located on the
failed drive is reconstructed on the Global Hot Spare.

For this feature to work properly, the global hot spare must have
at least the same capacity as the drive it replaces. Global Hot
Spares only work with RAID level 1, 1E, 3, 5, or 6 volume sets.
You can configure up to three global hot spares with the
ARC-11xx/12xx.
The Create Hot Spare option gives you the ability to define a
global hot spare disk drive. To effectively use the global hot
spare feature, you must always maintain at least one drive that
is marked as a global spare.
Important:
The hot spare must have at least the same capacity as the
drive it replaces.
1.4.2 Hot-Swap Disk Drive Support
The SATA controller chip includes a protection circuit that supports
the replacement of SATA hard disk drives without having to shut
down or reboot the system. A removable hard drive tray can deliver
"hot swappable" fault-tolerant RAID solutions at prices much lower
than the cost of conventional SCSI hard disk RAID controllers. This
feature provides advanced fault-tolerant RAID protection and
"online" drive replacement.
1.4.3 Auto Declare Hot-Spare
If a disk drive is brought online into a system operating in
degraded mode, the SATA RAID controllers will automatically declare
the new disk as a spare and begin rebuilding the degraded volume.
The Auto Declare Hot-Spare function requires that the new disk have
at least the capacity of the smallest drive contained within the
volume set in which the failure occurred.

When the system is in normal status, a newly installed drive is
configured as an online free disk. However, the newly installed
drive is automatically assigned as a hot spare if a hot spare disk
was previously consumed by a rebuild and has not yet been replaced.
In this condition, the Auto Declare Hot-Spare status will be cleared
if the RAID subsystem is subsequently powered off and on.

The Hot-Swap function can be used to rebuild disk drives in arrays
with data redundancy, such as RAID levels 1, 1E, 3, 5, and 6.
1.4.4 Auto Rebuilding
If a hot spare is available, the rebuild starts automatically when
a drive fails. The SATA RAID controllers automatically and
transparently rebuild failed drives in the background at
user-definable rebuild rates.

If a hot spare is not available, the failed disk drive must be
replaced with a new disk drive so that the data on the failed drive
can be automatically rebuilt and fault tolerance can be maintained.

The SATA RAID controllers will automatically restart the system
and the rebuild process if the system is shut down or powered off
abnormally during reconstruction.

When a disk is hot swapped, although the system is functionally
operational, the system may no longer be fault tolerant. Fault
tolerance will be lost until the removed drive is replaced and the
rebuild operation is completed.

During the automatic rebuild process, system activity will continue
as normal; however, system performance and fault tolerance will be
affected.
1.4.5 Adjustable Rebuild Priority
Rebuilding a degraded volume incurs a load on the RAID subsystem.
The SATA RAID controllers allow the user to select the rebuild
priority to balance volume access and rebuild tasks appropriately.
The Background Task Priority is a relative indication of how much
time the controller devotes to a background operation, such as
rebuilding or migrating.

The SATA RAID controllers allow the user to choose a task priority
(Ultra Low (5%), Low (20%), Medium (50%), or High (80%)) to balance
volume set access and background tasks appropriately. For highest
array performance, specify the Ultra Low value. As with volume
initialization, a completed volume rebuild does not require a system
reboot.
1.5 High Reliability
1.5.1 Hard Drive Failure Prediction
In an effort to help users avoid data loss, disk manufacturers now
incorporate logic into their drives that acts as an "early warning
system" for pending drive problems. This system is called
S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology).
The disk's integrated controller works with multiple sensors to
monitor various aspects of the drive's performance, determines from
this information whether the drive is behaving normally, and makes
status information available to RAID controller firmware that
probes the drive.

S.M.A.R.T. can often predict a problem before failure occurs.
The controllers will recognize a S.M.A.R.T. error code and notify
the administrator of an impending hard drive failure.
1.5.2 Auto Reassign Sector
Under normal operation, even initially defect-free drive media can
develop defects. This is a common phenomenon. The bit density
and rotational speed of disks increase every year, and so does
the potential for problems. Usually a drive can internally remap
bad sectors without external help, using cyclic redundancy check
(CRC) checksums stored at the end of each sector.
SATA drives perform automatic defect reassignment for both read
and write errors. Writes are always completed: if a location
to be written is found to be defective, the drive will automatically
relocate that write command to a new location and map out the
defective location. If there is a recoverable read error, the
correct data will be transferred to the host, and that location will
be tested by the drive to be certain the location is not defective.
If it is found to have a defect, the data will be automatically
relocated, and the defective location is mapped out to prevent
future write attempts.
In the event of an unrecoverable read error, the error will be
reported to the host and the location flagged as potentially
defective. A subsequent write to that location will initiate a
sector test and relocation should that location have a defect. Auto
Reassign Sector does not affect disk subsystem performance because
it runs as a background task. Auto Reassign Sector is suspended
when the operating system makes a request.
1.5.3 Consistency Check
A consistency check is a process that verifies the integrity of
redundant data. For example, performing a consistency check of a
mirrored drive assures that the data on both drives of the mirrored
pair is exactly the same. To verify RAID 3, 5 or 6 redundancy, a
consistency check reads all associated data blocks, computes parity,
reads the stored parity, and verifies that the computed parity
matches the stored parity.

Consistency checks are very important because they detect and
correct parity errors and bad disk blocks on the drive. A
consistency check forces every block on a volume to be read, and any
bad blocks are marked; those blocks are not used again. This is
critical because a bad disk block can prevent a disk rebuild from
completing. We strongly recommend that you run consistency checks
on a regular basis, at least once per week. Note that consistency
checks degrade performance, so you should run them when the system
load can tolerate it.
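The parity comparison at the heart of a consistency check can be sketched as follows. This is an illustrative model of the XOR-parity case, not the controller's implementation; the function names are hypothetical.

```python
from functools import reduce

# Illustrative consistency check for an XOR-parity stripe: recompute
# parity from the data strips and compare it with the stored parity.
def xor_blocks(blocks):
    # Byte-wise XOR across equal-length blocks.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def stripe_is_consistent(data_blocks, stored_parity):
    return xor_blocks(data_blocks) == stored_parity

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_blocks(data)
print(stripe_is_consistent(data, parity))       # True
print(stripe_is_consistent(data, b"\x00\x00"))  # False: parity mismatch
```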
1.6 Data Protection
1.6.1 BATTERY BACKUP
The SATA RAID controllers are armed with a Battery Backup Module
(BBM). While an Uninterruptible Power Supply (UPS) protects most
servers from power fluctuations or failures, a BBM provides an
additional level of protection. In the event of a power failure, the
BBM supplies power to retain data in the RAID controller's cache,
thereby permitting any potentially dirty data in the cache to be
flushed out to secondary storage when power is restored.

The batteries in the BBM are recharged continuously through a
trickle-charging process whenever the system power is on. The
batteries protect data in a failed server for up to three or four
days, depending on the size of the memory module. Under normal
operating conditions, the batteries last for three years before
replacement is necessary.
1.6.2 RECOVERY ROM
The SATA RAID controller firmware is stored on the flash ROM and
is executed by the I/O processor. The firmware can also be updated
through the PCI-X/PCIe bus port or Ethernet port (if equipped)
without the need to replace any hardware chips. During the
controller firmware flash upgrade process, it is possible for a
problem to occur, resulting in corruption of the controller
firmware. With the Redundant Flash Image feature, the controller
will revert to the last known good version of firmware and continue
operating. This reduces the risk of system failure due to a
firmware crash.
1.7 Understanding RAID
RAID is an acronym for Redundant Array of Independent Disks. It
is an array of multiple independent hard disk drives that provides
high performance and fault tolerance. The SATA RAID controller
implements several levels of the Berkeley RAID technology.
An appropriate RAID level is selected when the volume sets are
defined or created. This decision should be based on the desired
disk capacity, data availability (fault tolerance or redundancy),
and disk performance. The following section discusses the RAID
levels supported by the SATA RAID controller.

The SATA RAID controller makes the RAID implementation and
the disks' physical configuration transparent to the host operating
system. This means that the host operating system drivers and
software utilities are not affected, regardless of the RAID level
selected. Correct installation of the disk array and the controller
requires a proper understanding of RAID technology and its concepts.
1.7.1 RAID 0
RAID 0, also referred to as striping, writes stripes of data across
multiple disk drives instead of just one disk drive. RAID 0 does
not provide any data redundancy but does offer the best high-speed
data throughput. RAID 0 breaks up data into smaller blocks and then
writes a block to each drive in the array. Disk striping enhances
performance because multiple drives are accessed simultaneously, but
the reliability of RAID level 0 is lower because the entire array
will fail if any one disk drive fails, due to the lack of
redundancy.
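The round-robin placement of RAID 0 blocks can be sketched as a simple modular mapping. This is an illustrative model of striping in general, not this controller's exact layout.

```python
# Illustrative RAID 0 placement: logical block i lands on disk
# (i mod N) at stripe row (i div N), for N disks in the array.
def raid0_locate(logical_block, num_disks):
    return logical_block % num_disks, logical_block // num_disks

# With 4 disks, logical block 5 sits on disk 1, stripe row 1.
print(raid0_locate(5, 4))  # (1, 1)
```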
1.7.2 RAID 1
RAID 1 is also known as "disk mirroring": data written to one disk
drive is simultaneously written to another disk drive. Read
performance may be enhanced if the array controller can access both
members of a mirrored pair in parallel. During writes, there
will be a minor performance penalty when compared to writing
to a single disk. If one drive fails, all data (and software
applications) are preserved on the other drive. RAID 1 offers
extremely high data reliability, but at the cost of doubling the
required data storage capacity.
1.7.3 RAID 1E
RAID 1E is a combination of RAID 0 and RAID 1, combining striping
with disk mirroring. RAID level 1E combines the fast performance of
level 0 with the data redundancy of level 1. In this configuration,
data is distributed across several disk drives, similar to level 0,
and is then duplicated to another set of drives for data protection.
While RAID 1E has traditionally been implemented using an even
number of disks, some hybrids can use an odd number of disks as
well. The illustration shows an example of a hybrid RAID 1E array
comprised of five disks: A, B, C, D and E. In this configuration,
each strip is mirrored on an adjacent disk with wrap-around. In
fact, this scheme, or a slightly modified version of it, is often
referred to as RAID 1E and was originally proposed by IBM. When the
number of disks comprising a RAID 1E array is even, the striping
pattern is identical to that of a traditional RAID 10, with each
disk being mirrored by exactly one other unique disk, and all the
characteristics of a traditional RAID 10 apply. Areca RAID 1E
offers a little more flexibility in choosing the number of disks
that can be used to constitute an array: the number can be even or
odd.
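The adjacent-disk mirroring with wrap-around described above can be sketched as a shift-by-one mapping. This is an illustrative model of the scheme, not the controller's exact layout.

```python
# Illustrative RAID 1E placement (works for odd or even disk counts):
# the strip on disk d is mirrored on disk (d + 1) mod N.
def raid1e_placement(strip_index, num_disks):
    primary = strip_index % num_disks
    mirror = (primary + 1) % num_disks
    return primary, mirror

# Five disks (0..4, i.e. A..E): strip 4 sits on disk E and its
# mirror wraps around to disk A.
print(raid1e_placement(4, 5))  # (4, 0)
```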
1.7.4 RAID 3
RAID 3 provides disk striping and complete data redundancy
through a dedicated parity drive. RAID 3 breaks up data into
smaller blocks, calculates parity by performing an exclusive-OR
on the blocks, and then writes the blocks to all but one drive in
the array. The parity data created during the exclusive-OR is then
written to the last drive in the array. If a single drive fails,
data is still available by computing the exclusive-OR of the
contents of the corresponding strips of the surviving member disks.
RAID 3 is best for applications that require very fast data transfer
rates or long data blocks.
1.7.5 RAID 5
RAID 5 is sometimes called striping with parity at the block level.
In RAID 5, the parity information is distributed across all of the
drives in the array rather than being concentrated on a dedicated
parity disk. If one drive in the system fails, the parity
information can be used to reconstruct the data from that drive. All
drives in the array can be used for seek operations at the same
time, greatly increasing the performance of the RAID system. This
relieves the write bottleneck that characterizes RAID 4 and is the
primary reason that RAID 5 is more often implemented in RAID arrays.
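One common way to distribute parity, the left-symmetric rotation, can be sketched as below. This layout is illustrative; the controller's actual placement may differ.

```python
# Illustrative left-symmetric RAID 5 rotation: the parity strip moves
# to a different disk on each stripe row, so no single disk becomes a
# write bottleneck (the flaw of RAID 4's dedicated parity disk).
def raid5_parity_disk(stripe_row, num_disks):
    return (num_disks - 1 - stripe_row) % num_disks

# With 4 disks, parity rotates backward through the array.
print([raid5_parity_disk(r, 4) for r in range(4)])  # [3, 2, 1, 0]
```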
1.7.6 RAID 6
RAID 6 provides the highest reliability but is not yet widely used.
It is similar to RAID 5, but it performs two different parity
computations (or the same computation on overlapping subsets of
the data). RAID 6 can offer fault tolerance greater than RAID 1 or
RAID 5 but consumes only the capacity of two disk drives for
distributed parity data. RAID 6 is an extension of RAID 5 that uses
a second, independent distributed parity scheme. Data is striped at
the block level across a set of drives, and then a second set of
parity is calculated and written across all of the drives.
Summary of RAID Levels
The SATA RAID controller supports RAID Level 0, 1, 1E, 3, 5 and 6.
The table below provides a summary of RAID levels.
Features and Performance

RAID 0
  Description: Also known as striping. Data is distributed across multiple drives in the array. There is no data protection.
  Min. drives: 1
  Data reliability: No data protection.
  Data transfer rate: Very high.
  I/O request rates: Very high for both reads and writes.

RAID 1
  Description: Also known as mirroring. All data is replicated on N separate disks (N is almost always 2). This is a high-availability solution, but due to the 100% duplication it is also costly: half of the drive capacity in the array is devoted to mirroring.
  Min. drives: 2
  Data reliability: Lower than RAID 6; higher than RAID 3 or 5.
  Data transfer rate: Reads are higher than a single disk; writes are similar to a single disk.
  I/O request rates: Reads are twice as fast as a single disk; writes are similar to a single disk.

RAID 1E
  Description: A combination of striping and mirroring. Data is striped across the disks and each strip is mirrored on an adjacent disk with wrap-around, so the array can use an even or odd number of disks.
  Min. drives: 3
  Data reliability: Lower than RAID 6; higher than RAID 3 or 5.
  Data transfer rate: Transfer rates are more similar to RAID 1 than RAID 0.
  I/O request rates: Reads are twice as fast as a single disk; writes are similar to a single disk.

RAID 3
  Description: Also known as bit-interleaved parity. Data and parity information are subdivided and distributed across all disks. Parity data consumes the capacity of one disk drive and is normally stored on a dedicated parity disk.
  Min. drives: 3
  Data reliability: Lower than RAID 1, 1E or 6; higher than a single drive.
  Data transfer rate: Reads are similar to RAID 0; writes are slower than a single disk.
  I/O request rates: Reads are close to twice as fast as a single disk; writes are similar to a single disk.

RAID 5
  Description: Also known as block-interleaved distributed parity. Data and parity information are subdivided and distributed across all disks. Parity data consumes the capacity of one disk drive.
  Min. drives: 3
  Data reliability: Lower than RAID 1, 1E or 6; higher than a single drive.
  Data transfer rate: Reads are similar to RAID 0; writes are slower than a single disk.
  I/O request rates: Reads are similar to RAID 0; writes are slower than a single disk.

RAID 6
  Description: Provides the highest reliability. Similar to RAID 5, but performs two different parity computations. Offers fault tolerance greater than RAID 1 or RAID 5. Parity data consumes the capacity of two disk drives.
  Min. drives: 4
  Data reliability: Highest reliability.
  Data transfer rate: Reads are similar to RAID 0; writes are slower than a single disk.
  I/O request rates: Reads are similar to RAID 0; writes are slower than a single disk.
2. Hardware Installation
This section describes the procedures for installing the SATA RAID controllers.
2.1 Before You Begin Installation

Thank you for purchasing the SATA RAID controller as your RAID data
storage and management system. This user guide gives simple
step-by-step instructions for installing and configuring the SATA
RAID controller. To ensure personal safety and to protect your
equipment and data, carefully read the information following the
package contents list before you begin installing.
Package Contents
If your package is missing any of the items listed below, contact
your local dealer before proceeding with installation (disk drives and disk mounting brackets are not included):
ARC-11xx Series SATA RAID Controller
• 1 x PCI-X SATA RAID Controller in an ESD-protective bag
• 4/8/12/16/24 x SATA interface cables (one per port)
• 1 x Installation CD
• 1 x User Manual
ARC-11xxML/12xxML Series SATA RAID Controller
• 1 x PCI-X (ARC-11xxML) or PCI-Express (ARC-12xxML) SATA RAID Controller in an ESD-protective bag
• 1 x Installation CD
• 1 x User Manual
ARC-12xx Series SATA RAID Controller
• 1 x PCI-Express SATA RAID Controller in an ESD-protective bag
• 4/8/12/16/24 x SATA interface cables (one per port)
• 1 x Installation CD
• 1 x User Manual
2.2 Board Layout
Follow the instructions below to install a PCI RAID Card into your
PC / Server.
Figure 2-1, ARC-1110/1120 (4/8-port PCI-X SATA RAID Controller)
Figure 2-2, ARC-1210/1220 (4/8-port PCI-Express SATA RAID Controller)
Figure 2-3, ARC-1110ML/1120ML (4/8-port PCI-X SATA RAID Controller)
Figure 2-4, ARC-1210ML/1220ML (4/8-port PCI-Express SATA RAID Controller)
Figure 2-5, ARC-1130/1160 (12/16-port PCI-X SATA RAID Controller)
Figure 2-6, ARC-1130ML/1160ML (12/16-port PCI-X SATA RAID
Controller)
Figure 2-7, ARC-1230/1260 (12/16-port PCI-Express SATA RAID Controller)
Figure 2-8, ARC-1170 (24-port PCI-X SATA RAID Controller)
Figure 2-9, ARC-1280 (24-port PCI-Express SATA RAID Controller)
An ESD grounding strap or mat is required. Also required are standard hand tools to open your system’s case.
System Requirements
The controller can be installed in a universal PCI slot and requires a motherboard that meets the following requirements.
The ARC-11xx series requires one of the following:
• Compliance with PCI Revision 2.3, 32/64-bit, 33/66 MHz, 3.3V.
• Compliance with PCI-X, 32/64-bit, 66/100/133 MHz, 3.3V.
The ARC-12xx series requires:
• Compliance with PCI-Express x8.
The SATA RAID controller may be connected to up to 4, 8, 12, 16, or 24 SATA II hard drives using the supplied cables.
Optional cables are required to connect any drive activity LEDs and
fault LEDs on the enclosure to the SATA RAID controller.
Installation Tools
The following items may be needed to assist with installing the
SATA RAID controller into an available PCI expansion slot.
• Small screwdriver
• Host system hardware manuals and manuals for the disk or
enclosure being installed.
Personal Safety Information
To ensure personal safety as well as the safety of the equipment:
• Always wear a grounding strap or work on an ESD-protective
mat.
• Before opening the system cabinet, turn off power switches and
unplug the power cords. Do not reconnect the power cords until
you have replaced the covers.
Warning:
High voltages may be found inside computer equipment. Before installing any of the hardware in this package or removing the protective covers of any computer equipment, turn
off power switches and disconnect power cords. Do not reconnect the power cords until you have replaced the covers.
Electrostatic Discharge
Static electricity can cause serious damage to the electronic components on this SATA RAID controller. To avoid damage caused by
electrostatic discharge, observe the following precautions:
• Don’t remove the SATA RAID controller from its anti-static packaging until you are ready to install it into a computer case.
• Handle the SATA RAID controller by its edges or by the metal mounting brackets at each end.
• Before you handle the SATA RAID controller in any way, touch a
grounded, anti-static surface, such as an unpainted portion of the
system chassis, for a few seconds to discharge any built-up static
electricity.
2.3 Installation
Follow the instructions below to install a SATA RAID controller into
your PC / Server.
Step 1. Unpack
Unpack and remove the SATA RAID controller from the package.
Inspect it carefully, if anything is missing or damaged, contact your
local dealer.
Step 2. Power PC/Server Off
Turn off the computer and remove the AC power cord. Remove the system's cover. See the computer system documentation for instructions.
Step 3. Install the PCI RAID Cards
To install the SATA RAID controller, remove the mounting screw and existing bracket from the rear panel behind the selected PCI slot. Align the gold-fingered edge on the card with the selected PCI expansion slot. Press down gently but firmly to ensure that the card is properly seated in the slot, as shown in Figure 2-11. Next, screw the bracket into the computer chassis. ARC-11xx controllers fit in both PCI (32-bit/3.3V) and PCI-X slots; they achieve the best performance when installed in a 64-bit/133 MHz PCI-X slot. ARC-12xx controllers require a PCI-Express x8 slot.
Figure 2-11, Insert SATA RAID controller into a PCI-X slot
Step 4. Mount the Cages or Drives
Remove the front bezel from the computer chassis and install the cages or SATA drives in the computer chassis. Load the drives into the drive trays if cages are installed. Be sure that power is connected to either the cage backplane or the individual drives.
Figure 2-12, Mount Cages & Drives
Step 5. Connect the SATA cable
Model ARC-11XX and ARC-12XX controllers have dual-layer SATA
internal connectors. If you have not already connected your SATA
cables, use the cables included with your kit to connect the controller to the SATA hard drives.
The cable connectors are all identical, so it does not matter which
end you connect to your controller, SATA hard drive, or cage backplane SATA connector.
Figure 2-13, SATA Cable
Note:
The SATA cable connectors must match your HDD cage. For example: channel 1 of the RAID card connects to channel 1 of the HDD cage, channel 2 of the RAID card connects to channel 2 of the HDD cage, and so on.
Step 5-2. Connect the Multi-lane cable
Model ARC-11XXML controllers have Multi-lane internal connectors, each of which can support up to four SATA drives. These adapters can be installed in a server RAID enclosure with a Multi-lane connector (SFF-8470) backplane. Multi-lane cables are not included in the ARC-11XXML package.
If you have not already connected your Multi-lane cables, use the cables included with your enclosure to connect your controller to the Multi-lane connector backplane. The type of cable will depend on what enclosure you have. The following diagram shows one example of a Multi-lane cable.
Figure 2-14, Multi-lane Cable
Step 5-3. Connect the Mini SAS 4i to 4*SATA cable
Models ARC-1231ML/1261ML/1280ML have Mini SAS 4i (SFF-8087) internal connectors, each of which can support up to four SATA drives. These adapters can be installed in a server RAID enclosure with a standard SATA connector backplane. Mini SAS 4i to SATA cables are included in the ARC-1231ML/1261ML/1280ML package. The following diagram shows the Mini SAS 4i to 4*SATA cables.
Figure 2-15, Mini SAS 4i to 4*SATA
For the sideband cable signals, please refer to page 51 for the SGPIO bus.
Step 5-4. Connect the Mini SAS 4i to Multi-lane cable
Models ARC-1231ML/1261ML/1280ML have Mini SAS 4i internal connectors, each of which can support up to four SATA drives. These controllers can be installed in a server RAID enclosure with a Multi-lane connector (SFF-8470) backplane. Multi-lane cables are not included in the ARC-12XXML package.
If you have not already connected your Mini SAS 4i to Multi-lane cables, buy Mini SAS 4i to Multi-lane cables that fit your enclosure, and connect your controller to the Multi-lane connector backplane. The type of cable will depend on what enclosure you have. The following diagram shows one example of a Mini SAS 4i to Multi-lane cable.
Figure 2-16, Mini SAS 4i to Multi-lane
Step 5-5. Connect the Mini SAS 4i to Mini SAS 4i cable
Models ARC-1231ML/1261ML/1280ML have Mini SAS 4i internal connectors, each of which can support up to four SATA drives. These adapters can be installed in a server RAID enclosure with a Mini SAS 4i internal connector backplane. Mini SAS 4i cables are not included in the ARC-12XXML package.
This Mini SAS 4i cable has eight signal pins to support four SATA drives and six pins for the SGPIO (Serial General Purpose Input/Output) side-band signals. The SGPIO bus is used for efficient LED management and for sensing drive Locate status. Please see page 51 for the details of the SGPIO bus.
Figure 2-17, Mini SAS 4i to Mini SAS 4i
Step 6. Install the LED cable (optional)
ARC-1XXX Series Fault/Activity Header Intelligent Electronics
Schematic.
The intelligent LED controller outputs a low-level pulse to determine if status LEDs are attached to pin sets 1 and 2. This allows automatic controller configuration of the LED output. If the logic level differs between the first 2 sets of the HDD LED header (LED attached to Set 1 but not Set 2), the controller will assign the first HDD LED header as the global indicator connector. Otherwise, each LED output will show only individual drive status.
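The auto-configuration rule above can be summarized in a small sketch (the function is illustrative only; the real decision is made by the controller's LED electronics):

```python
def led_output_mode(set1_attached, set2_attached):
    """Mirror the detection rule described above: an LED on the first HDD
    LED header set but not the second makes set 1 a global indicator;
    any other combination leaves each output showing individual status."""
    if set1_attached and not set2_attached:
        return "global"
    return "individual"

print(led_output_mode(True, False))  # global
print(led_output_mode(True, True))   # individual
```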
The SATA RAID controller provides four kinds of LED status connectors.
A: Global indicator connector, which lights when any drive is active.
B: Individual LED indicator connector, for each drive channel.
C: I2C connector, for SATA proprietary backplane enclosure.
D: SGPIO connector, for SAS backplane enclosure.
The following diagrams and descriptions explain each type of connector.
Note:
A cable for the global indicator comes with your computer
system. Cables for the individual drive LEDs may come with
a drive cage, or you may need to purchase them.
A: Global Indicator Connector
If the system will use only a single global indicator, attach the global indicator cable to the 2-pin HDD LED connector. The following diagrams show the connector and pin locations.
Figure 2-18, ARC-1110/1120/1210/1220 global LED connection for Computer Case.
Figure 2-19, ARC-1130/1160/1230/1260 global LED connection for Computer Case.
Figure 2-20, ARC-1170
global LED connection
for Computer Case.
Figure 2-21, ARC-1280
global LED connection for
Computer Case.
Figure 2-22, ARC-1231ML/
1261ML/1280ML global LED
connection for Computer
Case.
B: Individual LED indicator connector
Connect the cables for the drive activity LEDs and fault LEDs between the backplane of the cage and the respective connector on
the SATA RAID controller. The following describes the fault/activity LED.
LED: Activity LED
Normal status: When the activity LED is illuminated, there is I/O activity on that disk drive. When the LED is dark, there is no activity on that disk drive.
Problem indication: N/A

LED: Fault LED
Normal status: When the fault LED is solid illuminated, there is no disk present. When the fault LED is off, that disk is present and status is normal.
Problem indication: When the red LED blinks slowly (2 times/sec), that disk drive has failed and should be hot-swapped immediately. When the activity LED is illuminated and the red LED blinks quickly (10 times/sec), there is rebuilding activity on that disk drive.
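The fault-LED states described above can be encoded as a small lookup (the state names and helper function are illustrative, not firmware constants):

```python
# Fault-LED states from the table above; the key is (mode, blink rate in Hz).
FAULT_LED_STATES = {
    ("solid", None): "no disk present",
    ("off", None): "disk present, status normal",
    ("blink", 2): "drive failed - hot-swap immediately",
    ("blink", 10): "rebuilding (activity LED also illuminated)",
}

def fault_status(mode, rate_hz=None):
    """Translate an observed fault-LED pattern into its meaning."""
    return FAULT_LED_STATES.get((mode, rate_hz), "unknown")

print(fault_status("blink", 2))  # drive failed - hot-swap immediately
```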
Figure 2-23, ARC-1110/1120/1210/1220 individual LED indicator connector, for each drive channel.
Figure 2-24, ARC-1130/1160/1230/1260 individual LED indicator connector, for each drive channel.
Figure 2-25, ARC-1170 individual LED indicator connector, for each drive channel.
Figure 2-26, ARC-1280 individual LED indicator connector, for each drive channel.
Figure 2-27, ARC-1231ML/1261ML/1280ML individual LED indicator connector, for each drive channel.
C: I2C Connector
You can also connect the I2C interface to a proprietary SATA
backplane enclosure. This can reduce the number of activity LED
and/or fault LED cables. The I2C interface can also cascade to another SATA backplane enclosure for the additional channel status
display.
Figure 2-28, Activity/Fault LED I2C connector connected between
SATA RAID Controller & SATA HDD Cage backplane.
Figure 2-29, Activity/Fault LED I2C connector connected between
SATA RAID Controller & 4 SATA HDD backplane.
Note:
Ci-Design has supported this feature in its 4-port 12-633605A SATA II backplane.
The following is the I2C signal name description for the LCD & fault/activity LED.
Pin 1: Power (+5V)                 Pin 2: GND
Pin 3: LCD module interrupt        Pin 4: Fault/activity interrupt
Pin 5: LCD module serial data      Pin 6: Fault/activity clock
Pin 7: Fault/activity serial data  Pin 8: LCD module clock
D: SGPIO bus
The preferred I/O connector for server backplanes is the Mini SAS 4i internal serial-attachment connector. This connector has eight signal pins to support four SATA drives and six pins for the SGPIO (Serial General Purpose Input/Output) side-band signals. The SGPIO bus is used for efficient LED management and for sensing drive Locate status. See SFF-8485 for the specification of the SGPIO bus.
The number of drives supported can be increased, by a factor of four, by adding similar backplanes, to a maximum of 24 drives (6 backplanes).
LED Management: The backplane may contain LEDs to indicate drive status. Light from the LEDs could be transmitted to the outside of the server by using light pipes mounted on the SAS drive tray. A small EPLD microcontroller on the backplane, connected via the SGPIO bus to an ARC-1231ML/1261ML/1280ML SATA RAID controller, could control the LEDs. Activity: blinking 5 times/second. Fault: solid illuminated.
Drive Locate Circuitry: The location of a drive may be detected by sensing the voltage level of one of the pre-charge pins before and after a drive is installed. Fault (red): blinking 2 times/second.
The following defines the SGPIO assignments for the Mini SAS 4i connector on the ARC-1231ML/1261ML/1280ML.
SideBand0: SClock (clock signal)
SideBand1: SLoad (last clock of a bit stream)
SideBand2: Ground
SideBand3: Ground
SideBand4: SDataOut (serial data output bit stream)
SideBand5: SDataIn (serial data input bit stream)
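The sideband assignments above, together with the four-drives-per-backplane scaling rule, can be captured in a short sketch (the dictionary and helper are illustrative):

```python
# SGPIO sideband assignments for the Mini SAS 4i connector, as listed above.
SGPIO_SIDEBAND = {
    "SideBand0": "SClock (clock signal)",
    "SideBand1": "SLoad (last clock of a bit stream)",
    "SideBand2": "Ground",
    "SideBand3": "Ground",
    "SideBand4": "SDataOut (serial data output bit stream)",
    "SideBand5": "SDataIn (serial data input bit stream)",
}

def max_drives(backplanes):
    """Each similar backplane adds four drives, up to six backplanes."""
    return 4 * min(backplanes, 6)

print(max_drives(6))  # 24
```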
The following defines the sideband connector, which works with the Areca sideband cable. The sideband header is located on the backplane. For SGPIO to work properly, please connect the Areca 8-pin sideband cable to the sideband header as shown above. See the table for pin definitions.
Step 7. Re-check the SATA HDD LED and Fault LED Cable Connections
Be sure that the proper failed-drive channel information is displayed by the Fault and HDD Activity LEDs. An improper connection will tell the user to "hot swap" the wrong drive. This would remove the wrong disk (one that is functioning properly) from the controller and could result in failure and loss of system data.
Step 8. Power up the System
Thoroughly check the installation, reinstall the computer cover, and
reconnect the power cord cables. Turn on the power switch at the
rear of the computer (if equipped) and then press the power button
at the front of the host computer.
Step 9. Configure the Volume Set
The SATA RAID controller configures RAID functionality through the McBIOS RAID manager. Please refer to Chapter 3, McBIOS RAID Manager, for details regarding configuration. The RAID controller can also be configured through the McRAID storage manager software utility, with the ArcHttp proxy server installed, through the on-board LAN port or LCD module. For this option, please refer to Chapter 6, Web Browser-Based Configuration, or the LCD configuration menu.
Step 10. Install the controller driver
For a new system:
• Driver installation usually takes place as part of operating system installation. Please refer to Chapter 4, Driver Installation, for the detailed installation procedure.
In an existing system:
• Install the controller driver into the existing operating system. Please refer to Chapter 4, Driver Installation, for the detailed installation procedure.
Note:
For the newest driver releases, please download from http://www.areca.com.tw
Step 11. Install the ArcHttp Proxy Server
The SATA RAID controller firmware has an embedded web-browser RAID manager, which the ArcHttp proxy driver enables. The browser-based RAID manager provides creation, management, and monitoring of the SATA RAID controller. Please refer to Chapter 5 for the detailed ArcHttp proxy server installation. For the SNMP agent function, please refer to Appendix C.
Step 12. Determining the Boot Sequence
The SATA RAID controller is a bootable controller. If your system already contains a bootable device with an installed operating system, you can set up your system to boot a second operating system from the new controller. To add a second bootable controller, you may need to enter setup and change the device boot sequence so that the SATA RAID controller heads the list. If the system BIOS setup does not allow this change, your system may not be configurable to allow the SATA RAID controller to act as a second boot device.
Summary of the Installation
The flow chart below describes the installation procedures for the SATA RAID controller. These procedures include hardware installation, the creation and configuration of a RAID volume through the McBIOS, OS installation, and installation of the SATA RAID controller software.
The software components configure and monitor the SATA RAID controller via the ArcHttp Proxy Server.
Configuration Utility: McBIOS RAID Manager
Operating System Supported: OS-independent

Configuration Utility: McRAID Storage Manager (via ArcHttp proxy server)
Operating System Supported: Windows 2000/XP/2003, Linux, FreeBSD, NetWare, UnixWare, Solaris and Mac

Configuration Utility: SAP Monitor (Single Admin Portal to scan for multiple RAID units in the network, via ArcHttp proxy server)
Operating System Supported: Windows 2000/XP/2003

Configuration Utility: SNMP Manager Console Integration
Operating System Supported: Windows 2000/XP/2003, Linux and FreeBSD
McRAID Storage Manager
Before launching the firmware-embedded web server (McRAID storage manager), you need to install the ArcHttp proxy server on your server system, or access it through the on-board LAN port (if equipped). If you need additional information about installation and start-up of this function, see the McRAID Storage Manager section in Chapter 6.
SNMP Manager Console Integration
• Out-of-Band - Using Ethernet port (12/16/24-port controllers)
Before launching the firmware-embedded SNMP agent in the server, you first need to enable the firmware-embedded SNMP agent function on your SATA RAID controller. If you need additional information about installation and start-up of this function, see section 6.8.4, SNMP Configuration (12/16/24-port).
• In-Band - Using PCI-X/PCIe bus (4/8/12/16/24-port controllers)
Before launching the SNMP agent in the server, you first need to enable the firmware-embedded SNMP community configuration and install the Areca SNMP extension agent on your server system. If you need additional information about installation and start-up of this function, see the SNMP Operation & Installation section in Appendix C.
Single Admin Portal (SAP) Monitor
This utility can scan for multiple RAID units on the network and monitor the controller set status. It also includes a disk stress test utility to identify marginal-spec disks before the RAID unit is put into a production environment.
For additional information, see the utility manual on the packaged CD-ROM or download it from the web site http://www.areca.com.tw
BIOS CONFIGURATION
3. McBIOS RAID Manager
The system mainboard BIOS automatically configures the following SATA RAID controller parameters at power-up:
• I/O Port Address
• Interrupt channel (IRQ)
• Adapter ROM Base Address
Use McBIOS to further configure the SATA RAID controller to suit your server hardware and operating system.
3.1 Starting the McBIOS RAID Manager
This section explains how to use the McBIOS Setup Utility to configure your RAID system. The BIOS Setup Utility is designed to be user-friendly. It is a menu-driven program, residing in the firmware, which allows you to scroll through various menus and submenus and select among the predetermined configuration options.
When starting a system with a SATA RAID controller installed, it will display the following message on the monitor during the start-up sequence (after the system BIOS start-up screen but before the operating system boots):
I/O-Port=F3000000h, IRQ=11, BIOS ROM mapped at D000:0h
No BIOS disk Found, RAID Controller BIOS not installed!
Press <Tab/F6> to enter SETUP menu. 9 second(s) left <ESC to Skip>..
The McBIOS configuration manager message remains on your screen for about nine seconds, giving you time to start the configuration menu by pressing Tab or F6. If you do not wish to enter the configuration menu, press <ESC> to skip configuration immediately.
When activated, the McBIOS window appears, showing a selection dialog box listing the SATA RAID controllers that are installed in the system.
The legend at the bottom of the screen shows you which keys are enabled for the windows.
ArrowKey Or AZ:Move Cursor, Enter: Select, ** Select & Press F10 to Reboot**
Use the Up and Down arrow keys to select the adapter you want to configure. While the desired adapter is highlighted, press the <Enter> key to enter the Main Menu of the McBIOS Configuration Utility.
Controller I/O Port:F3000000h, F2: Select Controller, F10: Reboot System
Areca Technology Corporation RAID Controller
Main Menu
Quick Volume/Raid Setup
Raid Set Function
Volume Set Function
Physical Drives
Raid System Function
Ethernet Configuration
View System Events
Clear Event Buffer
Hardware Monitor
System information
Verify Password
Note:
The manufacturer's default password is set to 0000; this password can be modified by selecting Change Password in the Raid System Function section.
The McBIOS configuration utility is firmware-based and is used to configure RAID sets and volume sets. Because the utility resides in the SATA RAID controller firmware, operation is independent of any operating systems on your computer. This utility can be used to:
• Create RAID sets,
• Expand RAID sets,
• Add physical drives,
• Define volume sets,
• Modify volume sets,
• Modify RAID level/stripe size,
• Define pass-through disk drives,
• Modify system functions, and
• Designate drives as hot spares.
3.3 Configuring RAID Sets and Volume Sets
You can configure RAID sets and volume sets with the McBIOS RAID manager automatically, using Quick Volume/Raid Setup, or manually, using the Raid Set/Volume Set Function. Each configuration method requires a different level of user input. The general flow of operations for RAID set and volume set configuration is:
Step 1: Designate hot spares/pass-through drives (optional).
Step 2: Choose a configuration method.
Step 3: Create RAID sets using the available physical drives.
Step 4: Define volume sets using the space available in the RAID set.
Step 5: Initialize the volume sets and use the volume sets (as logical drives) in the host OS.
3.4 Designating Drives as Hot Spares
Any unused disk drive that is not part of a RAID set can be designated as a hot spare. The "Quick Volume/Raid Setup" configuration will add the spare disk drive and automatically display the appropriate RAID levels from which the user can select. For the "Raid Set Function" configuration option, the user can use the "Create Hot Spare" option to define the hot spare disk drive.
When a hot spare disk drive is being created using the "Create Hot Spare" option (in the Raid Set Function), all unused physical devices connected to the current controller appear.
Choose the target disk by selecting the appropriate check box. Press the Enter key to select a disk drive, and press Yes in the Create Hot Spare dialog to designate it as a hot spare.
3.5 Using Quick Volume/Raid Setup Configuration
Quick Volume/Raid Setup Configuration collects all available drives and includes them in a RAID set. The RAID set you create is associated with exactly one volume set. You will only be able to modify the default RAID level, the stripe size, and the capacity of the new volume set. Designating drives as hot spares is also possible in the RAID level selection option. The volume set default settings will be:
Volume Name: Volume Set # 00
SCSI Channel/SCSI ID/SCSI LUN: 0/0/0
Cache Mode: Write Back
Tag Queuing: Yes
The default setting values can be changed after configuration is complete. Follow the steps below to create arrays using the Quick Volume/Raid Setup method:
Step 1: Choose Quick Volume/Raid Setup from the main menu. The available RAID levels with hot spare for the current volume set drive are displayed.
Step 2: It is recommended that you use drives of the same capacity in a specific array. If you use drives with different capacities in an array, all drives in the RAID set will be set to the capacity of the smallest drive in the RAID set. The number of physical drives in a specific array determines which RAID levels can be implemented in the array.
RAID 0 requires 1 or more physical drives.
RAID 1 requires at least 2 physical drives.
RAID 1 + Spare requires at least 3 physical drives.
RAID 1E requires at least 4 physical drives.
RAID 3 requires at least 3 physical drives.
RAID 5 requires at least 3 physical drives.
RAID 3 + Spare requires at least 4 physical drives.
RAID 5 + Spare requires at least 4 physical drives.
RAID 6 requires at least 4 physical drives.
RAID 6 + Spare requires at least 5 physical drives.
Highlight the desired RAID level for the volume set and press the Enter key to confirm.
Step 3: The capacity for the current volume set is entered after highlighting the desired RAID level and pressing the Enter key. The capacity for the current volume set is displayed. Use the UP and DOWN arrow keys to set the capacity of the volume set and press the Enter key to confirm. The available stripe sizes for the current volume set are then displayed.
Step 4: Use the UP and DOWN arrow keys to select the current volume set stripe size and press the Enter key to confirm. This parameter specifies the size of the stripes written to each disk in a RAID 0, 1, 5 or 6 volume set. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB. A larger stripe size provides better read performance, especially when the computer performs mostly sequential reads. However, if the computer performs random read requests more often, choose a smaller stripe size.
Step 5: When you are finished defining the volume set, press the Enter key to confirm the Quick Volume And Raid Set Setup function.
Step 6: Press the Enter key to select Foreground (Fast Completion) initialization, or select Background (Instant Available) initialization. In Background Initialization, the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes; the operating system can instantly access the newly created arrays without requiring a reboot or waiting for the initialization to complete. In Fast Initialization, the initialization must be completed before the volume set is ready for system access.
Step 7: Initialize the volume set you have just configured.
Step 8: If you need to add an additional volume set, use the main menu Create Volume Set function.
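The minimum-drive requirements listed in step 2 can be checked programmatically; the sketch below (names illustrative) returns which levels a given drive count allows:

```python
# Minimum physical-drive counts from step 2 above ("+ Spare" adds one drive).
MIN_DRIVES = {
    "RAID 0": 1, "RAID 1": 2, "RAID 1+Spare": 3, "RAID 1E": 4,
    "RAID 3": 3, "RAID 5": 3, "RAID 3+Spare": 4, "RAID 5+Spare": 4,
    "RAID 6": 4, "RAID 6+Spare": 5,
}

def available_levels(drive_count):
    """Return the RAID levels that can be implemented with this many drives."""
    return sorted(level for level, need in MIN_DRIVES.items()
                  if drive_count >= need)

# With three drives, RAID 1E and RAID 6 are not yet possible.
print(available_levels(3))
```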
3.6 Using the RAID Set/Volume Set Function Method
In "Raid Set Function", you can use the "Create Raid Set" function to generate a new RAID set. In "Volume Set Function", you can use the "Create Volume Set" function to generate an associated volume set and configuration parameters.
If the current controller has unused physical devices connected, you can choose the "Create Hot Spare" option in the "Raid Set Function" to define a global hot spare. Select this method to configure new RAID sets and volume sets. The "Raid Set/Volume Set Function" configuration option allows you to associate volume sets with partial and full RAID sets.
Step 1: To set up a hot spare (optional), choose Raid Set Function from the main menu. Select Create Hot Spare and press the Enter key to define the hot spare.
Step 2: Choose Raid Set Function from the main menu. Select Create Raid Set and press the Enter key.
Step 3: The "Select a Drive For Raid Set" window is displayed, showing the SATA drives connected to the SATA RAID controller.
Step 4: Press the UP and DOWN arrow keys to select specific physical drives. Press the Enter key to associate the selected physical drive with the current RAID set. It is recommended that you use drives of the same capacity in a specific array. If you use drives with different capacities in an array, all drives in the RAID set will be set to the capacity of the smallest drive in the RAID set. The number of physical drives in a specific array determines which RAID levels can be implemented in the array.
RAID 0 requires 1 or more physical drives.
RAID 1 requires at least 2 physical drives.
RAID (1+0) requires at least 4 physical drives.
RAID 3 requires at least 3 physical drives.
RAID 5 requires at least 3 physical drives.
RAID 6 requires at least 4 physical drives.
Step 5: After adding the desired physical drives to the current RAID set, press Yes to confirm the "Create Raid Set" function.
Step 6: An "Edit The Raid Set Name" dialog box appears. Enter 1 to 15 alphanumeric characters to define a unique identifier for this new RAID set. The default RAID set name will always appear as Raid Set. #. Press Enter to finish the name editing.
Step 7: Press the Enter key when you are finished creating the current RAID set. To continue defining another RAID set, repeat step 3. To begin volume set configuration, go to step 8.
Step 8: Choose Volume Set Function from the main menu. Select Create Volume Set and press the Enter key.
Step 9: Choose a RAID set from the "Create Volume From Raid Set" window. Press the Enter key to confirm the selection.
Step 10: Choose Foreground (Fast Completion) or Background (Instant Availability) initialization. During Background Initialization, the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes; the operating system can instantly access the newly created arrays without requiring a reboot or waiting for the initialization to complete. In Fast Initialization, the initialization must be completed before the volume set is ready for system access; initialization completes more quickly, but volume access by the operating system is delayed.
Step 11: If space remains in the RAID set, the next volume set can be configured. Repeat steps 8 to 10 to configure another volume set.
Note:
A user can use this method to examine the existing configuration. The "modify volume set configuration" method provides the same functions as the "create volume set configuration" method. In the volume set function, you can use "modify volume set" to change all volume set parameters except for capacity (size).
3.7 Main Menu
The main menu shows all functions that are available for executing actions, which is accomplished by selecting the appropriate menu option.
Controller I/O Port:F3000000h, F2: Select Controller, F10: Reboot System
Raid Set Function
Volume Set Function
Physical Drives
Raid System Function
Ethernet Configuration
View System Events
Clear Event Buffer
Hardware Monitor
System information
Verify Password
Note:
The manufacturer's default password is set to 0000; this password can be modified by selecting Change Password in the Raid System Function section.
Quick Volume/Raid Setup: Create a default configuration based on the number of physical disks installed.
Raid Set Function: Create a customized RAID set.
Volume Set Function: Create a customized volume set.
Physical Drives: View individual disk information.
Raid System Function: Set up the RAID system configuration.
Ethernet Configuration: Ethernet LAN settings (ARC-1x30/1x60/1x70 only).
View System Events: Record all system events in the buffer.
Clear Event Buffer: Clear all information in the event buffer.
Hardware Monitor: Show the hardware system environment status.
System Information: View the controller system information.
This password option allows the user to set or clear the RAID controller's password protection feature. Once the password has been set, the user can only monitor and configure the RAID controller by providing the correct password. The password is used to protect the internal RAID controller from unauthorized entry. The controller will prompt for the password only when entering the Main menu from the initial screen. The SATA RAID controller will automatically return to the initial screen when it does not receive any command for twenty seconds.
3.7.1 Quick Volume/RAID Setup
“Quick Volume/RAID Setup” is the fastest way to prepare a RAID set and volume set. It requires only a few keystrokes to complete. Although disk drives of different capacity may be used in the RAID set, the capacity of the smallest disk drive is used as the capacity of all disk drives in the RAID set. The “Quick Volume/RAID Setup” option creates a RAID set with the following properties:
1. All of the physical drives are contained in one RAID set.
2. The RAID level, hot spare, capacity, and stripe size options are selected during the configuration process.
3. When a single volume set is created, it can consume all or a portion of the available disk capacity in this RAID set.
4. If you need to add an additional volume set, use the main menu “Create Volume Set” function.
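As a rough sketch of the smallest-disk rule described above (illustrative arithmetic only, not controller firmware; the disk sizes and per-level overhead table are assumptions):

```python
# Estimate usable capacity under the smallest-disk rule.
# The redundancy overhead per RAID level is the usual textbook figure,
# not something read back from the controller.
def usable_gb(disk_sizes_gb, raid_level):
    n, smallest = len(disk_sizes_gb), min(disk_sizes_gb)
    lost = {0: 0, 1: n // 2, 5: 1, 6: 2}[raid_level]  # members spent on redundancy
    return smallest * (n - lost)

disks = [80, 80, 120, 160]        # mixed sizes: every member counts as 80GB
print(usable_gb(disks, 5))        # 240 -- RAID 5 keeps n-1 members' worth
print(usable_gb(disks, 6))        # 160 -- RAID 6 keeps n-2
```

The larger drives' extra space above the smallest member is simply not used, which is why matching drive sizes is usually recommended.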
The total number of physical drives in a specific RAID set determines the RAID levels that can be implemented within the RAID set.
• No
Keeps the volume size within the 2TB limit.
• LBA 64
This option uses a 16-byte CDB instead of a 10-byte CDB, raising the maximum volume capacity to 512TB.
This option works on operating systems that support 16-byte CDBs, such as:
Windows 2003 with SP1
Linux kernel 2.6.x or later
• For Windows
This option changes the sector size from the default 512 bytes to 4K bytes, raising the maximum volume capacity to 16TB.
It works under the Windows platform only, and the volume CANNOT be converted to a Dynamic Disk, because the 4K sector size is not a standard format.
For more details please download the PDF file from ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip
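The limits above follow from LBA width multiplied by sector size; a quick back-of-envelope check (illustrative arithmetic, not taken from the manual):

```python
# A 10-byte CDB carries a 32-bit LBA: with 512-byte sectors that caps a
# volume near 2TB. A 16-byte CDB carries a 64-bit LBA, and switching to
# 4K sectors raises the 32-bit-LBA ceiling to about 16TB.
def max_capacity_tb(lba_bits, sector_bytes):
    return (2 ** lba_bits) * sector_bytes / 2 ** 40  # binary TB

print(max_capacity_tb(32, 512))   # 2.0  -- the classic 2TB wall
print(max_capacity_tb(32, 4096))  # 16.0 -- the Windows 4K-sector option
```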
A single volume set is created and consumes all or a portion of the disk capacity available in this RAID set. Define the capacity of the volume set in the Available Capacity popup. The default value for the volume set, which is 100% of the available capacity, is displayed in the selected capacity. To enter a value less than the available capacity, type the new value and press the Enter key to accept this value. If the volume set uses only part of the RAID set capacity, you can use the “Create Volume Set” option in the main menu to define additional volume sets.
• Stripe Size
This parameter sets the size of the stripe written to each disk in a RAID 0, 1, 5, or 6 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB.
A larger stripe size produces better read performance, especially if your computer does mostly sequential reads. However, if you are sure that your computer performs random reads more often, select a smaller stripe size.
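To illustrate why stripe size matters for sequential versus random access, here is a hypothetical RAID 0 address-mapping sketch (the controller's real on-disk layout is firmware-defined and may differ):

```python
# Map a logical sector number to (member disk, offset within stripe)
# for a simple rotating RAID 0 layout.
def locate(lba, stripe_kb, disks, sector_bytes=512):
    stripe_sectors = stripe_kb * 1024 // sector_bytes
    stripe_no, offset = divmod(lba, stripe_sectors)
    return stripe_no % disks, offset

# With a 64KB stripe on 4 disks, sequential I/O rotates across members,
# so large reads keep all four spindles busy:
print(locate(0, 64, 4))    # (0, 0)
print(locate(128, 64, 4))  # (1, 0) -- the next 64KB lands on disk 1
```

With a smaller stripe, a given transfer is split across more disks, which helps many small random reads complete in parallel.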
Select Yes in the “Create Vol/Raid Set” dialog box, and the RAID set and volume set will start to initialize.
3.7.2 Raid Set Function
Manual configuration gives complete control of the RAID set settings, but it takes longer to configure than the “Quick Volume/Raid Setup” configuration. Select “Raid Set Function” to manually configure the RAID set for the first time, or to delete existing RAID sets and reconfigure the RAID set.
3.7.2.1 Create Raid Set
To define a RAID set, follow the procedure below:
1. Select “Raid Set Function” from the main menu.
2. Select “Create Raid Set” from the “Raid Set Function” dialog box.
3. A “Select SATA Drive For Raid Set” window is displayed showing the SATA drives connected to the current controller. Press the UP and DOWN arrow keys to select specific physical drives. Press the Enter key to associate the selected physical drive with the current RAID set. Repeating this step, the user can add as many disk drives as are available to a single RAID set. When finished selecting SATA drives for the RAID set, press the Esc key. A “Create Raid Set” confirmation screen appears; select the Yes option to confirm it.
4. An “Edit The Raid Set Name” dialog box appears. Enter 1 to 15 alphanumeric characters to define a unique identifier for the RAID set. The default RAID set name will always appear as “Raid Set. #”.
3.7.2.2 Delete Raid Set
To completely erase and reconfigure a RAID set, you must first delete it and re-create the RAID set. To delete a RAID set, select the RAID set number that you want to delete in the “Select Raid Set to Delete” screen. The “Delete Raid Set” dialog box appears; select Yes to delete it. Warning: data on the RAID set will be lost if this option is used.
3.7.2.3 Expand Raid Set
Instead of deleting a RAID set and recreating it with additional disk drives, the “Expand Raid Set” function allows the user to add disk drives to a RAID set that has already been created.
To expand a RAID set:
Select the “Expand Raid Set” option. If there is an available disk, the “Select SATA Drives For Raid Set Expansion” screen appears.
Select the target RAID set, then select the target disk(s) to add by checking the appropriate check box. Select Yes to start expansion of the RAID set.
The new additional capacity can be utilized by one or more volume sets. Follow the instructions presented in the Volume Set Function to modify the volume sets; operating system-specific utilities may be required to expand operating system partitions.
Note:
1. Once the Expand Raid Set process has started, the user cannot stop it. The process must run to completion.
2. If a disk drive fails during RAID set expansion and a hot spare is available, an auto rebuild operation will occur after the RAID set expansion completes.
3.7.2.4 Activate Raid Set
When one of the disk drives is removed in the power-off state, the RAID set state will change to Incomplete. If a user wants to continue to work while the SATA RAID controller is powered on, the user can use the “Activate Raid Set” option to activate the RAID set. After the user selects this function, the RAID state will change to Degraded Mode.
3.7.2.5 Create Hot Spare
When you choose the “Create Hot Spare” option in the Raid Set Function, all unused physical devices connected to the current controller are listed.
Select the target disk by checking the appropriate check box: press the Enter key to select a disk drive, then select Yes in the “Create Hot Spare” dialog to designate it as a hot spare.
The “Create Hot Spare” option gives you the ability to define a global hot spare.
3.7.2.6 Delete Hot Spare
Select the target hot spare disk to delete by checking the appropriate check box.
Press the Enter key to select a disk drive, and select Yes in the “Delete Hot Spare” window to delete the hot spare.
3.7.2.7 Raid Set Information
To display RAID set information, move the cursor bar to the desired RAID set number, then press the Enter key. The “Raid Set Information” screen will be displayed.
You can only view information for the RAID set in this screen.
3.7.3 Volume Set Function
A volume set is seen by the host system as a single logical device; it is organized in a RAID level within the controller utilizing one or more physical disks. RAID level refers to the level of data performance and protection of a volume set. A volume set can consume all of the capacity, or a portion of the available disk capacity, of a RAID set. Multiple volume sets can exist on a RAID set. If multiple volume sets reside on a specified RAID set, all volume sets will reside on all physical disks in the RAID set. Thus each volume set on the RAID set will have its data spread evenly across all the disks in the RAID set, rather than one volume set using some of the available disks and another volume set using other disks.
3.7.3.1 Create Volume Set
1. Volume sets of different RAID levels may coexist on the same RAID set.
2. Up to 16 volume sets can be created per RAID set by the SATA RAID controller.
3. The maximum addressable size of a single volume set is not limited to 2 TB, as with other cards that support only 32-bit mode.
To create a volume set, follow these steps:
1. Select the “Volume Set Function” from the Main menu.
2. Choose “Create Volume Set” from the “Volume Set Functions” dialog box screen.
3. The “Create Volume From RAID Set” dialog box appears. This screen displays the existing arranged RAID sets. Select the RAID set number and press the Enter key. The “Volume Creation” dialog is displayed on the screen.
4. A window appears with a summary of the current volume set's settings. The “Volume Creation” option allows the user to select the volume name, capacity, RAID level, stripe size, disk info, cache mode, and tag queuing. The user can modify the default values in this screen; the modification procedures are in section 3.5.3.3.
• Raid Level
Set the RAID level for the volume set. Highlight “Raid Level” and press the Enter key. The available RAID levels for the current volume set (0, 0+1, 3, 5, 6) are displayed. Select a RAID level and press the Enter key to confirm.
• Capacity
The maximum available volume size is the default value for the first setting. Enter the appropriate volume size to fit your application. The capacity value can be increased or decreased with the UP and DOWN arrow keys. The capacity of each volume set must be less than or equal to the total capacity of the RAID set on which it resides.
If the volume capacity will exceed 2TB, the controller will show the “Greater Two TB Volume Support” sub-menu.
• Stripe Size
This parameter sets the size of the segment written to each disk in a RAID 0, 1, 5, or 6 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB.
• SCSI Channel
The SATA RAID controller simulates a SCSI RAID controller; the host bus represents the SCSI channel. Choose “SCSI Channel”. A “Select SCSI Channel” dialog box appears; select the channel number and press the Enter key to confirm it.
• SCSI ID
Each device attached to the SATA card, as well as the card itself, must be assigned a unique SCSI ID number. A SCSI channel can connect up to 15 devices. It is necessary to assign a SCSI ID to each device from the list of available SCSI IDs.
• SCSI LUN
• Tag Queuing
This option, when enabled, can enhance overall system performance under multi-tasking operating systems. The Command Tag (Drive Channel) function controls the SCSI command tag queuing support for each drive channel. This function should normally remain enabled. Disable this function only when using older drives that do not support command tag queuing.
3.7.3.2 Delete Volume Set
To delete a volume set from a RAID set, move the cursor bar to the “Volume Set Functions” menu and select the “Delete Volume Set” item, then press the Enter key. The “Volume Set Functions” menu will show all Raid Set # items. Move the cursor bar to a RAID set number, then press the Enter key to show all volume sets within that RAID set. Move the cursor to the volume set number that is to be deleted and press Enter to delete it.
3.7.3.3 Modify Volume Set
Use this option to modify the volume set configuration. To modify volume set values, move the cursor bar to the “Volume Set Functions” menu and select the “Modify Volume Set” item, then press the Enter key. The “Volume Set Functions” menu will show all RAID set items. Move the cursor bar to a RAID set number item, then press the Enter key to show all volume set items. Select the volume set from the list to be changed and press the Enter key to modify it.
As shown, volume information can be modified at this screen. Choose this option to display the properties of the selected volume set; all values can be modified except the capacity.
• Volume Growth
Use this option to expand a RAID set when a disk is added to the system. The additional capacity can be used to enlarge the volume set size or to create another volume set. The “Modify Volume Set” function supports volume set expansion. To expand the volume set capacity, move the cursor bar to the “Volume Set Volume Capacity” item and enter the capacity size. Select “Confirm The Operation” and select the “Submit” button to complete the action. The volume set starts to expand.
Notes on expanding an existing volume:
• Only the last volume can expand capacity.
• When expanding volume capacity, you cannot modify the stripe size or the RAID level simultaneously.
• You can expand volume capacity, but you cannot reduce it.
For greater-than-2TB expansion:
• If your operating system is installed on the volume, do not expand the volume capacity beyond 2TB; current operating systems cannot boot from a device greater than 2TB.
• Expansion over 2TB uses LBA64 mode. Please make sure your OS supports LBA64 before expanding.
• Volume Set Migration
Migration occurs when a volume set is migrating from one RAID level to another, when its stripe size changes, or when a disk is added to the RAID set. The migration status is displayed in the volume status area of the “Volume Set Information” screen while any of these operations is in progress.
3.7.3.4 Check Volume Set
Use this option to verify the correctness of the redundant data in a volume set. For example, in a system with a dedicated parity disk drive, a volume set check entails computing the parity of the data disk drives and comparing the results to the contents of the dedicated parity disk drive. To check a volume set, move the cursor bar to the “Volume Set Functions” menu and select the “Check Volume Set” item, then press the Enter key. The “Volume Set Functions” menu will show all RAID set number items. Move the cursor bar to a RAID set number item and then press the Enter key to show all volume set items. Select the volume set to be checked from the list and press Enter to select it. After completing the selection, the confirmation screen appears; press Yes to start the check.
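The parity comparison described above can be pictured as a byte-wise XOR over the data drives (a toy sketch with made-up stripes; the real check runs inside the controller firmware):

```python
from functools import reduce

# XOR all data stripes together and compare with the stored parity stripe.
def parity_ok(data_stripes, parity_stripe):
    computed = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_stripes)
    return computed == parity_stripe

data = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))
print(parity_ok(data, parity))            # True  -- redundant data is consistent
print(parity_ok(data, bytes([0, 0, 0])))  # False -- a mismatch would be flagged
```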
3.7.3.5 Display Volume Info
To display volume set information, move the cursor bar to the desired volume set number and then press the Enter key. The “Volume Set Information” screen will be shown. You can only view the information of this volume set in this screen, not modify it.
3.7.4 Physical Drives
3.7.4.1 View Drive Information
When you choose this option, the physical disks connected to the SATA RAID controller are listed. Move the cursor to the desired drive and press Enter to view drive information.
3.7.4.2 Create Pass-Through Disk
A pass-through disk is not controlled by the SATA RAID controller firmware and thus cannot be a part of a volume set. The disk is available directly to the operating system as an individual disk. It is typically used on a system where the operating system is on a disk not controlled by the SATA RAID controller firmware. The SCSI channel, SCSI ID, SCSI LUN, cache mode, and tag queuing must be specified to create a pass-through disk.
3.7.4.3 Modify a Pass-Through Disk
Use this option to modify pass-through disk attributes. To select and modify a pass-through disk from the pool of pass-through disks, move the cursor bar to the “Physical Drive Function” menu, select the “Modify Pass-Through Drive” option, and then press the Enter key. The “Physical Drive Function” menu will show all pass-through drive number options. Move the cursor bar to the desired item and then press the Enter key to show all pass-through disk attributes. Select the parameter from the list to be changed and then press the Enter key to modify it.
3.7.4.4 Delete Pass-Through Disk
To delete a pass-through drive from the pass-through drive pool, move the cursor bar to the “Physical Drive Function” menu and select the “Delete Pass-Through Drive” item, then press the Enter key. The “Delete Pass-Through” confirmation screen will appear; select Yes to delete it.
3.7.4.5 Identify Selected Drive
To prevent removing the wrong drive, the selected disk's HDD LED indicator will light up to physically locate the selected disk when “Identify Selected Drive” is selected.
3.7.5 Raid System Function
To set the RAID system functions, move the cursor bar to the main menu and select the “Raid System Function” item, then press the Enter key. The “Raid System Function” menu will show multiple items. Move the cursor bar to an item, then press the Enter key to select the desired function.
3.7.5.1 Mute The Alert Beeper
The “Mute The Alert Beeper” function item is used to control the SATA RAID controller's beeper. Select Yes and press the Enter key in the dialog box to turn the beeper off temporarily. The beeper will still activate on the next event.
3.7.5.2 Alert Beeper Setting
The “Alert Beeper Setting” item is used to disable or enable the SATA RAID controller's alarm tone generator. Select “Disabled” and press the Enter key in the dialog box to turn the beeper off.
3.7.5.3 Change Password
The manufacturer's default password is set to 0000. The password option allows the user to set or clear the password protection feature. Once the password has been set, the user can monitor and configure the controller only by providing the correct password. This feature is used to protect the internal RAID system from unauthorized access. The controller will check the password only when entering the Main menu from the initial screen. The system will automatically go back to the initial screen if it does not receive any command in 20 seconds.
To set or change the password, move the cursor to the “Raid System Function” screen and select the “Change Password” item. The “Enter New Password” screen will appear.
To disable the password, press Enter alone in both the “Enter New Password” and “Re-Enter New Password” columns. The existing password will be cleared and no password checking will occur when entering the main menu.
3.7.5.4 JBOD/RAID Function
JBOD is an acronym for “Just a Bunch Of Disks”. It represents a volume set that is created by the concatenation of partitions on the disks. The operating system can see all disks when the JBOD option is selected. It is necessary to delete any RAID set(s) on any disk(s) when switching from a RAID to a JBOD configuration.
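The capacity difference between the two modes can be sketched as follows (made-up disk sizes; a JBOD concatenation simply sums its members, while a RAID set builds on its smallest member):

```python
disks_gb = [80, 120, 250]                     # hypothetical mixed drives
jbod_gb = sum(disks_gb)                       # concatenation: 450GB, no redundancy
raid_base_gb = min(disks_gb) * len(disks_gb)  # RAID treats every member as 80GB
print(jbod_gb, raid_base_gb)                  # 450 240
```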
3.7.5.5 Background Task Priority
The “Background Task Priority” is a relative indication of how much time the controller devotes to a rebuild operation. The SATA RAID controller allows the user to choose the rebuild priority (UltraLow 5%, Low 20%, Medium 50%, High 80%) to balance volume set access and rebuild tasks appropriately.
3.7.5.6 Maximum SATA Mode
The SATA RAID controller can support up to SATA II, which runs up to 300MB/s, twice as fast as SATA150. NCQ is a command protocol in Serial ATA that can only be implemented on native Serial ATA hard drives. It allows multiple commands to be outstanding within a drive at the same time. Drives that support NCQ have an internal queue where outstanding commands can be dynamically rescheduled or re-ordered, along with the necessary tracking mechanisms for outstanding and completed portions of the workload. The SATA RAID controller allows the user to choose the SATA mode: SATA150, SATA150+NCQ, SATA300, or SATA300+NCQ.
3.7.5.7 HDD Read Ahead Cache
Allow Read Ahead (default: Enabled): when enabled, the drive's read-ahead cache algorithm is used, providing maximum performance under most circumstances. The available settings are Enabled, Disable Maxtor, and Disabled.
3.7.5.8 Stagger Power On
In a PC system with only one or two drives, the power supply can provide enough power to spin up both drives simultaneously. But in systems with more than two drives, the startup current from spinning up all the drives at once can overload the power supply, causing damage to the power supply, disk drives, and other system components. This damage can be avoided by allowing the host to stagger the spin-up of the drives. Newer SATA drives support staggered spin-up capabilities to boost reliability. Staggered spin-up is a very useful feature for managing multiple disk drives in a storage subsystem. It gives the host the ability to spin up the disk drives sequentially or in groups, allowing the drives to come ready at the optimum time without straining the system power supply. Staggering drive spin-up in a multiple-drive environment also avoids the extra cost of a power supply designed to meet short-term startup power demand as well as steady-state conditions.
Areca supported a fixed-value staggered power-up function in its previous firmware versions. From firmware version 1.39 and later, the SATA RAID controller includes an option for the customer to select the stagger power-up interval. The value can be selected from 0.4 to 6.0 seconds per step, with one drive powered up per step.
[BIOS screen: Raid System Function menu, "Stagger Power On" value selection: 0.4, 0.7, 1.0, 1.5, ..., 6.0]
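The schedule this setting produces is simple to state: each successive drive is powered up one configured step after the previous one. A minimal sketch of that arithmetic (illustrative only, not the controller's implementation):

```python
# Hypothetical sketch of the staggered spin-up schedule described above:
# one drive powered up per step, with a configurable delay between steps.

def spin_up_schedule(num_drives, step_seconds):
    """Return the power-on offset (in seconds) for each drive slot."""
    return [slot * step_seconds for slot in range(num_drives)]

print(spin_up_schedule(4, 1.5))  # -> [0.0, 1.5, 3.0, 4.5]
```

With a 1.5-second step, four drives spin up at 0, 1.5, 3, and 4.5 seconds after power-on, so the supply never sees more than one drive's startup surge at a time.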
3.7.5.9 Empty HDD Slot LED

From firmware version 1.39 (date: 04/01/2006) and later, the firmware includes the "Empty HDD Slot LED" option to set the failed-drive LED "ON" or "OFF". When each slot has a power LED to identify an installed HDD, the user can set this option to "OFF". When this option is set to "ON", the failed-drive LED flashes red if no HDD is installed in the slot.
[BIOS screen: Raid System Function menu, "Empty HDD Slot LED" set to ON or OFF]
3.7.5.10 HDD SMART Status Polling
An external RAID enclosure has a hardware monitor in its dedicated backplane that can report HDD temperature status to the controller. However, PCI cards do not use backplanes when the drives are internal to the main server chassis, so this type of installation cannot report the HDD temperature to the controller. For this reason, HDD SMART Status Polling was added in firmware version 1.36 (date: 2005-05-19) and later to enable scanning of the HDD temperature. It is necessary to enable the "HDD SMART Status Polling" function before SMART information is accessible. This function is disabled by default. The following screen shows how to change the BIOS setting to enable the polling function.
[BIOS screen: Raid System Function menu, "HDD SMART Status Polling" set to Enabled or Disabled]
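The polling idea itself is straightforward: periodically read each drive's SMART temperature and flag anything over a threshold. A conceptual sketch follows; `read_smart_temperature` is a hypothetical stand-in for the controller's SMART query (on most drives, temperature is SMART attribute 194), and the slot values are invented for the example.

```python
# Conceptual sketch only, not the controller's firmware.
# read_smart_temperature is a hypothetical placeholder for whatever
# mechanism actually reads the drive's SMART temperature attribute.

def read_smart_temperature(slot):
    # Placeholder values in degrees Celsius; a real implementation would
    # query SMART attribute 194 from the drive in this slot.
    return {0: 38, 1: 61}.get(slot, 30)

def check_temperatures(slots, threshold_c=55):
    """Return the slots whose drives meet or exceed the threshold."""
    return [s for s in slots if read_smart_temperature(s) >= threshold_c]

print(check_temperatures([0, 1, 2]))  # -> [1]
```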
3.7.5.11 Controller Fan Detection
Included in the product box is a field-replaceable passive heatsink, to be used only if there is enough airflow to adequately cool it. The "Controller Fan Detection" function is available in firmware version 1.36 (date: 2005-05-19) and later to prevent the buzzer warning. When using the passive heatsink, disable the "Controller Fan Detection" function through this BIOS setting. The following screen shows how to change the BIOS setting to disable the beeper function. (This function is not available in the web browser setting.)
[BIOS screen: Raid System Function menu, "Disk Write Cache Mode" set to Auto, Enabled, or Disabled]
3.7.5.13 Capacity Truncation
SATA RAID controllers use drive truncation so that drives from different vendors are more likely to be usable as spares for one another. Drive truncation slightly decreases the usable capacity of a drive that is used in redundant units. The controller provides three truncation modes in the system configuration: Multiples Of 10G, Multiples Of 1G, and No Truncation.

Multiples Of 10G: If you have 120 GB drives from different vendors, chances are that the capacity varies slightly. For example, one drive might be 123.5 GB and the other 120 GB. The Multiples Of 10G truncation mode uses the same capacity for both of these drives so that one could replace the other.

Multiples Of 1G: If you have 123 GB drives from different vendors, chances are that the capacity varies slightly. For example, one drive might be 123.5 GB and the other 123.4 GB. The Multiples Of 1G truncation mode uses the same capacity for both of these drives so that one could replace the other.

No Truncation: The capacity is not truncated.
[BIOS screen: Raid System Function menu, "Truncate Disk Capacity" set to To Multiples of 10G, To Multiples of 1G, or Disabled]
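The truncation rule described above is just a round-down to the chosen step size. A minimal sketch, using the manual's own 123.5 GB example:

```python
# Sketch of the capacity truncation arithmetic described above; not the
# controller's actual firmware, just the round-down rule it implements.

def truncate_capacity(capacity_gb, mode):
    """Round a drive's capacity down per the selected truncation mode."""
    step = {"Multiples Of 10G": 10, "Multiples Of 1G": 1}.get(mode)
    if step is None:  # "No Truncation"
        return capacity_gb
    return (capacity_gb // step) * step

print(truncate_capacity(123.5, "Multiples Of 10G"))  # -> 120.0
print(truncate_capacity(123.5, "Multiples Of 1G"))   # -> 123.0
```

A 123.5 GB drive and a 123.4 GB drive both truncate to 123 GB under Multiples Of 1G, which is exactly why one can serve as a spare for the other.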
3.7.6 Ethernet Configuration (12/16/24-port)

Use this feature to set the controller's Ethernet port configuration. It is not necessary to create reserved disk space on any hard disk for the Ethernet port and HTTP service to function; these functions are built into the controller firmware.
[BIOS screen: Main Menu with "Ethernet Configuration" selected]
3.7.6.1 DHCP Function

DHCP (Dynamic Host Configuration Protocol) allows network administrators to centrally manage and automate the assignment of IP (Internet Protocol) addresses on a computer network. When using the TCP/IP protocol, a computer must have a unique IP address in order to communicate with other computer systems. Without DHCP, the IP address must be entered manually at each computer system. DHCP lets a network administrator supervise and distribute IP addresses from a central point. The purpose of DHCP is to provide the automatic (dynamic) allocation of IP client configurations for a specific time period (called a lease period) and to minimize the work necessary to administer a large IP network.

[BIOS screen: Ethernet Configuration menu, "DHCP Function" set to Enabled or Disabled; Local IP Address 192.168.001.100, Ethernet Address 00.04.D9.7F.FF.FF]

To manually configure the IP address of the controller, move the cursor bar to the Main menu "Ethernet Configuration" item and press the Enter key. The "Ethernet Configuration" menu appears on the screen. Move the cursor bar to the "DHCP Function" item, then press the Enter key to show the DHCP setting. Select the "Disabled" or "Enabled" option to enable or disable the DHCP function. If DHCP is disabled, it will be necessary to manually enter a static IP address that does not conflict with other devices on the network.
3.7.6.2 Local IP Address

If you intend to set up your client computers manually (no DHCP), make sure that the assigned IP address is in the same range as the default router address, and that it is unique to your private network. However, it is highly recommended to use DHCP if that option is available on your network. An IP address allocation scheme will reduce the time it takes to set up client computers and eliminate the possibility of administrative errors and duplicate addresses. To manually configure the IP address of the controller, move the cursor bar to the Main menu "Ethernet Configuration" item and press the Enter key. The "Ethernet Configuration" menu appears on the screen. Move the cursor bar to the "Local IP Address" item, then press the Enter key to show the default address setting in the SATA RAID controller. You can then reassign the static IP address of the controller.
[BIOS screen: Ethernet Configuration menu, "Edit The Local IP Address" field showing 192.168.001.100]
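The two checks recommended above for a manually assigned address (same range as the router, unique on the network) are easy to express with Python's standard `ipaddress` module. A small sketch, using the manual's example address written without leading zeros and a hypothetical router at 192.168.1.1:

```python
# Sketch of the manual static-IP advice: the candidate address must sit in
# the router's subnet and must not already be in use. The subnet and the
# in-use set are illustrative assumptions.
import ipaddress

def static_ip_ok(candidate, network, in_use):
    """True if candidate is inside the subnet and not already taken."""
    ip = ipaddress.ip_address(candidate)
    net = ipaddress.ip_network(network)
    return ip in net and candidate not in in_use

print(static_ip_ok("192.168.1.100", "192.168.1.0/24", {"192.168.1.1"}))  # -> True
print(static_ip_ok("10.0.0.5", "192.168.1.0/24", set()))                 # -> False
```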
3.7.6.3 Ethernet Address
MAC stands for "Media Access Control," and a MAC address is unique to every Ethernet device. On an Ethernet LAN, it is the same as your Ethernet address. When you are connected to a local network through the SATA RAID controller's Ethernet port, a correspondence table relates your IP address to the SATA RAID controller's physical (MAC) address on the LAN.
[BIOS screen: Ethernet Configuration menu showing DHCP Function, Local IP Address 192.168.001.100, and Ethernet Address 00.04.D9.7F.FF.FF]
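The IP-to-MAC correspondence table mentioned above (in practice, the hosts' ARP caches) can be modeled as a simple lookup. The MAC below is the manual's example Ethernet address; the IP pairing is an assumption for illustration.

```python
# Minimal model of the IP-to-MAC correspondence table (ARP cache) that the
# text describes. Entries here are illustrative, not real network state.
arp_table = {"192.168.1.100": "00.04.D9.7F.FF.FF"}

def resolve_mac(ip):
    """Look up the physical (MAC) address for an IP on the local LAN."""
    return arp_table.get(ip)  # None means an ARP request would be needed

print(resolve_mac("192.168.1.100"))  # -> 00.04.D9.7F.FF.FF
```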
3.7.7 View System Events
To view the SATA RAID controller's event information, move the cursor bar to the main menu and select the "View System Events" item, then press the Enter key. The SATA RAID controller's events screen appears.
[BIOS screen: event list with columns Time, Device, Event Type, Elapse Time, Errors — e.g. 2004-1-1 12:00:00, H/W Monitor, Raid Powered On]
Choose this option to view the system event information: Time, Device, Event Type, Elapsed Time, and Errors. The RAID system does not have a real-time clock; the time information is the relative time from when the SATA RAID controller was powered on.
3.7.8 Clear Event Buffer

Use this feature to clear the entire event buffer.
3.7.9 Hardware Monitor
To view the RAID controller's hardware monitor information, move the cursor bar to the main menu and select the "Hardware Monitor" item, then press the Enter key. The Hardware Monitor information screen appears and provides the temperature and fan speed (I/O processor fan) of the SATA RAID controller.
[BIOS screen: Hardware Monitor information]
3.7.10 System Information

Choose this option to display the main processor, the CPU instruction cache and data cache size, the firmware version, the serial number, the controller model name, and the cache memory size. To check the system information, move the cursor bar to the "System Information" item