Areca ARC-1110, ARC-1120, ARC-1130, ARC-1231ML, ARC-1261ML User Manual

SATA RAID Cards
ARC-1110/1120/1130/1160/1170
( 4/8/12/16/24-port PCI-X SATA RAID Controllers )
ARC-1110ML/1120ML/1130ML/1160ML
( 4/8-port Innband connector and 12/16-port Multi-lane
connector PCI-X SATA RAID Controllers )
ARC-1210/1220/1230/1260/1280
( 4/8/12/16/24-port PCI-Express SATA RAID Controllers )
ARC-1231ML/1261ML/1280ML
( 12/16/24-port PCI-Express SATA RAID Controllers )
USER Manual
Version: 3.3 Issue Date: November, 2006
Microsoft WHQL Windows Hardware Compatibility Test
ARECA is committed to submitting products to the Microsoft Windows Hardware Quality Labs (WHQL), which is required for participation in the Windows Logo Program. Successful passage of the WHQL tests results in both the “Designed for Windows” logo for qualifying ARECA PCI-X and PCI-Express SATA RAID controllers and a listing on the Microsoft Hard­ware Compatibility List (HCL).
Copyright and Trademarks
The information of the products in this manual is subject to change without prior notice and does not represent a commitment on the part of the vendor, who assumes no liability or responsibility for any errors that may appear in this manual. All brands and trademarks are the properties of their respective owners. This manual contains materials protected under International Copyright Conventions. All rights reserved. No part of this manual may be reproduced in any form or by any means, electronic or mechanical, including photocopying, without the written permission of the manufacturer and the author. All inquiries should be addressed to ARECA Technology Corp.
FCC STATEMENT
This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against interfer­ence in a residential installation. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in ac­cordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation.
Contents
1. Introduction .............................................................. 10
1.1 Overview ....................................................................... 10
1.2 Features ........................................................................12
1.3 RAID Concept ................................................................. 15
1.3.1 RAID Set ................................................................... 15
1.3.2 Volume Set ................................................................ 15
1.3.3 Ease of Use Features ................................................. 16
1.3.3.1 Foreground Availability/Background Initialization ....... 16
1.3.3.2 Array Roaming ..................................................... 16
1.3.3.3 Online Capacity Expansion ..................................... 17
1.3.3.4 Online RAID Level and Stripe Size Migration ............. 19
1.3.3.5 Online Volume Expansion ........................................ 19
1.4 High availability .............................................................. 20
1.4.1 Global Hot Spares ...................................................... 20
1.4.2 Hot-Swap Disk Drive Support ....................................... 21
1.4.3 Auto Declare Hot-Spare .............................................. 21
1.4.4 Auto Rebuilding ......................................................... 21
1.4.5 Adjustable Rebuild Priority ........................................... 22
1.5.1 Hard Drive Failure Prediction ........................................ 23
1.5.2 Auto Reassign Sector .................................................. 23
1.5.3 Consistency Check ...................................................... 24
1.6 Data Protection ............................................................... 24
1.6.1 BATTERY BACKUP ...................................................... 24
1.6.2 RECOVERY ROM ......................................................... 25
1.7 Understanding RAID ........................................................ 25
1.7.1 RAID 0 ...................................................................... 25
1.7.2 RAID 1 ...................................................................... 26
1.7.3 RAID 1E .................................................................... 27
1.7.4 RAID 3 ...................................................................... 27
1.7.5 RAID 5 ...................................................................... 28
1.7.6 RAID 6 ...................................................................... 29
2. Hardware Installation ............................................... 32
2.1 Before You Begin Installation ........................................... 32
2.2 Board Layout .................................................................. 33
2.3 Installation .....................................................................39
3. McBIOS RAID Manager .............................................. 56
3.1 Starting the McBIOS RAID Manager ................................... 56
3.2 McBIOS Conguration manager ......................................... 57
3.3 Conguring Raid Sets and Volume Sets .............................. 58
3.4 Designating Drives as Hot Spares ...................................... 58
3.5 Using Quick Volume /Raid Setup Conguration .................... 59
3.6 Using RAID Set/Volume Set Function Method ...................... 60
3.7 Main Menu .................................................................... 62
3.7.1 Quick Volume/RAID Setup ........................................... 63
3.7.2 Raid Set Function ....................................................... 66
3.7.2.1 Create Raid Set .................................................... 67
3.7.2.2 Delete Raid Set ..................................................... 68
3.7.2.3 Expand Raid Set .................................................... 68
• Migrating ...................................................................... 69
3.7.2.4 Activate Incomplete Raid Set ................................... 70
3.7.2.5 Create Hot Spare ................................................... 71
3.7.2.6 Delete Hot Spare ................................................... 71
3.7.2.7 Raid Set Information .............................................. 72
3.7.3 Volume Set Function ................................................... 72
3.7.3.1 Create Volume Set ................................................. 73
• Volume Name ................................................................ 75
• Raid Level ..................................................................... 75
• Capacity .......................................................................76
• Strip Size ...................................................................... 77
• SCSI Channel ................................................................78
• SCSI ID ........................................................................ 78
• SCSI LUN ...................................................................... 79
• Cache Mode .................................................................. 79
3.7.3.2 Delete Volume Set ................................................. 80
• Tag Queuing .................................................................. 80
3.7.3.3 Modify Volume Set ................................................. 81
• Volume Growth .............................................................. 82
• Volume Set Migration ...................................................... 83
3.7.3.4 Check Volume Set .................................................. 83
3.7.3.5 Stop Volume Set Check .......................................... 83
3.7.3.6 Display Volume Set Info. ........................................ 84
3.7.4 Physical Drives ........................................................... 85
3.7.4.1 View Drive Information .......................................... 85
3.7.4.2 Create Pass-Through Disk ....................................... 86
3.7.4.3 Modify a Pass-Through Disk ..................................... 86
3.7.4.4 Delete Pass-Through Disk ....................................... 87
3.7.4.5 Identify Selected Drive ........................................... 87
3.7.5 Raid System Function .................................................88
3.7.5.1 Mute The Alert Beeper ........................................... 88
3.7.5.2 Alert Beeper Setting ............................................... 89
3.7.5.3 Change Password .................................................. 89
3.7.5.4 JBOD/RAID Function .............................................. 90
3.7.5.5 Background Task Priority ........................................ 91
3.7.5.6 Maximum SATA Mode ............................................. 91
3.7.5.7 HDD Read Ahead Cache ......................................... 92
3.7.5.8 Stagger Power On .................................................. 92
3.7.5.9 Empty HDD Slot LED .............................................93
3.7.5.10 HDD SMART Status Polling .................................... 94
3.7.5.11 Controller Fan Detection ....................................... 94
3.7.5.12 Disk Write Cache Mode ......................................... 95
3.7.5.13 Capacity Truncation .............................................. 95
3.7.6 Ethernet Conguration (12/16/24-port) ......................... 96
3.7.6.1 DHCP Function ......................................................97
3.7.6.2 Local IP address .................................................... 98
3.7.6.3 Ethernet Address ................................................... 99
3.7.7 View System Events ...................................................99
3.7.8 Clear Events Buffer ................................................... 100
3.7.9 Hardware Monitor ..................................................... 100
3.7.10 System Information ................................................ 100
4. Driver Installation ................................................... 102
4.1 Creating the Driver Diskettes .......................................... 102
4.2 Driver Installation for Windows ....................................... 103
4.2.1 New Storage Device Drivers in Windows Server 2003 .... 103
4.2.2 Install Windows 2000/XP/2003 on a SATA RAID Volume 104
4.2.2.1 Installation procedures ......................................... 104
4.2.2.2 Making Volume Sets Available to Windows System ... 105
4.2.3 Installing controller into an existing Windows 2000/XP/2003
Installation ...................................................................... 106
4.2.3.1 Making Volume Sets Available to Windows System ... 107
4.2.4 Uninstall controller from Windows 2000/XP/2003 .......... 108
4.3 Driver Installation for Linux ............................................ 109
4.4 Driver Installation for FreeBSD ........................................ 109
4.5 Driver Installation for Solaris 10 ...................................... 110
4.6 Driver Installation for Mac 10.x ....................................... 110
4.7 Driver Installation for UnixWare 7.1.4 .............................. 111
4.8 Driver Installation for NetWare 6.5 .................................. 111
5. ArcHttp Proxy Server Installation ........................... 112
5.1 For Windows................................................................. 113
5.2 For Linux ..................................................................... 114
5.3 For FreeBSD ................................................................. 115
5.4 For Solaris 10 x86 ......................................................... 116
5.5 For Mac OS 10.x ........................................................... 116
5.6 ArcHttp Conguration .................................................... 117
6. Web Browser-based Conguration ......................... 121
6.1 Start-up McRAID Storage Manager ................................. 121
Another method to start-up McRAID Storage Manager from
Windows Local Administration .......................................... 122
6.1.1 Through Ethernet port (Out-of-Band) ......................... 123
6.2 SATA RAID controller McRAID Storage Manager ................. 124
6.3 Main Menu .................................................................. 125
6.4 Quick Function .............................................................. 125
6.5 RaidSet Functions ......................................................... 126
6.5.1 Create Raid Set ....................................................... 126
6.5.2 Delete Raid Set ........................................................ 127
6.5.3 Expand Raid Set ....................................................... 128
6.5.4 Activate Incomplete Raid Set ..................................... 128
6.5.5 Create Hot Spare ..................................................... 129
6.5.6 Delete Hot Spare ...................................................... 129
6.5.7 Rescue Raid Set ....................................................... 129
6.6 Volume Set Functions .................................................... 130
6.6.1 Create Volume Set ................................................... 130
• Volume Name .............................................................. 131
• Raid Level .................................................................. 131
• Capacity ..................................................................... 131
• Greater Two TB Volume Support ..................................... 131
• Initialization Mode ........................................................ 132
• Strip Size .................................................................... 132
• Cache Mode ................................................................ 132
• SCSI Channel/SCSI ID/SCSI Lun .................................... 132
• Tag Queuing ................................................ 132
6.6.2 Delete Volume Set .................................................... 133
6.6.3 Modify Volume Set .................................................... 133
6.6.3.1 Volume Set Migration ........................................... 134
6.6.4 Check Volume Set .................................................... 135
6.6.5 Stop VolumeSet Check .............................................. 135
6.7 Physical Drive .............................................................. 135
6.7.1 Create Pass-Through Disk .......................................... 136
6.7.2 Modify Pass-Through Disk .......................................... 136
6.7.3 Delete Pass-Through Disk .......................................... 137
6.8 System Controls ........................................................... 138
6.8.1 System Cong ......................................................... 138
• System Beeper Setting ................................................. 138
• Background Task Priority ............................................... 138
• JBOD/RAID Conguration .............................................. 138
• Maximun SATA Supported ............................................. 138
• HDD Read Ahead Cache ................................................ 138
• Stagger Power on ........................................................ 139
• Empty HDD Slot LED .................................................... 140
• HDD SMART Status Polling............................................. 140
• Disk Write Cache Mode ................................................. 141
• Disk Capacity Truncation Mode ....................................... 141
6.8.2 Ethernet Conguration (12/16/24-port) ....................... 142
6.8.3 Alert by Mail Conguration (12/16/24-port) ................ 143
6.8.4 SNMP Conguration (12/16/24-port) ........................... 144
• SNMP Trap Congurations ............................................. 145
• SNMP System Congurations ......................................... 145
• SNMP Trap Notication Congurations ............................. 145
6.8.5 NTP Conguration (12/16/24-port) ............................. 145
• NTP Sever Address ....................................................... 145
• Time Zone ................................................................... 146
• Automatic Daylight Saving............................................. 146
6.8.6 View Events/Mute Beeper .......................................... 146
6.8.7 Generate Test Event ................................................. 146
6.8.8 Clear Events Buffer ................................................... 147
6.8.9 Modify Password ...................................................... 147
6.8.10 Update Firmware ................................................... 148
6.9 Information .................................................................. 148
6.9.1 RaidSet Hierarchy ..................................................... 148
6.9.2 System Information .................................................. 148
Appendix A ................................................................. 151
Upgrading Flash ROM Update Process .................................... 151
Upgrading Firmware Through McRAID Storage Manager ........... 151
Upgrading Entire Flash ROM Image Through Arcflash DOS Utility 153
Appendix B .................................................................. 156
Battery Backup Module (ARC-6120-BAT) ................................ 156
BBM Components ........................................................... 156
BBM Specifications .......................................................... 156
Installation .................................................................... 157
Battery Backup Capacity .................................................. 157
Operation ...................................................................... 158
Changing the Battery Backup Module ................................ 158
Status of BBM ................................................................ 158
Appendix C .................................................................. 159
SNMP Operation & Definition ................................................ 159
Appendix D .................................................................. 166
Event Notication Congurations ........................................ 166
A. Device Event .............................................................. 166
B. Volume Event ............................................................. 167
C. RAID Set Event .......................................................... 167
D. Hardware Event .......................................................... 168
Appendix E .................................................................. 169
General Troubleshooting Tips ............................................... 169
Appendix F .................................................................. 173
Technical Support ............................................................... 173
Glossary ...................................................................... 174
2TB .............................................................................. 174
Array ............................................................................ 174
ATA .............................................................................. 174
Auto Reassign Sector ..................................................... 174
Battery Backup Module .................................................... 175
BIOS ............................................................................ 175
Cache ........................................................................... 175
Consistency Check .......................................................... 175
Driver ........................................................................... 175
Hot Spare ...................................................................... 176
Hardware RAID versus Software RAID .............................. 176
Hot Swap ...................................................................... 176
NVRAM .......................................................................... 176
Parity ............................................................................ 176
PCI Express .................................................................. 176
PCI-X ........................................................................... 177
RAID ............................................................................ 177
Rebuild ......................................................................... 177
SATA (Serial ATA) ........................................................... 177
SMART .......................................................................... 178
SNMP ............................................................................ 178
Volume Set .................................................................... 178
Write-back ..................................................................... 178
Write-through ................................................................ 178
XOR-Engine ................................................................... 179
INTRODUCTION
1. Introduction
This section presents a brief overview of the SATA RAID Series controllers: ARC-1110/1110ML/1120/1120ML/1130/1130ML/1160/1160ML/1170 (4/8/12/16/24-port PCI-X SATA RAID controllers) and ARC-1210/1220/1230/1231ML/1260/1261ML/1280/1280ML (4/8/12/16/24-port PCI-Express SATA RAID controllers).
1.1 Overview
The ARC-11xx and ARC-12xx series of high-performance Serial ATA RAID controllers support a maximum of 4, 8, 12, 16, or 24 SATA II peripheral devices (depending on model) on a single controller. The ARC-11xx series is designed for the PCI-X bus and the ARC-12xx series for the PCI-Express bus. When properly configured, these SATA controllers provide non-stop service with a high degree of fault tolerance through the use of RAID technology, and they also provide advanced array management features.
The 4- and 8-port SATA RAID controllers are low-profile PCI cards, ideal for 1U and 2U rack-mount systems. These controllers utilize the same RAID kernel that has been field-proven in Areca's existing external RAID controllers, allowing Areca to quickly bring stable and reliable RAID controllers to the market.
Unparalleled Performance
The SATA RAID controllers provide reliable data protection for desktops, workstations, and servers. These cards set the standard with enhancements that include a high-performance Intel I/O processor, a new memory architecture, and a high-performance PCI bus interconnection. The 8/12/16/24-port controllers with the built-in RAID 6 engine offer extreme-availability RAID 6 functionality. This engine can concurrently compute two parity blocks with performance very similar to RAID 5. The controllers support 256MB of ECC SDRAM memory by default. The 12/16/24-port PCI-X controllers provide one DDR333 SODIMM socket that allows upgrading to 1GB of memory, while the ARC-12xxML and ARC-1280 controllers provide one DDR2-533 DIMM socket that allows upgrading to 2GB of memory. The controllers use Marvell 4/8-channel SATA PCI-X controller chips, which can simultaneously communicate with the I/O processor and read or write data on multiple drives.
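The dual-parity computation described above can be sketched in a few lines, assuming the common P/Q scheme (XOR parity plus Reed-Solomon parity over GF(2^8)). This illustrates the technique in general, not Areca's firmware implementation:

```python
# Illustrative sketch: how a RAID 6 engine can compute two independent
# parity blocks, P (simple XOR) and Q (Reed-Solomon over GF(2^8)),
# for one stripe of data blocks.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the polynomial 0x11d."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return p

def raid6_parity(data_blocks):
    """Return (P, Q) parity blocks for equal-length data blocks."""
    n = len(data_blocks[0])
    p = bytearray(n)
    q = bytearray(n)
    for i, block in enumerate(data_blocks):
        g = 1
        for _ in range(i):              # g = 2**i in GF(2^8)
            g = gf_mul(g, 2)
        for j, byte in enumerate(block):
            p[j] ^= byte                # P = D0 ^ D1 ^ ... ^ Dn-1
            q[j] ^= gf_mul(g, byte)     # Q = g^0*D0 ^ g^1*D1 ^ ...
    return bytes(p), bytes(q)

# A single failed data block can be rebuilt from P alone, as in RAID 5;
# the independent Q block allows recovery from a second concurrent failure.
data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
p, q = raid6_parity(data)
rebuilt = bytes(a ^ b ^ c for a, b, c in zip(p, data[1], data[2]))
assert rebuilt == data[0]
```

Because P and Q are computed from the same pass over the data, a hardware engine can generate both in parallel, which is why RAID 6 performance can stay close to RAID 5.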
Unsurpassed Data Availability
As storage capacity requirements continue to increase rapidly, users require greater levels of disk drive fault tolerance that can be implemented without doubling the investment in disk drives. RAID 1 (mirroring) provides high fault tolerance, but half of the drive capacity of the array is lost to mirroring, making it too costly for most users to implement on large volume sets because it doubles the number of drives required. Users want the protection of RAID 1 or better with an implementation cost comparable to RAID 5. RAID 6 offers fault tolerance greater than RAID 1 or RAID 5 while consuming the capacity of only two disk drives for distributed parity data. The 8/12/16/24-port RAID controllers provide RAID 6 functionality to meet these demanding requirements.
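The capacity trade-off described above can be sketched numerically. This is a simplified model; real usable capacity also depends on stripe layout and capacity-truncation settings:

```python
# Usable capacity of an array of n drives (each `drive_size_gb` GB)
# for the RAID levels compared in the text.
def usable_capacity(n_drives, drive_size_gb, level):
    if level == "RAID 1":       # mirroring: half the drives hold copies
        return (n_drives // 2) * drive_size_gb
    if level == "RAID 5":       # one drive's worth of parity
        return (n_drives - 1) * drive_size_gb
    if level == "RAID 6":       # two drives' worth of distributed parity
        return (n_drives - 2) * drive_size_gb
    raise ValueError(level)

# With eight 500 GB drives, RAID 6 keeps 3000 GB usable and survives two
# concurrent drive failures; RAID 1 keeps only 2000 GB for its protection.
for level in ("RAID 1", "RAID 5", "RAID 6"):
    print(level, usable_capacity(8, 500, level))
```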
The SATA RAID controllers also provide RAID levels 0, 1, 1E, 3, 5 and JBOD configurations. Their high data availability and protection is derived from the following capabilities: Online RAID Capacity Expansion, Array Roaming, Online RAID Level/Stripe Size Migration, Dynamic Volume Set Expansion, Global Online Spare, Automatic Drive Failure Detection, Automatic Failed Drive Rebuilding, Disk Hot-Swap, Online Background Rebuilding and Instant Availability/Background Initialization. During the controller firmware flash upgrade process, it is possible that an error could corrupt the controller firmware, leaving the device non-functional. However, with the Redundant Flash Image feature, the controller reverts to the last known good version of the firmware and continues operating. This reduces the risk of system failure due to firmware crashes.
Easy RAID Management
The SATA RAID controller utilizes built-in firmware with an embedded terminal emulation that can be accessed via a hot key at the BIOS boot-up screen. This pre-boot manager utility can be used to simplify the setup and management of the RAID controller. The controller firmware also contains a browser-based program that can be accessed through the ArcHttp proxy server function in Windows, Linux, FreeBSD and other environments. This web browser-based RAID management utility allows both local and remote creation and modification of RAID sets and volume sets, as well as monitoring of RAID status, from standard web browsers.
1.2 Features
Adapter Architecture
• Intel IOP 331 I/O processor (ARC-11xx series)
• Intel IOP 332/IOP 333 I/O processor (ARC-12xx series)
• Intel IOP341 I/O processor (ARC-12x1ML/ARC-1280ML/1280)
• 64-bit/133MHz PCI-X Bus compatible
• PCI Express X8 compatible
• 256MB on-board DDR333 SDRAM with ECC protection (4/8-port)
• One SODIMM Socket with default 256 MB of DDR333 SDRAM with ECC protection, upgrade to 1GB (12, 16 and 24-port cards only)
• One DIMM socket with default 256 MB of DDR2-533 SDRAM with ECC protection, upgradable to 2GB (ARC-12xxML, ARC-1280)
• Supports ECC or non-ECC SDRAM modules using x8 or x16 devices
• Supports up to 4/8/12/16/24 SATA II drives
• Write-through or write-back cache support
• Multi-adapter support for large storage requirements
• BIOS boot support for greater fault tolerance
• BIOS PnP (plug and play) and BBS (BIOS boot specification) support
• Supports extreme performance Intel RAID 6 functionality
• NVRAM for RAID event & transaction log
• Battery backup module (BBM) ready (depends on the controller model)
RAID Features
• RAID level 0, 1, 1E, 3, 5, 6 (R6 engine inside) and JBOD
• Multiple RAID selection
• Array roaming
• Online RAID level/stripe size migration
• Online capacity expansion & RAID level migration simultaneously
• Online volume set growth
• Instant availability and background initialization
• Automatic drive insertion / removal detection and rebuilding
• Greater than 2TB per volume set for 64-bit LBA
• Redundant flash image for adapter availability
• Supports S.M.A.R.T., NCQ and OOB staggered spin-up capable drives
Monitors/Notification
• System status indication through LED/LCD connector, HDD activity/fault connector, and alarm buzzer
• SMTP support for email notification
• SNMP agent support for remote SNMP managers
• I2C Enclosure Management Ready (IOP331/332/333)
• I2C & SGPIO Enclosure Management Ready (IOP341)
RAID Management
• Field-upgradeable firmware in flash ROM
• Ethernet port support on 12/16/24-port
In-Band Manager
• Hot key boot-up McBIOS RAID manager via BIOS
• Supports the controller's API library, allowing customers to write their own applications
• Support Command Line Interface (CLI)
• Browser-based management utility via ArcHttp proxy server
• Single Admin Portal (SAP) monitor utility
• Disk Stress Test (DST) utility for production in Windows
Out-of-Band Manager
• Firmware-embedded browser-based RAID manager, SMTP manager, SNMP agent and Telnet function via Ethernet port (for 12/16/24-port Adapter)
• Supports the controller's API library for customers to write their own applications (for 12/16/24-port adapters)
• Push Button and LCD display panel (option)
Operating System
• Windows 2000/XP/Server 2003
• Red Hat Linux
• SuSE Linux
• FreeBSD
• Novell Netware 6.5
• Solaris 10 X86/X86_64
• SCO Unixware 7.1.4
• Mac OS 10.x (not bootable)
(For latest supported OS listing visit http://www.areca.com.tw)
Internal PCI-X RAID Card Comparison (ARC-11XX)

                 1110          1120          1130          1160          1170
RAID processor   IOP331
Host Bus Type    PCI-X 133MHz
RAID 6 support   N/A           YES           YES           YES           YES
Cache Memory     256MB         256MB         One SODIMM    One SODIMM    One SODIMM
Drive Support    4 x SATA II   8 x SATA II   12 x SATA II  16 x SATA II  24 x SATA II
Disk Connector   SATA          SATA          SATA          SATA          SATA
PCI-X RAID Card Comparison (ARC-11XXML)

                 1110ML        1120ML        1130ML        1160ML
RAID processor   IOP331
Host Bus Type    PCI-X 133MHz
RAID 6 support   N/A           YES           YES           YES
Cache Memory     256MB         256MB         One SODIMM    One SODIMM
Drive Support    4 x SATA II   8 x SATA II   12 x SATA II  16 x SATA II
Disk Connector   Infiniband    Infiniband    Multi-lane    Multi-lane
Internal PCI-Express RAID Card Comparison (ARC-12XX)

                 1210          1220          1230          1260
RAID processor   IOP332 / IOP333
Host Bus Type    PCI-Express x8
RAID 6 support   N/A           YES           YES           YES
Cache Memory     256MB         256MB         One SODIMM    One SODIMM
Drive Support    4 x SATA II   8 x SATA II   12 x SATA II  16 x SATA II
Disk Connector   SATA          SATA          SATA          SATA
Internal PCI-Express RAID Card Comparison (ARC-12XX)

                 1231ML           1261ML           1280ML           1280
RAID processor   IOP341
Host Bus Type    PCI-Express x8
RAID 6 support   YES              YES              YES              YES
Cache Memory     One DDR2 DIMM (default 256MB, upgradable to 2GB)
Drive Support    12 x SATA II     16 x SATA II     24 x SATA II     24 x SATA II
Disk Connector   3 x Mini SAS 4i  4 x Mini SAS 4i  6 x Mini SAS 4i  24 x SATA
1.3 RAID Concept
1.3.1 RAID Set
A RAID set is a group of disks connected to a RAID controller. A RAID set contains one or more volume sets. The RAID set itself does not define the RAID level (0, 1, 1E, 3, 5, 6, etc.); the RAID level is defined within each volume set. Therefore, volume sets are contained within RAID sets and the RAID level is defined within the volume set. If physical disks of different capacities are grouped together in a RAID set, then the capacity of the smallest disk will become the effective capacity of all the disks in the RAID set.
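The smallest-disk rule above amounts to a one-line calculation (illustrative only; it computes raw RAID set capacity before any parity or mirroring overhead):

```python
# Effective raw capacity of a RAID set: every member disk contributes
# only as much space as the smallest disk in the set.
def raid_set_capacity(disk_sizes_gb):
    return min(disk_sizes_gb) * len(disk_sizes_gb)

# Mixing one 250 GB disk into a set of 500 GB disks limits every member
# to 250 GB: four disks contribute only 4 * 250 = 1000 GB.
print(raid_set_capacity([500, 500, 500, 250]))  # 1000
```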
1.3.2 Volume Set
Each volume set is seen by the host system as a single logical device (in other words, a single large virtual hard disk). A volume set uses a specific RAID level, which requires one or more physical disks (depending on the RAID level used). RAID level refers to the level of performance and data protection of a volume set. The capacity of a volume set can consume all or a portion of the available disk capacity in a RAID set. Multiple volume sets can exist in a RAID set.
For the SATA RAID controller, a volume set must be created either on an existing RAID set or on a group of available individual disks (disks that are about to become part of a RAID set). If there are pre-existing RAID sets with available capacity and enough disks for the desired RAID level, then the volume set can be created in the existing RAID set of the user’s choice.
In the illustration, volume 1 can be assigned a RAID level 5 of operation while volume 0 might be assigned a RAID level 1E of operation. Alternatively, the free space can be used to create volume 2, which could then be set to use RAID level 5.
1.3.3 Ease of Use Features
1.3.3.1 Foreground Availability/Background Initialization
RAID 0 and RAID 1 volume sets can be used immediately after creation because they do not create parity data. However, RAID 3, 5 and 6 volume sets must be initialized to generate parity information. In Background Initialization, the initialization proceeds as a background task, and the volume set is fully accessible for system reads and writes. The operating system can instantly access the newly created arrays without requiring a reboot and without waiting for the initialization to complete. Furthermore, the volume set is protected against disk failures while initializing. If using Foreground Initialization, the initialization process must be completed before the volume set is ready for system accesses.
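The idea of background initialization can be illustrated with a small, deterministic simulation. This is a conceptual sketch, not controller firmware; the name background_init and the stripe structure are invented for illustration:

```python
# Sketch of background initialization: the volume is usable immediately,
# while a background task walks the stripes computing parity.
from functools import reduce

def background_init(stripes):
    """Generator: initialize one stripe's parity per step."""
    for s in stripes:
        s["parity"] = reduce(lambda a, b: a ^ b, s["data"])  # XOR parity
        s["initialized"] = True
        yield

stripes = [{"data": [i, i + 1, i + 2], "initialized": False} for i in range(4)]
task = background_init(stripes)

# The host can read stripe data before initialization reaches it...
assert stripes[3]["data"][0] == 3
next(task)                       # background task initializes stripe 0
assert stripes[0]["initialized"] and not stripes[3]["initialized"]
for _ in task:                   # remaining stripes complete in background
    pass
assert all(s["initialized"] for s in stripes)
```

In a real controller the "steps" are interleaved with host I/O by the firmware scheduler rather than driven cooperatively as here.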
1.3.3.2 Array Roaming
The SATA RAID controllers store RAID configuration information on the disk drives. The controller therefore protects the configuration settings in the event of controller failure. Array roaming allows administrators to move a complete RAID set to another system without losing RAID configuration information or data on that RAID set. Therefore, if a server fails, the RAID set disk drives can be moved to another server with an Areca RAID controller and the disks can be inserted in any order.
1.3.3.3 Online Capacity Expansion
Online Capacity Expansion makes it possible to add one or more physical drives to a volume set without interrupting server operation, eliminating the need to backup and restore after reconfiguration of the RAID set. When disks are added to a RAID set, unused capacity is added to the end of the RAID set. Then, data on the existing volume sets (residing on the newly expanded RAID set) is redistributed evenly across all the disks. A contiguous block of unused capacity is made available on the RAID set. The unused capacity can be used to create additional volume sets.
A disk to be added to a RAID set must be in normal mode (not failed), free (not a spare, not part of a RAID set, and not passed through to the host), and must have at least the same capacity as the smallest disk already in the RAID set.
Capacity expansion is only permitted to proceed if all volumes on the RAID set are in the normal status. During the expansion process, the volume sets being expanded can be accessed by the host system. In addition, volume sets with RAID level 1, 1E, 3, 5 or 6 are protected against data loss in the event of disk failure(s). In the case of disk failure, the volume set transitions from the “migrating” state to the “migrating+degraded” state. When the expansion is completed, the volume set then transitions to the “degraded” state. If a global hot spare is present, it further transitions to the “rebuilding” state.
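The state transitions described above can be modeled as a small lookup table (an illustrative model using the manual's state names; not controller firmware):

```python
# (current state, event) -> next state, per the expansion
# behavior described in the text.
EXPANSION_TRANSITIONS = {
    ("migrating", "disk_failure"): "migrating+degraded",
    ("migrating+degraded", "expansion_complete"): "degraded",
    ("degraded", "global_hot_spare_present"): "rebuilding",
}

def next_state(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return EXPANSION_TRANSITIONS.get((state, event), state)

state = "migrating"
for event in ("disk_failure", "expansion_complete",
              "global_hot_spare_present"):
    state = next_state(state, event)
    print(state)
```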
The expansion process is illustrated in the following figure.
The SATA RAID controller redistributes the original volume set over the original and newly added disks, using the same fault-tolerance configuration. The unused capacity on the expanded RAID set can then be used to create an additional volume set, with a different fault-tolerance setting (if required by the user).
1.3.3.4 Online RAID Level and Stripe Size Migration
For users who wish to upgrade RAID capabilities later, Areca online RAID level/stripe size migration allows a simplified upgrade to any supported RAID level without having to reinstall the operating system.
The SATA RAID controllers can migrate both the RAID level and stripe size of an existing volume set while the server is online and the volume set is in use. Online RAID level/stripe size migration can prove helpful during performance tuning activities as well as when additional physical disks are added to the SATA RAID controller. For example, in a system using two drives in RAID level 1, it is possible to add a single drive to increase capacity while retaining fault tolerance. (Normally, expanding a RAID level 1 array would require the addition of two disks.) A third disk can be added to the existing RAID logical drive and the volume set can then be migrated from RAID level 1 to 5. The result would be parity fault tolerance and double the available capacity without taking the system down. A fourth disk could be added to migrate to RAID level 6. It is only possible to migrate to a higher RAID level by adding a disk; disks in an existing array cannot be reconfigured for a higher RAID level without adding a disk.
Online migration is only permitted to begin if all volumes to be migrated are in the normal mode. During the migration process, the volume sets being migrated can be accessed by the host system. In addition, volume sets with RAID level 1, 1E, 3, 5 or 6 are protected against data loss in the event of disk failure(s). In the case of disk failure, the volume set transitions from the “migrating” state to the “migrating+degraded” state. When the migration is completed, the volume set transitions to the “degraded” state. If a global hot spare is present, it further transitions to the “rebuilding” state.
1.3.3.5 Online Volume Expansion
Performing a volume expansion on the controller is the process of growing only the size of the last volume. A more flexible option is for the array to concatenate an additional drive into the RAID set and then expand the volumes on the fly. This happens transparently while the volumes are online; at the end of the process, the operating system will detect free space after the existing volume.
Windows, NetWare and other advanced operating systems support volume expansion, which enables you to incorporate the additional free space within the volume into the operating system partition. The operating system partition is extended to incorporate the free space so it can be used by the operating system without creating a new operating system partition.
You can use the Diskpart.exe command-line utility, included with Windows Server 2003 or the Windows 2000 Resource Kit, to extend an existing partition into the free space on a dynamic disk.
Third-party software vendors have created utilities that can be used to repartition disks without data loss. Most of these utilities work offline; Partition Magic is one such utility.
1.4 High availability
1.4.1 Global Hot Spares
A Global Hot Spare is an unused online available drive, ready to replace a failed disk. The Global Hot Spare is one of the most important features that the SATA RAID controllers provide to deliver a high degree of fault tolerance. A Global Hot Spare is a spare physical drive that has been marked as a global hot spare and therefore is not a member of any RAID set. If a disk drive used in a volume set fails, the Global Hot Spare will automatically take its place and the data previously located on the failed drive is reconstructed on the Global Hot Spare.
For this feature to work properly, the global hot spare must have at least the same capacity as the drive it replaces. Global Hot Spares only work with RAID level 1, 1E, 3, 5, or 6 volume sets. You can configure up to three global hot spares with the ARC-11xx/12xx.
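A minimal sketch of the capacity rule for spare selection (the eligibility test comes from the text; preferring the smallest eligible spare is an assumption for illustration, not the controller's documented policy):

```python
def pick_global_hot_spare(spare_sizes_gb, failed_drive_gb):
    """Return the spare capacity (GB) to use for a rebuild, or None.
    A spare is eligible only if it is at least as large as the
    drive it replaces; the smallest eligible spare is chosen so
    larger spares remain available for later failures."""
    eligible = [s for s in spare_sizes_gb if s >= failed_drive_gb]
    return min(eligible) if eligible else None

print(pick_global_hot_spare([200, 500, 750], 400))  # 500
print(pick_global_hot_spare([200, 300], 400))       # None
```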
The Create Hot Spare option gives you the ability to define a global hot spare disk drive. To effectively use the global hot spare feature, you must always maintain at least one drive that is marked as a global spare.
Important:
The hot spare must have at least the same capacity as the drive it replaces.
1.4.2 Hot-Swap Disk Drive Support
The SATA controller chip includes a protection circuit that supports the replacement of SATA hard disk drives without having to shut down or reboot the system. A removable hard drive tray can deliver “hot swappable” fault-tolerant RAID solutions at prices much lower than the cost of conventional SCSI hard disk RAID controllers. This feature provides advanced fault-tolerant RAID protection and “online” drive replacement.
1.4.3 Auto Declare Hot-Spare
If a disk drive is brought online into a system operating in degraded mode, the SATA RAID controllers will automatically declare the new disk as a spare and begin rebuilding the degraded volume. The Auto Declare Hot-Spare function requires that the new drive be at least the same capacity as the smallest drive contained within the volume set in which the failure occurred.
If the system is in the normal status, the newly installed drive will be reconfigured as an online free disk. However, the newly installed drive is automatically assigned as a hot spare if any hot spare disk was consumed by a rebuild and has not yet been replaced by a new drive. In this condition, the Auto Declare Hot-Spare status will disappear if the RAID subsystem has since been powered off and on.
The Hot-Swap function can be used to rebuild disk drives in arrays with data redundancy such as RAID level 1, 1E, 3, 5, and 6.
1.4.4 Auto Rebuilding
If a hot spare is available, the rebuild starts automatically when a drive fails. The SATA RAID controllers automatically and transparently rebuild failed drives in the background at user-definable rebuild rates.
If a hot spare is not available, the failed disk drive must be replaced with a new disk drive so that the data on the failed drive can be automatically rebuilt and so that fault tolerance can be maintained.
The SATA RAID controllers will automatically restart the system and the rebuild process if the system is shut down or powered off abnormally during a reconstruction procedure.
When a disk is hot swapped, although the system is functionally operational, the system may no longer be fault tolerant. Fault tolerance will be lost until the removed drive is replaced and the rebuild operation is completed.
During the automatic rebuild process, system activity will contin­ue as normal, however, the system performance and fault toler­ance will be affected.
1.4.5 Adjustable Rebuild Priority
Rebuilding a degraded volume incurs a load on the RAID subsystem. The SATA RAID controllers allow the user to select the rebuild priority to balance volume access and rebuild tasks appropriately. The Background Task Priority is a relative indication of how much time the controller devotes to a background operation, such as rebuilding or migrating.
The SATA RAID controller allows the user to choose the task priority (Ultra Low (5%), Low (20%), Medium (50%), High (80%)) to balance volume set access and background tasks appropriately. For high array performance, specify an Ultra Low value. As with volume initialization, a volume rebuild does not require a system reboot after it completes.
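The four priority settings map directly to the share of controller time devoted to the background task; a rough (hypothetical) rebuild-time estimate follows from that share:

```python
# Background Task Priority settings and their time shares,
# as listed in the text.
PRIORITY_SHARE = {"Ultra Low": 0.05, "Low": 0.20,
                  "Medium": 0.50, "High": 0.80}

def rebuild_hours(volume_gb, drive_mb_per_s, priority):
    """Very rough rebuild-time estimate: assumes the rebuild rate
    scales linearly with the priority share, which real hardware
    only approximates."""
    share = PRIORITY_SHARE[priority]
    seconds = volume_gb * 1024 / (drive_mb_per_s * share)
    return seconds / 3600

# A higher priority finishes sooner but leaves less time for host I/O.
print(round(rebuild_hours(500, 60, "High"), 1))
print(round(rebuild_hours(500, 60, "Ultra Low"), 1))
```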
1.5 High Reliability
1.5.1 Hard Drive Failure Prediction
In an effort to help users avoid data loss, disk manufacturers are now incorporating logic into their drives that acts as an "early warning system" for pending drive problems. This system is called S.M.A.R.T. The disk's integrated controller works with multiple sensors to monitor various aspects of the drive's performance, determines from this information whether the drive is behaving normally, and makes status information available to RAID controller firmware that polls the drive. SMART can often predict a problem before failure occurs. The controllers will recognize a SMART error code and notify the administrator of an impending hard drive failure.
1.5.2 Auto Reassign Sector
Under normal operation, even initially defect-free drive media can develop defects. This is a common phenomenon. The bit density and rotational speed of disks increase every year, and so does the potential for problems. Usually a drive can internally remap bad sectors without external help, using the cyclic redundancy check (CRC) checksums stored at the end of each sector.
SATA drives perform automatic defect re-assignment for both read and write errors. Writes are always completed: if a location to be written is found to be defective, the drive will automatically relocate that write command to a new location and map out the defective location. If there is a recoverable read error, the correct data will be transferred to the host and that location will be tested by the drive to be certain the location is not defective. If it is found to have a defect, data will be automatically relocated, and the defective location is mapped out to prevent future write attempts.
In the event of an unrecoverable read error, the error will be reported to the host and the location flagged as potentially defective. A subsequent write to that location will initiate a sector test and relocation should that location have a defect. Auto Reassign Sector does not affect disk subsystem performance because it runs as a background task. Auto Reassign Sector discontinues when the operating system makes a request.
1.5.3 Consistency Check
A consistency check is a process that verifies the integrity of redundant data. For example, performing a consistency check of a mirrored drive assures that the data on both drives of the mirrored pair is exactly the same. To verify RAID 3, 5 or 6 redundancy, a consistency check reads all associated data blocks, computes parity, reads the stored parity, and verifies that the computed parity matches the read parity.

Consistency checks are very important because they detect and correct parity errors and bad disk blocks in the drive. A consistency check forces every block on a volume to be read, and any bad blocks are marked; those blocks are not used again. This is critical because a bad disk block can prevent a disk rebuild from completing. We strongly recommend that you run consistency checks on a regular basis, at least once per week. Note that consistency checks degrade performance, so you should run them when the system load can tolerate it.
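For the parity-based levels, the check described above amounts to recomputing the XOR parity of each stripe and comparing it with the stored parity. A simplified single-stripe sketch (RAID 6 additionally verifies a second, independent parity, which is omitted here):

```python
def xor_parity(blocks):
    """XOR all data blocks of a stripe together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def stripe_is_consistent(data_blocks, stored_parity):
    """One step of a consistency check: read the data blocks,
    compute parity, and verify it matches the read parity."""
    return xor_parity(data_blocks) == stored_parity

data = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\xaa"]
good = xor_parity(data)                         # b"\x55\x55"
print(stripe_is_consistent(data, good))         # True
print(stripe_is_consistent(data, b"\x00\x00"))  # False
```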
1.6 Data Protection
1.6.1 BATTERY BACKUP
The SATA RAID controllers are armed with a Battery Backup Module (BBM). While an Uninterruptible Power Supply (UPS) protects most servers from power fluctuations or failures, a BBM provides an additional level of protection. In the event of a power failure, a BBM supplies power to retain data in the RAID controller’s cache, thereby permitting any potentially dirty data in the cache to be flushed out to secondary storage when power is restored.
The batteries in the BBM are recharged continuously through a trickle-charging process whenever the system power is on. The batteries protect data in a failed server for up to three or four days, depending on the size of the memory module. Under normal operating conditions, the batteries last for three years before replacement is necessary.
1.6.2 RECOVERY ROM
The SATA RAID controller firmware is stored on the flash ROM and is executed by the I/O processor. The firmware can also be updated through the PCI-X/PCIe bus port or Ethernet port (if equipped) without the need to replace any hardware chips. During the controller firmware upgrade flash process, it is possible for a problem to occur, resulting in corruption of the controller firmware. With our Redundant Flash Image feature, the controller will revert to the last known good version of firmware and continue operating. This reduces the risk of system failure due to a firmware crash.
1.7 Understanding RAID
RAID is an acronym for Redundant Array of Independent Disks. It is an array of multiple independent hard disk drives that provides high performance and fault tolerance. The SATA RAID controller implements several levels of the Berkeley RAID technology. An appropriate RAID level is selected when the volume sets are defined or created. This decision should be based on the desired disk capacity, data availability (fault tolerance or redundancy), and disk performance. The following section discusses the RAID levels supported by the SATA RAID controller.
The SATA RAID controller makes the RAID implementation and the disks’ physical configuration transparent to the host operating system. This means that the host operating system drivers and software utilities are not affected, regardless of the RAID level selected. Correct installation of the disk array and the controller requires a proper understanding of RAID technology and its concepts.
1.7.1 RAID 0
RAID 0, also referred to as striping, writes stripes of data across multiple disk drives instead of just one disk drive. RAID 0 does not provide any data redundancy, but does offer the best high-speed data throughput. RAID 0 breaks up data into smaller blocks and then writes a block to each drive in the array. Disk striping enhances performance because multiple drives are accessed simultaneously; the reliability of RAID level 0 is lower, however, because the entire array will fail if any one disk drive fails, due to the lack of redundancy.
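The block-distribution scheme can be sketched as follows (an illustrative model; "block" here stands in for the controller's stripe unit):

```python
def stripe_raid0(data, n_drives, block_size):
    """RAID 0: break data into blocks and write each block to the
    next drive in round-robin order.  No redundancy: losing any
    one drive loses the whole array."""
    drives = [bytearray() for _ in range(n_drives)]
    blocks = [data[i:i + block_size]
              for i in range(0, len(data), block_size)]
    for n, block in enumerate(blocks):
        drives[n % n_drives].extend(block)
    return drives

print(stripe_raid0(b"ABCDEFGH", n_drives=2, block_size=2))
# [bytearray(b'ABEF'), bytearray(b'CDGH')]
```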
1.7.2 RAID 1
RAID 1 is also known as “disk mirroring”; data written to one disk drive is simultaneously written to another disk drive. Read performance may be enhanced if the array controller can access both members of a mirrored pair in parallel. During writes, there will be a minor performance penalty when compared to writing to a single disk. If one drive fails, all data (and software applications) are preserved on the other drive. RAID 1 offers extremely high data reliability, but at the cost of doubling the required data storage capacity.
1.7.3 RAID 1E
RAID 1E is a combination of RAID 0 and RAID 1, combining striping with disk mirroring. RAID level 1E combines the fast performance of level 0 with the data redundancy of level 1. In this configuration, data is distributed across several disk drives, similar to level 0, and is then duplicated onto another set of drives for data protection. RAID 1E has traditionally been implemented using an even number of disks, but some hybrids can use an odd number of disks as well. The illustration is an example of a hybrid RAID 1E array comprised of five disks: A, B, C, D and E. In this configuration, each strip is mirrored on an adjacent disk with wrap-around. This scheme, or a slightly modified version of it, is often referred to as RAID 1E and was originally proposed by IBM. When the number of disks comprising a RAID 1E is even, the striping pattern is identical to that of a traditional RAID 1E, with each disk being mirrored by exactly one other unique disk; all the characteristics of a traditional RAID 1E therefore apply in that case. Areca RAID 1E offers a little more flexibility in choosing the number of disks that can be used to constitute an array: the number can be even or odd.
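The adjacent-disk, wrap-around mirroring described above can be modeled as a strip-to-disk map (a simplified layout model, not the controller's exact placement):

```python
def raid1e_layout(n_strips, n_disks):
    """Map each strip to (primary_disk, mirror_disk).  The mirror
    copy lives on the next disk, wrapping around at the last disk,
    which is why an odd disk count (minimum 3) also works."""
    return [(s % n_disks, (s + 1) % n_disks) for s in range(n_strips)]

# Five disks (0..4): strip 4's mirror wraps around to disk 0.
for strip, (primary, mirror) in enumerate(raid1e_layout(5, 5)):
    print(f"strip {strip}: primary disk {primary}, mirror disk {mirror}")
```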
1.7.4 RAID 3
RAID 3 provides disk striping and complete data redundancy through a dedicated parity drive. RAID 3 breaks up data into smaller blocks, calculates parity by performing an exclusive-or on the blocks, and then writes the blocks to all but one drive in the array. The parity data created during the exclusive-or is then written to the last drive in the array. If a single drive fails, data is still available by computing the exclusive-or of the contents of the corresponding strips of the surviving member disks. RAID 3 is best for applications that require very fast data-transfer rates or long data blocks.
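The exclusive-or recovery works because XOR is its own inverse: XORing the parity with the surviving blocks reproduces the missing block. A single-stripe sketch:

```python
def xor_blocks(blocks):
    """XOR a list of equal-length blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_blocks(data)      # written to the dedicated parity drive

# Drive 1 fails: recover its block from the survivors plus parity.
recovered = xor_blocks([data[0], data[2], parity])
print(recovered == data[1])  # True
```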
1.7.5 RAID 5
RAID 5 is sometimes called striping with distributed parity at the block level. In RAID 5, the parity information is written across all of the drives in the array rather than being concentrated on a dedicated parity disk. If one drive in the system fails, the parity information can be used to reconstruct the data from that drive. All drives in the array can be used for seek operations at the same time, greatly increasing the performance of the RAID system. This relieves the write bottleneck that characterizes RAID 4, and is the primary reason that RAID 5 is more often implemented in RAID arrays.
1.7.6 RAID 6
RAID 6 provides the highest reliability, but is not yet widely used. It is similar to RAID 5, but it performs two different parity computations, or the same computation on overlapping subsets of the data. RAID 6 can offer fault tolerance greater than RAID 1 or RAID 5 while consuming only the capacity of 2 disk drives for distributed parity data. RAID 6 is an extension of RAID 5 that uses a second, independent distributed parity scheme. Data is striped on a block level across a set of drives, and then a second set of parity is calculated and written across all of the drives.
Summary of RAID Levels
The SATA RAID controller supports RAID Level 0, 1, 1E, 3, 5 and 6. The table below provides a summary of RAID levels.
Features and Performance

| RAID Level | Description | Min. Drives | Data Reliability | Data Transfer Rate | I/O Request Rates |
|---|---|---|---|---|---|
| 0 | Also known as striping. Data distributed across multiple drives in the array. There is no data protection. | 1 | No data protection | Very high | Very high for both reads and writes |
| 1 | Also known as mirroring. All data replicated on N separated disks; N is almost always 2. A high-availability solution, but due to the 100% duplication also a costly one: half of the drive capacity in the array is devoted to mirroring. | 2 | Lower than RAID 6; higher than RAID 3, 5 | Reads are higher than a single disk; writes similar to a single disk | Reads are twice as fast as a single disk; writes are similar to a single disk |
| 1E | Also known as mirroring and striping. Data is distributed across several disks, as in RAID 0, and each strip is mirrored on an adjacent disk with wrap-around; an even or odd number of disks may be used. | 3 | Lower than RAID 6; higher than RAID 3, 5 | Transfer rates more similar to RAID 1 than RAID 0 | Reads are twice as fast as a single disk; writes are similar to a single disk |
| 3 | Also known as Bit-Interleaved Parity. Data and parity information is subdivided and distributed across all disks. Parity data consumes the capacity of 1 disk drive. Parity information normally stored on a dedicated parity disk. | 3 | Lower than RAID 1, 1E, 6; higher than a single drive | Reads are similar to RAID 0; writes are slower than a single disk | Reads are close to twice as fast as a single disk; writes are similar to a single disk |
| 5 | Also known as Block-Interleaved Distributed Parity. Data and parity information is subdivided and distributed across all disks. Parity data consumes the capacity of 1 disk drive. | 3 | Lower than RAID 1, 1E, 6; higher than a single drive | Reads are similar to RAID 0; writes are slower than a single disk | Reads are similar to RAID 0; writes are slower than a single disk |
| 6 | Provides the highest reliability. Similar to RAID 5, but performs two different parity computations. Offers fault tolerance greater than RAID 1 or RAID 5. Parity data consumes the capacity of 2 disk drives. | 4 | Highest reliability | Reads are similar to RAID 0; writes are slower than a single disk | Reads are similar to RAID 0; writes are slower than a single disk |
HARDWARE INSTALLATION
2. Hardware Installation
This section describes the procedures for installing the SATA RAID controllers.
2.1 Before You Begin Installation
Thank you for purchasing the SATA RAID controller as your RAID data storage and management system. This user guide gives simple step-by-step instructions for installing and configuring the SATA RAID controller. To ensure personal safety and to protect your equipment and data, carefully read the information following the package content list before you begin installing.
Package Contents
If your package is missing any of the items listed below, contact your local dealer before proceeding with installation (disk drives and disk mounting brackets are not included):
ARC-11xx Series SATA RAID Controller
• 1 x PCI-X SATA RAID Controller in an ESD-protective bag
• 4/8/12/16/24 x SATA interface cables (one per port)
• 1 x Installation CD
• 1 x User Manual
ARC-11xxML/12xxML Series SATA RAID Controller
• 1 x PCI-X SATA RAID Controller in an ESD-protective bag
• 1 x Installation CD
• 1 x User Manual
ARC-12xx Series SATA RAID Controller
• 1 x PCI-Express SATA RAID Controller in an ESD-protective bag
• 4/8/12/16/24 x SATA interface cables (one per port)
• 1 x Installation CD
• 1 x User Manual
2.2 Board Layout
Follow the instructions below to install a PCI RAID Card into your PC / Server.
Figure 2-1, ARC-1110/1120 (4/8-port PCI-X SATA RAID Controller)
Figure 2-2, ARC-1210/1220 (4/8-port PCI-Express SATA RAID Controller)
Figure 2-3, ARC-1110ML/1120ML (4/8-port PCI-X SATA RAID Controller)
Figure 2-4, ARC-1210ML/1220ML (4-port PCI-Express SATA RAID Controller)
Figure 2-5, ARC-1130/1160 (12/16-port PCI-X SATA RAID Controller)
Figure 2-6, ARC-1130ML/1160ML (12/16-port PCI-X SATA RAID Controller)
Figure 2-7, ARC-1230/1260 (12/16-port PCI-Express SATA RAID Controller)
Figure 2-8, ARC-1170 (24-port PCI-X SATA RAID Controller)
Figure 2-9, ARC-1280 (24-port PCI-Express SATA RAID Controller)
Figure 2-10, ARC-1231ML/1261ML/1280ML (12/16/24-port PCI-Express SATA RAID Controller)
Tools Required
An ESD grounding strap or mat is required. Also required are standard hand tools to open your system’s case.
System Requirement
The controller can be installed in a universal PCI slot and requires a motherboard that meets the following:

ARC-11xx series requires one of the following slots:
• Complies with PCI Revision 2.3, 32/64-bit, 33/66 MHz, 3.3V.
• Complies with PCI-X, 32/64-bit, 66/100/133 MHz, 3.3V.

ARC-12xx series requires:
• Complies with PCI-Express x8.

The SATA RAID controller may be connected to up to 4, 8, 12, 16, or 24 SATA II hard drives using the supplied cables. Optional cables are required to connect any drive activity LEDs and fault LEDs on the enclosure to the SATA RAID controller.
Installation Tools
The following items may be needed to assist with installing the SATA RAID controller into an available PCI expansion slot.
• Small screwdriver
• Host system hardware manuals and manuals for the disk or enclosure being installed.
Personal Safety Information
To ensure personal safety as well as the safety of the equipment:
• Always wear a grounding strap or work on an ESD-protective mat.
• Before opening the system cabinet, turn off power switches and unplug the power cords. Do not reconnect the power cords until you have replaced the covers.
Warning:
High voltages may be found inside computer equipment. Before installing any of the hardware in this package or removing the protective covers of any computer equipment, turn off power switches and disconnect power cords. Do not reconnect the power cords until you have replaced the covers.
Electrostatic Discharge
Static electricity can cause serious damage to the electronic components on this SATA RAID controller. To avoid damage caused by electrostatic discharge, observe the following precautions:
• Don’t remove the SATA RAID controller from its anti-static packaging until you are ready to install it into a computer case.
• Handle the SATA RAID controller by its edges or by the metal mounting brackets at each end.
• Before you handle the SATA RAID controller in any way, touch a grounded, anti-static surface, such as an unpainted portion of the system chassis, for a few seconds to discharge any built-up static electricity.
2.3 Installation
Follow the instructions below to install a SATA RAID controller into your PC / Server.
Step 1. Unpack
Unpack and remove the SATA RAID controller from the package. Inspect it carefully. If anything is missing or damaged, contact your local dealer.
Step 2. Power PC/Server Off
Turn off the computer and remove the AC power cord. Remove the system’s cover. See the computer system documentation for instructions.
Step 3. Install the PCI RAID Cards
To install the SATA RAID controller, remove the mounting screw and existing bracket from the rear panel behind the selected PCI slot. Align the gold-fingered edge on the card with the selected PCI expansion slot. Press down gently but firmly to ensure that the card is properly seated in the slot, as shown in Figure 2-11. Next, screw the bracket into the computer chassis. ARC-11xx controllers fit in both PCI (32-bit/3.3V) and PCI-X slots, but deliver the best performance when installed in a 64-bit/133MHz PCI-X slot. ARC-12xx controllers require a PCI-Express x8 slot.
Figure 2-11, Insert SATA RAID controller into a PCI-X slot
Step 4. Mount the Cages or Drives
Remove the front bezel from the computer chassis and install the cages or SATA drives in the computer chassis. Load the drives into the drive trays if cages are installed. Be sure that power is connected to either the cage backplane or the individual drives.
Figure 2-12, Mount Cages & Drives
Step 5-1. Connect the SATA cable
Model ARC-11XX and ARC-12XX controllers have dual-layer SATA internal connectors. If you have not already connected your SATA cables, use the cables included with your kit to connect the controller to the SATA hard drives. The cable connectors are all identical, so it does not matter which end you connect to your controller, SATA hard drive, or cage backplane SATA connector.
Figure 2-13, SATA Cable
Note:
The SATA cable connections must match your HDD cage. For example: channel 1 of the RAID card connects to channel 1 of the HDD cage, channel 2 of the RAID card connects to channel 2 of the HDD cage, and so on.
Step 5-2. Connect the Multi-lane cable
Model ARC-11XXML controllers have Multi-lane internal connectors, each of which can support up to four SATA drives. These adapters can be installed in a server RAID enclosure with a Multi-lane connector (SFF-8470) backplane. Multi-lane cables are not included in the ARC-11XXML package. If you have not already connected your Multi-lane cables, use the cables included with your enclosure to connect your controller to the Multi-lane connector backplane. The type of cable will depend on the enclosure you have. The following diagram shows one example of a Multi-lane cable.
Figure 2-14, Multi-Lane Cable
Step 5-3. Connect the Mini SAS 4i to 4*SATA cable
Model ARC-1231ML/1261ML/1280ML controllers have Mini SAS 4i (SFF-8087) internal connectors, each of which can support up to four SATA drives. These adapters can be installed in a server RAID enclosure with a standard SATA connector backplane. Mini SAS 4i to SATA cables are included in the ARC-1231ML/1261ML/1280ML package. The following diagram shows the Mini SAS 4i to 4*SATA cables.
Figure 2-15, Mini SAS 4i to 4*SATA
For the sideband cable signals, please refer to the SGPIO bus description on page 51.
Step 5-4. Connect the Mini SAS 4i to Multi-lane cable
Model ARC-1231ML/1261ML/1280ML controllers have Mini SAS 4i internal connectors, each of which can support up to four SATA drives. These controllers can be installed in a server RAID enclosure with a Multi-lane connector (SFF-8470) backplane. Multi-lane cables are not included in the ARC-12XXML package. If you have not already connected your Mini SAS 4i to Multi-lane cables, purchase Mini SAS 4i to Multi-lane cables that fit your enclosure and connect your controller to the Multi-lane connector backplane. The type of cable will depend on the enclosure you have. The following diagram shows one example of a Mini SAS 4i to Multi-lane cable.
Figure 2-16, Mini SAS 4i to Multi-lane
Step 5-5. Connect the Mini SAS 4i to Mini SAS 4i cable
Model ARC-1230ML/1260ML/1280ML controllers have Mini SAS 4i internal connectors, each of which can support up to four SATA drives. These adapters can be installed in a server RAID enclosure with a Mini SAS 4i internal connector backplane. Mini SAS 4i cables are not included in the ARC-12XXML package. This Mini SAS 4i cable has eight signal pins to support four SATA drives and six pins for the SGPIO (Serial General Purpose Input/Output) side-band signals. The SGPIO bus is used for efficient LED management and for sensing drive Locate status. Please see page 51 for the details of the SGPIO bus.
Unpack and remove the PCI RAID card. Inspect it carefully. If anything is missing or damaged, contact your local dealer.
Figure 2-17, Mini SAS 4i to Mini SAS 4i
Step 6. Install the LED cable (optional)
ARC-1XXX Series Fault/Activity Header Intelligent Electronics Schematic.
The intelligent LED controller outputs a low-level pulse to determine whether status LEDs are attached to pin sets 1 and 2. This allows automatic controller configuration of the LED output. If the logical level differs between the first two sets of the HDD LED header (LED attached to Set 1 but not Set 2), the controller will assign the first HDD LED header as the global indicator connector. Otherwise, each LED output will show only individual drive status.
The SATA RAID controller provides four kinds of LED status connectors:

A: Global indicator connector, which lights when any drive is active.
B: Individual LED indicator connector, for each drive channel.
C: I2C connector, for SATA proprietary backplane enclosures.
D: SGPIO connector, for SAS backplane enclosures.

The following diagrams and descriptions explain each type of connector.
Note:
A cable for the global indicator comes with your computer system. Cables for the individual drive LEDs may come with a drive cage, or you may need to purchase them.
A: Global Indicator Connector
If the system will use only a single global indicator, attach the global indicator cable to the two-pin HDD LED connector. The following diagrams show the connector and pin locations.
Figure 2-18, ARC-1110/1120/1210/1220 global LED connection for Computer Case.
Figure 2-19, ARC-1130/1160/1230/1260 global LED connection for Computer Case.
Figure 2-20, ARC-1170 global LED connection for Computer Case.
Figure 2-21, ARC-1280 global LED connection for Computer Case.
Figure 2-22, ARC-1231ML/ 1261ML/1280ML global LED connection for Computer Case.
B: Individual LED indicator connector
Connect the cables for the drive activity LEDs and fault LEDs between the backplane of the cage and the respective connectors on the SATA RAID controller. The following describes the fault/activity LED behavior.
LED: Activity LED
Normal Status: When the activity LED is illuminated, there is I/O activity on that disk drive. When the LED is dark, there is no activity on that disk drive.
Problem Indication: N/A

LED: Fault LED
Normal Status: When the fault LED is solid illuminated, there is no disk present. When the fault LED is off, that disk is present and its status is normal.
Problem Indication: When the red LED is blinking slowly (2 times/sec), that disk drive has failed and should be hot-swapped immediately. When the activity LED is illuminated and the red LED is blinking fast (10 times/sec), there is rebuilding activity on that disk drive.
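The LED states above can be read as a small decision procedure. The following sketch is purely illustrative; it is not Areca firmware, and the function name and state strings are invented for this example:

```python
# Illustrative decoding of the fault/activity LED states described above.
def drive_status(activity_on: bool, fault_led: str) -> str:
    """fault_led is one of: 'off', 'solid', 'slow_blink', 'fast_blink'."""
    if fault_led == "solid":
        return "no disk present"
    if fault_led == "slow_blink":                  # red, 2 blinks/sec
        return "drive failed - hot-swap immediately"
    if fault_led == "fast_blink" and activity_on:  # red, 10 blinks/sec
        return "rebuilding"
    if fault_led == "off":
        return "busy (I/O activity)" if activity_on else "idle"
    return "unknown"
```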
Figure 2-23, ARC-1110/1120/1210/1220 Individual LED indicators connector, for each drive channel.
Figure 2-24, ARC-1130/1160/1230/1260 Individual LED indicators connector, for each drive channel.
Figure 2-25, ARC-1170 Individual LED indicators connector, for each drive channel.
Figure 2-26, ARC-1280 Individual LED indicators connector, for each drive channel.
Figure 2-27, ARC-1231ML/1261ML/1280ML Individual LED indicators connector, for each drive channel.
C: I2C Connector
You can also connect the I2C interface to a proprietary SATA backplane enclosure. This can reduce the number of activity LED and/or fault LED cables. The I2C interface can also cascade to another SATA backplane enclosure for additional channel status display.
Figure 2-28, Activity/Fault LED I2C connector connected between SATA RAID Controller & SATA HDD Cage backplane.
Figure 2-29, Activity/Fault LED I2C connector connected between SATA RAID Controller & 4 SATA HDD backplane.
Note:
Ci-Design supports this feature in its 4-port 12-6336-05A SATA II backplane.
The following is the I2C signal name description for the LCD & Fault/Activity LED.
PIN 1: Power (+5V)                 PIN 2: GND
PIN 3: LCD Module Interrupt        PIN 4: Fault/Activity Interrupt
PIN 5: LCD Module Serial Data      PIN 6: Fault/Activity Clock
PIN 7: Fault/Activity Serial Data  PIN 8: LCD Module Clock
D: SGPIO bus
The preferred I/O connector for server backplanes is the Mini SAS 4i internal serial-attachment connector. This connector has eight signal pins to support four SATA drives and six pins for the SGPIO (Serial General Purpose Input/Output) side-band signals. The SGPIO bus is used for efficient LED management and for sensing drive Locate status. See SFF-8485 for the specification of the SGPIO bus.
The number of drives supported can be increased in increments of four by adding similar backplanes, to a maximum of 24 drives (6 backplanes).
LED Management: The backplane may contain LEDs to indicate drive status. Light from the LEDs can be transmitted to the outside of the server using light pipes mounted on the SAS drive tray. A small EPLD microcontroller on the backplane, connected via the SGPIO bus to an ARC-1231ML/1261ML/1280ML SATA RAID controller, can control the LEDs. Activity: blinking 5 times/second. Fault: solid illuminated.
Drive Locate Circuitry: The location of a drive may be detected by sensing the voltage level of one of the pre-charge pins before and after a drive is installed. Fault (red): blinking 2 times/second.
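As a rough sketch of how per-drive LED state travels on the SDataOut line: SFF-8485 commonly assigns three output bits per drive (activity, locate, error), so the four drives behind one Mini SAS 4i connector need twelve bits per frame. The function below is an invented illustration, not controller firmware, and the exact bit ordering on a given backplane is an assumption:

```python
# Illustrative only: pack per-drive LED state into an SGPIO SDataOut bit
# stream. SFF-8485 commonly carries three output bits per drive (activity,
# locate, error); the ordering here is an assumption for illustration.
def sgpio_sdataout(drives):
    """drives: iterable of (activity, locate, error) booleans per drive.
    Returns the flat list of bits shifted out on SDataOut."""
    bits = []
    for activity, locate, error in drives:
        bits.extend([int(activity), int(locate), int(error)])
    return bits

# Four drives behind one Mini SAS 4i connector -> 12 bits per frame;
# six such backplanes cover the 24-drive maximum mentioned above.
frame = sgpio_sdataout([(True, False, False)] * 4)
```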
The following signal denes the SGPIO assignments for the Min SAS 4i connector in ARC-1231ML/1261ML/1280ML.
Pin        Description
SideBand0  SClock (clock signal)
SideBand1  SLoad (last clock of a bit stream)
SideBand2  Ground
SideBand3  Ground
SideBand4  SDataOut (serial data output bit stream)
SideBand5  SDataIn (serial data input bit stream)
The following signal denes the sideband connector which can work with Areca sideband cable.
The sideband header is located at backplane. For SGPIO to work properly, please connect Areca 8-pin sideband cable to the sideband header as shown above. See the table for pin denitions.
Step 7. Re-check the SATA HDD LED and Fault LED Cable connections
Be sure that the proper failed-drive channel information is displayed by the Fault and HDD activity LEDs. An improper connection will tell the user to “hot swap” the wrong drive. This will remove the wrong disk (one that is functioning properly) from the controller, and can result in failure and loss of system data.
Step 8. Power up the System
Thoroughly check the installation, reinstall the computer cover, and reconnect the power cords. Turn on the power switch at the rear of the computer (if equipped) and then press the power button at the front of the host computer.
Step 9. Configure the volume set
The SATA RAID controller configures RAID functionality through the McBIOS RAID manager; please refer to Chapter 3, McBIOS RAID Manager, for configuration details. The RAID controller can also be configured through the McRAID storage manager software utility with the ArcHttp proxy server installed, through the on-board LAN port, or via the LCD module. For these options, please refer to Chapter 6, Web Browser-Based Configuration, or the LCD configuration menu.
Step 10. Install the controller driver
For a new system:
• Driver installation usually takes place as part of operating system installation. Please refer to Chapter 4, Driver Installation, for the detailed installation procedure.
In an existing system:
• Install the controller driver into the existing operating system. Please refer to Chapter 4, Driver Installation, for the detailed installation procedure.
Note:
For the newest driver releases, please download from http://www.areca.com.tw
Step 11. Install ArcHttp proxy Server
The SATA RAID controller rmware has embedded the web-browser RAID manager. ArcHttp proxy driver will enable it. The browser­based RAID manager provides all of the creation, management, and monitor SATA RAID controller status. Please refer to the Chapter 5 for the detail ArcHttp proxy server installation. For SNMP agent function, please refer to Appendix C.
Step 12. Determining the Boot Sequence
The SATA RAID controller is a bootable controller. If your system already contains a bootable device with an installed operating system, you can set up your system to boot a second operating system from the new controller. To add a second bootable controller, you may need to enter setup and change the device boot sequence so that the SATA RAID controller heads the list. If the system BIOS setup does not allow this change, your system may not be configurable to allow the SATA RAID controller to act as a second boot device.
Summary of the installation
The ow chart below describes the installation procedures for SATA RAID controller. These procedures include hardware installa­tion, the creation and conguration of a RAID volume through the McBIOS, OS installation and installation of SATA RAID controller software.
The software components congure and monitor the SATA RAID controller via ArcHttp Proxy Server.
Conguration Utility Operating System supported
McBIOS RAID Manager OS-Independent
McRAID Storage Manager (Via Archttp proxy server)
SAP Monitor (Single Admin portal to scan for multiple RAID units in the net­work, Via ArcHttp proxy server)
SNMP Manager Console Integration Windows 2000/XP/2003, Linux and
Windows 2000/XP/2003, Linux, Free­BSD, NetWare, UnixWare, Solaris and Mac
Windows 2000/XP/2003
FreeBSD
McRAID Storage Manager
Before launching the rmware-embedded web server, McRAID stor­age manager, you can to install the ArcHttp proxy server on your server system or through on-board Lan-port (if equipped). If you need additional information about installation and start-up of this function, see the McRAID Storage Manager section in Chapter 6.
SNMP Manager Console Integration
• Out-of-Band - Using Ethernet Port (12/16/24-port Controller)
Before launching the firmware-embedded SNMP agent in the server, you first need to enable the firmware-embedded SNMP agent function on your SATA RAID controller. If you need additional information about installation and start-up of this function, see section 6.8.4, SNMP Configuration (12/16/24-port).
• In-Band - Using PCI-X/PCIe Bus (4/8/12/16/24-port Controller)
Before launching the SNMP agent in the server, you first need to enable the firmware-embedded SNMP community configuration and install the Areca SNMP extension agent on your server system. If you need additional information about installation and start-up of this function, see the SNMP Operation & Installation section in Appendix C.
Single Admin Portal (SAP) Monitor
This utility can scan for multiple RAID units on the network and monitor the controller set status. It also includes a disk stress test utility to identify marginal spec disks before the RAID unit is put into a production environment.
For additional information, see the utility manual on the packaged CD-ROM or download it from the web site http://www.areca.com.tw
BIOS CONFIGURATION
3. McBIOS RAID Manager
The system mainboard BIOS automatically configures the following SATA RAID controller parameters at power-up:
• I/O Port Address
• Interrupt channel (IRQ)
• Adapter ROM Base Address
Use McBIOS to further configure the SATA RAID controller to suit your server hardware and operating system.
3.1 Starting the McBIOS RAID Manager
This section explains how to use the McBIOS Setup Utility to configure your RAID system. The BIOS Setup Utility is designed to be user-friendly. It is a menu-driven program, residing in the firmware, which allows you to scroll through various menus and sub-menus and select among the predetermined configuration options.
When starting a system with a SATA RAID controller installed, it will display the following message on the monitor during the start-up sequence (after the system BIOS startup screen but before the operating system boots):
ARC-1xxx RAID Ctrl - DRAM: 128(MB) / #Channels: 8
BIOS: V1.00 / Date: 2004-5-13 - F/W: V1.31 / Date: 2004-5-31
I/O-Port=F3000000h, IRQ=11, BIOS ROM mapped at D000:0h No BIOS disk Found, RAID Controller BIOS not installed! Press <Tab/F6> to enter SETUP menu. 9 second(s) left <ESC to Skip>..
The McBIOS conguration manager message remains on your screen for about nine seconds, giving you time to start the cong­ure menu by pressing Tab or F6. If you do not wish to enter con­guration menu, press <ESC> to skip conguration immediately. When activated, the McBIOS window appears showing a selection dialog box listing the SATA RAID controllers that are installed in the system. The legend at the bottom of the screen shows you what keys are enabled for the windows.
Areca Technology Corporation RAID Controller Setup <V1.0, 2004/05/20>
Select An Adapter To Configure
( 3/14/ 0)I/O=DD200000h, IRQ = 9
ArrowKey Or AZ:Move Cursor, Enter: Select, ** Select & Press F10 to Reboot**
Use the Up and Down arrow keys to select the adapter you want to configure. While the desired adapter is highlighted, press the <Enter> key to enter the Main Menu of the McBIOS Configuration Utility.
Controller I/O Port:F3000000h, F2: Select Controller, F10: Reboot System
Areca Technology Corporation RAID Controller
Main Menu
Quick Volume/Raid Setup
Raid Set Function Volume Set Function Physical Drives Raid System Function
Ethernet Conguration
View System Events Clear Event Buffer Hardware Monitor System information
Verify Password
Note:
The manufacturer default password is set to 0000; this password can be modified by selecting Change Password in the Raid System Function section.
ArrowKey Or AZ:Move Cursor, Enter: Select, ESC: Escape, L:Line Draw, X: Redraw
3.2 McBIOS Configuration Manager
The McBIOS conguration utility is rmware-based and is used to congure raid sets and volume sets. Because the utility resides in the SATA RAID controller rmware, operation is independent of any operating systems on your computer. This utility can be used to:
• Create RAID sets,
• Expand RAID sets,
• Add physical drives,
• Dene volume sets,
• Modify volume sets,
• Modify RAID level/stripe size,
• Dene pass-through disk drives,
• Modify system functions, and
• Designate drives as hot spares.
3.3 Conguring Raid Sets and Volume Sets
You can congure RAID sets and volume sets with McBIOS RAID manager automatically using Quick Volume/Raid Setup or manually using Raid Set/Volume Set Function. Each conguration method re­quires a different level of user input. The general ow of operations for RAID set and volume set conguration is:
Step  Action
1     Designate hot spares/pass-through drives (optional).
2     Choose a configuration method.
3     Create RAID sets using the available physical drives.
4     Define volume sets using the space available in the RAID set.
5     Initialize the volume sets and use them (as logical drives) in the host OS.
3.4 Designating Drives as Hot Spares
Any unused disk drive that is not part of a RAID set can be designated as a hot spare. The “Quick Volume/Raid Setup” configuration will add the spare disk drive and automatically display the appropriate RAID levels from which the user can select. For the “Raid Set Function” configuration option, the user can use the “Create Hot Spare” option to define the hot spare disk drive.
When a hot spare disk drive is being created using the “Create Hot Spare” option (in the Raid Set Function), all unused physical devices connected to the current controller appear. Choose the target disk by selecting the appropriate check box: press the Enter key to select a disk drive, and press Yes in the Create Hot Spare dialog to designate it as a hot spare.
3.5 Using Quick Volume/Raid Setup Configuration
“Quick Volume/Raid Setup” configuration collects all available drives and includes them in a RAID set. The RAID set you create is associated with exactly one volume set. You will only be able to modify the default RAID level, the stripe size, and the capacity of the new volume set. Designating drives as hot spares is also possible in the RAID level selection option. The volume set default settings will be:
Parameter Setting
Volume Name Volume Set # 00
SCSI Channel/SCSI ID/SCSI LUN 0/0/0
Cache Mode Write Back
Tag Queuing Yes
The default setting values can be changed after configuration is complete. Follow the steps below to create arrays using the Quick Volume/Raid Setup method:
Step  Action
1     Choose Quick Volume/Raid Setup from the main menu. The available RAID levels with hot spare for the current volume set drives are displayed.
2     It is recommended that you use drives of the same capacity in a specific array. If you use drives with different capacities in an array, all drives in the RAID set will be treated as having the capacity of the smallest drive in the RAID set. The number of physical drives in a specific array determines which RAID levels can be implemented in the array: RAID 0 requires 1 or more physical drives; RAID 1 requires at least 2 physical drives; RAID 1+Spare requires at least 3 physical drives; RAID 1E requires at least 4 physical drives; RAID 3 requires at least 3 physical drives; RAID 5 requires at least 3 physical drives; RAID 3+Spare requires at least 4 physical drives; RAID 5+Spare requires at least 4 physical drives; RAID 6 requires at least 4 physical drives; RAID 6+Spare requires at least 5 physical drives. Highlight the desired RAID level for the volume set and press the Enter key to confirm.
3     After highlighting the desired RAID level and pressing the Enter key, the capacity for the current volume set is displayed. Use the UP and DOWN arrow keys to set the capacity of the volume set and press the Enter key to confirm. The available stripe sizes for the current volume set are then displayed.
4     Use the UP and DOWN arrow keys to select the current volume set stripe size and press the Enter key to confirm. This parameter specifies the size of the stripes written to each disk in a RAID 0, 1, 5 or 6 volume set. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB. A larger stripe size provides better read performance, especially when the computer performs mostly sequential reads. However, if the computer performs random read requests more often, choose a smaller stripe size.
5     When you are finished defining the volume set, press the Enter key to confirm the Quick Volume And Raid Set Setup function.
6     Press the Enter key to select Foreground (Fast Completion) or Background (Instant Available) initialization. With Background Initialization, the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes; the operating system can instantly access the newly created arrays without requiring a reboot or waiting for initialization to complete. With Fast Initialization, the initialization must be completed before the volume set is ready for system access.
7     Initialize the volume set you have just configured.
8     If you need to add an additional volume set, use the main menu Create Volume Set function.
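The minimum drive counts listed in step 2 can be captured in a lookup table for a quick sanity check before configuring an array. This is an illustrative sketch; the names are invented and it is not part of any Areca utility:

```python
# Minimum physical drive counts per RAID level, as listed in step 2 above.
MIN_DRIVES = {
    "RAID 0": 1, "RAID 1": 2, "RAID 1+Spare": 3, "RAID 1E": 4,
    "RAID 3": 3, "RAID 5": 3, "RAID 3+Spare": 4, "RAID 5+Spare": 4,
    "RAID 6": 4, "RAID 6+Spare": 5,
}

def available_levels(num_drives):
    """Return the RAID levels the given number of drives can implement."""
    return sorted(level for level, need in MIN_DRIVES.items()
                  if num_drives >= need)

# With 3 drives, RAID 0/1/1+Spare/3/5 are possible, but RAID 6 is not.
```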
3.6 Using RAID Set/Volume Set Function Method
In “Raid Set Function”, you can use the “Create Raid Set” function to generate a new RAID set. In “Volume Set Function”, you can use the “Create Volume Set” function to generate an associated volume set and its configuration parameters.
If the current controller has unused physical devices connected, you can choose the “Create Hot Spare” option in the “Raid Set Function” to define a global hot spare. Select this method to configure new RAID sets and volume sets. The “Raid Set/Volume Set Function” configuration option allows you to associate volume sets with partial and full RAID sets.
Step  Action
1     To set up a hot spare (optional), choose Raid Set Function from the main menu. Select Create Hot Spare and press the Enter key to define the hot spare.
2     Choose Raid Set Function from the main menu. Select Create Raid Set and press the Enter key.
3     The “Select a Drive For Raid Set” window is displayed, showing the SATA drives connected to the SATA RAID controller.
4     Press the UP and DOWN arrow keys to select specific physical drives. Press the Enter key to associate the selected physical drive with the current RAID set. It is recommended that you use drives of the same capacity in a specific array. If you use drives with different capacities in an array, all drives in the RAID set will be treated as having the capacity of the smallest drive in the RAID set. The number of physical drives in a specific array determines which RAID levels can be implemented in the array: RAID 0 requires 1 or more physical drives; RAID 1 requires at least 2 physical drives; RAID (1+0) requires at least 4 physical drives; RAID 3 requires at least 3 physical drives; RAID 5 requires at least 3 physical drives; RAID 6 requires at least 4 physical drives.
5     After adding the desired physical drives to the current RAID set, press Yes to confirm the “Create Raid Set” function.
6     An “Edit The Raid Set Name” dialog box appears. Enter 1 to 15 alphanumeric characters to define a unique identifier for this new RAID set. The default RAID set name will always appear as Raid Set. #. Press Enter to finish editing the name.
7     Press the Enter key when you are finished creating the current RAID set. To continue defining another RAID set, repeat step 3. To begin volume set configuration, go to step 8.
8     Choose Volume Set Function from the main menu. Select Create Volume Set and press the Enter key.
9     Choose a RAID set from the “Create Volume From Raid Set” window. Press the Enter key to confirm the selection.
10    Choose Foreground (Fast Completion) or Background (Instant Availability) initialization. During Background Initialization, the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes; the operating system can instantly access the newly created arrays without requiring a reboot or waiting for initialization to complete. With Fast Initialization, the initialization must be completed before the volume set is ready for system access; initialization completes more quickly, but volume access by the operating system is delayed.
11    If space remains in the RAID set, the next volume set can be configured. Repeat steps 8 to 10 to configure another volume set.
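The difference between the two initialization modes in step 10 is essentially whether the host waits for the initialization task to finish. A conceptual sketch (an invented class, not controller code):

```python
# Conceptual contrast of the two initialization modes: Background makes the
# volume usable immediately while initialization runs as a background task;
# Foreground blocks access until initialization completes.
import threading
import time

class Volume:
    def __init__(self, mode):
        self.ready = (mode == "background")     # instantly accessible
        self._t = threading.Thread(target=self._initialize)
        self._t.start()
        if mode == "foreground":                # block until init completes
            self._t.join()
            self.ready = True

    def _initialize(self):
        time.sleep(0.01)                        # stand-in for parity/zeroing work

v = Volume("background")
# The volume is accessible before initialization has finished.
```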
Note:
A user can use this method to examine an existing configuration. The “Modify Volume Set” function provides the same options as the “Create Volume Set” function. In the Volume Set Function, you can use “Modify Volume Set” to change all volume set parameters except capacity (size).
3.7 Main Menu
The main menu shows all functions that are available; execute an action by selecting the appropriate menu item.
Controller I/O Port:F3000000h, F2: Select Controller, F10: Reboot System
ArrowKey Or AZ:Move Cursor, Enter: Select, ESC: Escape, L:Line Draw, X: Redraw
Areca Technology Corporation RAID Controller
Main Menu
Quick Volume/Raid Setup
Raid Set Function Volume Set Function Physical Drives Raid System Function
Ethernet Conguration
View System Events Clear Event Buffer Hardware Monitor System information
Verify Password
Note:
The manufacturer default password is set to 0000; this password can be modified by selecting Change Password in the Raid System Function section.
Option                   Description
Quick Volume/Raid Setup  Create a default configuration based on the number of physical disks installed
Raid Set Function        Create a customized RAID set
Volume Set Function      Create a customized volume set
Physical Drives          View individual disk information
Raid System Function     Set up the RAID system configuration
Ethernet Configuration   Ethernet LAN setting (ARC-1x30/1x60/1x70 only)
View System Events       Record all system events in the buffer
Clear Event Buffer       Clear all information in the event buffer
Hardware Monitor         Show the hardware system environment status
System Information       View the controller system information
This password option allows the user to set or clear the RAID controller's password protection feature. Once the password has been set, the user can only monitor and configure the RAID controller by providing the correct password. The password is used to protect the internal RAID controller from unauthorized entry. The controller will prompt for the password only when entering the Main menu from the initial screen. The SATA RAID controller will automatically return to the initial screen when it does not receive any command within twenty seconds.
3.7.1 Quick Volume/RAID Setup
“Quick Volume/RAID Setup” is the fastest way to prepare a RAID set and volume set. It requires only a few keystrokes to complete. Although disk drives of different capacity may be used in the RAID set, it will use the capacity of the smallest disk drive as the capacity of all disk drives in the RAID set. The “Quick Volume/RAID Setup” option creates a RAID set with the following properties:
1. All of the physical drives are contained in one RAID set.
2. The RAID level, hot spare, capacity, and stripe size options are selected during the configuration process.
3. When a single volume set is created, it can consume all or a portion of the available disk capacity in this RAID set.
4. If you need to add an additional volume set, use the main menu “Create Volume Set” function.
The total number of physical drives in a specific RAID set determines the RAID levels that can be implemented within the RAID set.
Controller I/O Port:F3000000h, F2: Select Controller, F10: Reboot System
Main Menu
Quick Volume/Raid Setup
Raid Set Function Volume Set Function Physical Drives Raid System Function
Ethernet Conguration
View System Events Clear Event Buffer Hardware Monitor System information
ArrowKey Or AZ:Move Cursor, Enter: Select, ESC: Escape, L:Line Draw, X: Redraw
Areca Technology Corporation RAID Controller
Total 4 Drives
Raid 0 Raid 1 + 0 Raid 1 + 0 + Spare Raid 3 Raid 5 Raid 3 + Spare Raid 5 + Spare
Raid 6
Select “Quick Volume/RAID Setup” from the main menu; all possible RAID levels will be displayed on the screen.
If the volume capacity will exceed 2TB, the controller will show the “Greater Two TB Volume Support” sub-menu.
Controller I/O Port:F3000000h, F2: Select Controller, F10: Reboot System
Areca Technology Corporation RAID Controller
Main Menu
Quick Volume/Raid Setup
Raid Set Function Volume Set Function Physical Drives Raid System Function
Ethernet Conguration
View System Events Clear Event Buffer Hardware Monitor System information
ArrowKey Or AZ:Move Cursor, Enter: Select, ESC: Escape, L:Line Draw, X: Redraw
Total 4 Drives
Raid 0 Raid 1 + 0 Raid 1 + 0 + Spare Raid 3 Raid 5 Raid 3 + Spare Raid 5 + Spare
Raid 6
Greater Two TB Volume Support
No
Use 64bit LBA
For Windows
No: It keeps the volume size within the maximum 2TB limitation.
LBA 64: This option uses a 16-byte CDB instead of 10 bytes; the maximum volume capacity is up to 512TB. This option works on operating systems that support 16-byte CDBs, such as Windows 2003 with SP1 and Linux kernel 2.6.x or later.
For Windows: It changes the sector size from the default 512 bytes to 4K bytes; the maximum volume capacity is up to 16TB. This option works under the Windows platform only, and the volume CANNOT be converted to a Dynamic Disk, because the 4K sector size is not a standard format.
For more details please download the PDF file from ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip
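The 2TB and 16TB figures above follow directly from sector-count arithmetic: a 10-byte CDB carries a 32-bit LBA, capping a volume at 2^32 addressable sectors, and the sector size then sets the byte ceiling. (The 512TB figure for 64-bit LBA is the controller's stated limit, not derived here.) A quick check:

```python
# A 32-bit LBA allows at most 2^32 addressable sectors; multiplying by the
# sector size gives the capacity ceiling for each option above.
SECTORS_32BIT = 2 ** 32
TB = 1024 ** 4

limit_512b = SECTORS_32BIT * 512    # classic 512-byte sectors
limit_4k = SECTORS_32BIT * 4096     # the "For Windows" 4K-sector option

assert limit_512b == 2 * TB         # the familiar 2TB boundary
assert limit_4k == 16 * TB          # matches the 16TB figure above
```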
A single volume set is created and consumes all or a portion of the disk capacity available in this RAID set. Define the capacity of the volume set in the Available Capacity popup. The default value for the volume set, which is 100% of the available capacity, is displayed in the selected capacity. To enter a value less than the available capacity, type the new value and press the Enter key to accept it. If the volume set uses only part of the RAID set capacity, you can use the “Create Volume Set” option in the main menu to define additional volume sets.
Controller I/O Port:F3000000h, F2: Select Controller, F10: Reboot System
Main Menu
Quick Volume/Raid Setup
Raid Set Function Volume Set Function Physical Drives Raid System Function
Ethernet Conguration
View System Events Clear Event Buffer Hardware Monitor System information
ArrowKey Or AZ:Move Cursor, Enter: Select, ESC: Escape, L:Line Draw, X: Redraw
Areca Technology Corporation RAID Controller
Available Capacity : 160.1GB
Selected Capacity : 160.1GB
Total 4 Drives
Raid 0 Raid 1 + 0 Raid 1 + 0 + Spare Raid 3 Raid 5 Raid 3 + Spare Raid 5 + Spare
Raid 6
Stripe Size: This parameter sets the size of the stripe written to each disk in a RAID 0, 1, 5, or 6 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB. A larger stripe size produces better read performance, especially if your computer does mostly sequential reads. However, if you
Controller I/O Port:F3000000h, F2: Select Controller, F10: Reboot System
Main Menu
Quick Volume/Raid Setup
Raid Set Function Volume Set Function Physical Drives Raid System Function
Ethernet Conguration
View System Events Clear Event Buffer Hardware Monitor System information
ArrowKey Or AZ:Move Cursor, Enter: Select, ESC: Escape, L:Line Draw, X: Redraw
Areca Technology Corporation RAID Controller
Available Capacity : 160.1GB
Selected Capacity : 160.1GB
Total 4 Drives
Raid 0 Raid 1 + 0 Raid 1 + 0 + Spare Raid 3 Raid 5 Raid 3 + Spare Raid 5 + Spare
Raid 6
Select Strip Size
4K 8K 16K 32K
64K
128K
are sure that your computer performs random reads more often, select a smaller stripe size. Press Yes in the “Create Vol/Raid Set” dialog box, and the RAID set and volume set will start to initialize.
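The stripe-size trade-off can be made concrete by mapping a logical block address to the disk that holds it in a striped (RAID 0) layout. This sketch assumes 512-byte sectors and uses an invented helper name; it is not the controller's actual mapping:

```python
# Map a logical block address to (disk index, stripe row) in a RAID 0 set.
# With a large stripe, a sequential run of LBAs stays on one disk; with a
# small stripe, random requests spread across more disks.
SECTOR = 512  # bytes per sector (the traditional default, an assumption)

def locate(lba, stripe_kb, num_disks):
    """Return (disk index, stripe row) holding the given LBA."""
    sectors_per_stripe = stripe_kb * 1024 // SECTOR
    stripe_no = lba // sectors_per_stripe
    return stripe_no % num_disks, stripe_no // num_disks

# With a 64 KB stripe on 4 disks, LBAs 0-127 all land on disk 0; with a
# 4 KB stripe, the same LBAs are spread across all four disks.
```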
Controller I/O Port:F3000000h, F2: Select Controller, F10: Reboot System
ArrowKey Or AZ:Move Cursor, Enter: Select, ESC: Escape, L:Line Draw, X: Redraw
Areca Technology Corporation RAID Controller
Main Menu
Quick Volume/Raid Setup
Raid Set Function Volume Set Function Physical Drives Raid System Function
Ethernet Conguration
View System Events Clear Event Buffer Hardware Monitor System information
Total 4 Drives
Raid 0 Raid 1 + 0 Raid 1 + 0 + Spare Raid 3 Raid 5 Raid 3 + Spare
Raid 6
Raid 5 + Spare
Available Capacity : 160.1GB
Selected Capacity : 160.1GB
Create Vol/Raid Set
Select Strip Size
Yes
4K
No
8K 16K 32K
64K
128K
Select “Foreground (Faster Completion)” or “Background (Instant Available)” for initialization.
Controller I/O Port:F3000000h, F2: Select Controller, F10: Reboot System
ArrowKey Or AZ:Move Cursor, Enter: Select, ESC: Escape, L:Line Draw, X: Redraw
Areca Technology Corporation RAID Controller
Main Menu
Quick Volume/Raid Setup
Raid Set Function Volume Set Function Physical Drives Raid System Function
Ethernet Conguration
View System Events Clear Event Buffer Hardware Monitor System information
Total 4 Drives
Raid 0 Raid 1 + 0 Raid 1 + 0 + Spare Raid 3 Raid 5 Raid 3 + Spare Raid 5 + Spare
Raid 6
Available Capacity : 160.1GB
Selected Capacity : 160.1GB
Create Vol/Raid Set
Initialization Mode
Select Strip Size
Foreground (Faster Completion)
Yes
4K
No
Background (Instant Available)
8K 16K 32K
64K
128K
3.7.2 Raid Set Function
Manual Conguration gives complete control of the RAID set set­ting, but it will take longer to congure than “Quick Volume/Raid Setup” conguration. Select “Raid Set Function” to manually con­gure the raid set for the rst time or delete existing RAID sets and recongure the RAID set.
[Screen: Main Menu — Quick Volume/Raid Setup, Raid Set Function, Volume Set Function, Physical Drives, Raid System Function, Ethernet Configuration, View System Events, Clear Event Buffer, Hardware Monitor, System Information.]
3.7.2.1 Create Raid Set
To dene a RAID set, follow the procedure below:
1. Select “Raid Set Function” from the main menu.
2. Select “Create Raid Set” from the “Raid Set Function” dialog box.
3. A “Select SATA Drive For Raid Set” window is displayed, showing the SATA drives connected to the current controller. Press the UP and DOWN arrow keys to select specific physical drives. Press the Enter key to associate the selected physical drive with the current RAID set. Repeat this step to add as many disk drives as are available to a single RAID set.
When finished selecting SATA drives for the RAID set, press the Esc key. A “Create Raid Set” confirmation screen appears; select the Yes option to confirm it.
[Screen: “Raid Set Function” menu — Create Raid Set, Delete Raid Set, Expand Raid Set, Activate Raid Set, Create Hot Spare, Delete Hot Spare, Raid Set Information — with “Select IDE Drives For Raid Set”: Ch01, Ch04, Ch05, Ch08, each 80.0GB ST380013AS.]
4. An “Edit The Raid Set Name” dialog box appears. Enter 1 to 15 alphanumeric characters to define a unique identifier for the RAID set. The default RAID set name always appears as Raid Set #.
[Screen: “Edit The Raid Set Name” dialog — default name “Raid Set # 00”.]
3.7.2.2 Delete Raid Set
To completely erase and reconfigure a RAID set, you must first delete it and re-create it. To delete a RAID set, select the RAID set number you want to delete in the “Select Raid Set To Delete” screen. The “Delete Raid Set” dialog box appears; press the Yes key to delete it. Warning: data on the RAID set will be lost if this option is used.
[Screen: “Raid Set Function” → “Delete Raid Set” — “Select Raid Set To Delete”: Raid Set # 00 / Raid Set # 01; confirm “Are you Sure?” with Yes/No.]
3.7.2.3 Expand Raid Set
Instead of deleting a RAID set and re-creating it with additional disk drives, the “Expand Raid Set” function allows the user to add disk drives to a RAID set that has already been created. To expand a RAID set: select the “Expand Raid Set” option. If there is an available disk, the “Select SATA Drives For Raid Set Expansion” screen appears. Select the target RAID set, then mark the target disk(s) with the appropriate check box. Press the Yes key to start expansion of the RAID set. The new additional capacity can be utilized by one or more volume sets. Follow the instructions presented in the Volume Set Function to modify the volume sets; operating-system-specific utilities may be required to expand operating system partitions.

[Screen: “Raid Set Function” → “Expand Raid Set” — “Select Drives For Raid Set Expansion”: Ch05, Ch08, each 80.0GB ST380013AS; confirm “Are you Sure?” with Yes/No.]
Note:
1. Once the “Expand Raid Set” process has started, it cannot be stopped. The process must run to completion.
2. If a disk drive fails during RAID set expansion and a hot spare is available, an auto-rebuild operation will occur after the RAID set expansion completes.
• Migrating
[Screen: “Raid Set Information” — Raid Set Name: Raid Set # 00, Member Disks: 4, Raid State: Migrating, Total Capacity: 160.1GB, Free Capacity: 144.1GB, Min Member Disk Size: 40.0GB, Member Disk Channels: 1234.]
Migration occurs when a disk is added to a RAID set. While the disk is being added, Migrating status is displayed in the RAID status area of the “Raid Set Information” screen, and in the volume status area of the “Volume Set Information” screen for the associated volume sets.
3.7.2.4 Activate Incomplete Raid Set
The following screen shows the “Raid Set Information” after one of its disk drives was removed while the power was off.
[Screen: “Activate Raid Set” → “The Raid Set Information” — Raid Set Name: Raid Set # 00, Member Disks: 4, Total Capacity: 160.1GB, Free Capacity: 144.1GB, Min Member Disk Size: 40.0GB, Member Disk Channels: 1234.]
When one of the disk drives is removed while the power is off, the RAID set state changes to Incomplete. If the user wants to continue working after the SATA RAID controller is powered on, the “Activate Raid Set” option can be used to activate the RAID set. After this function is selected, the Raid State changes to Degraded Mode.
3.7.2.5 Create Hot Spare
When you choose the “Create Hot Spare” option in the Raid Set Function, all unused physical devices connected to the current controller are listed. Select the target disk with the appropriate check box, press the Enter key to select the disk drive, and press Yes in the “Create Hot Spare” dialog to designate it as a hot spare. The “Create Hot Spare” option defines a global hot spare.
[Screen: “Create Hot Spare” — “Select Drives For HotSpare, Max 3 HotSpare Supported”: Ch05, Ch08, each 80.0GB ST380013AS; confirm “Are you Sure?” with Yes/No.]
3.7.2.6 Delete Hot Spare
Select the target hot spare disk to delete by marking the appropriate check box. Press the Enter key to select the disk drive, and press Yes in the “Delete Hot Spare” window to delete the hot spare.
[Screen: “Delete Hot Spare” — “Select The HotSpare Device To Be Deleted”: Ch05, Ch08, each 80.0GB ST380013AS; confirm “Are you Sure?” with Yes/No.]
3.7.2.7 Raid Set Information
To display RAID set information, move the cursor bar to the desired RAID set number, then press the Enter key. The “Raid Set Information” screen will be displayed. Information in this screen can only be viewed, not changed.
[Screen: “Raid Set Information” — Raid Set Name: Raid Set # 00, Member Disks: 4, Raid State: Normal, Total Capacity: 320.1GB, Free Capacity: 320.1GB, Min Member Disk Size: 80.0GB, Member Disk Channels: 1458.]

3.7.3 Volume Set Function
[Screen: “Volume Set Function” menu — Create Volume Set, Delete Volume Set, Modify Volume Set, Check Volume Set, Stop Volume Check, Display Volume Info.]
A volume set is seen by the host system as a single logical device; it is organized in a RAID level within the controller utilizing one or more physical disks. RAID level refers to the level of data performance and protection of a volume set. A volume set can consume all or a portion of the available disk capacity of a RAID set, and multiple volume sets can exist on a RAID set. If multiple volume sets reside on a specified RAID set, all volume sets will reside on all physical disks in the RAID set. Thus each volume set on the RAID set has its data spread evenly across all the disks in the RAID set, rather than one volume set using some of the available disks and another volume set using other disks.
3.7.3.1 Create Volume Set
1. Volume sets of different RAID levels may coexist on the same RAID set.
2. Up to 16 volume sets can be created per RAID set by the SATA RAID controller.
3. The maximum addressable size of a single volume set is not limited to 2 TB, as it is with other cards that support only 32-bit mode.

To create a volume set, follow these steps:
1. Select the “Volume Set Function” from the Main menu.
2. Choose “Create Volume Set” from the “Volume Set Functions” dialog box.
[Screen: “Create Volume Set” → “Create Volume From Raid Set”: Raid Set # 00 / Raid Set # 01.]
3. The “Create Volume From Raid Set” dialog box appears, showing the existing RAID sets. Select the RAID set number and press the Enter key. The “Volume Creation” dialog is displayed.
4. A window with a summary of the current volume set’s settings appears. The “Volume Creation” option allows the user to set the volume name, capacity, RAID level, stripe size, SCSI channel/ID/LUN, cache mode, and tag queuing. The user can modify the default values in this screen; the modification procedure is described in section 3.7.3.3.
[Screen: “Volume Creation” — Volume Name: Volume Set # 00, Raid Level: 5, Capacity: 160.1GB, Stripe Size: 64K, SCSI Channel: 0, SCSI ID: 0, SCSI LUN: 0, Cache Mode: Write Back, Tag Queuing: Enabled.]
5. After completing the modification of the volume set, press the Esc key to confirm. An “Initialization Mode” screen is presented.
• Select Foreground (Faster Completion) for faster initialization of the selected volume set.
• Select Background (Instant Available) for normal initialization of the selected volume set.
[Screen: “Volume Creation” with the “Initialization Mode” dialog — Foreground (Faster Completion) / Background (Instant Available).]
6. Repeat steps 3 to 5 to create additional volume sets.
7. The initialization percentage of the volume set will be displayed on the bottom line.
• Volume Name
[Screen: “Volume Creation” with the “Edit The Volume Name” dialog — default “Volume Set # 00”.]
The default volume name always appears as Volume Set #. You can rename the volume set, provided the name does not exceed the 15-character limit.
• Raid Level
[Screen: “Volume Creation” with the “Select Raid Level” dialog — 0, 0+1, 3, 5, 6.]
Set the RAID level for the volume set. Highlight “Raid Level” and press the Enter key. The available RAID levels for the current volume set are displayed. Select a RAID level and press the Enter key to confirm.
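The RAID level chosen here determines how much of the RAID set's raw space remains usable. As an illustration only — the formulas below are the standard ones for each level, not values read from the controller firmware — the trade-off can be sketched as:

```python
# Hypothetical sketch: usable capacity for the RAID levels this
# controller offers, assuming n equal-size member disks.
def usable_capacity(level, n_disks, disk_gb):
    if level == "0":                       # striping, no redundancy
        return n_disks * disk_gb
    if level == "0+1":                     # mirrored stripes: half the space
        return n_disks * disk_gb / 2
    if level in ("3", "5"):                # one disk's worth of parity
        return (n_disks - 1) * disk_gb
    if level == "6":                       # two disks' worth of parity
        return (n_disks - 2) * disk_gb
    raise ValueError("unsupported RAID level")

# Example: four 40GB members, as in the Quick Setup screens
print(usable_capacity("5", 4, 40.0))   # RAID 5 leaves 120.0 GB usable
print(usable_capacity("6", 4, 40.0))   # RAID 6 leaves 80.0 GB usable
```

The spare variants (e.g. Raid 5 + Spare) additionally set one selected drive aside, further reducing the usable figure.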
• Capacity
[Screen: “Volume Creation” with the capacity dialog — Available Capacity: 160.1GB, Selected Capacity: 160.1GB.]
The maximum available volume size is the default value for the first setting. Enter the appropriate volume size for your application. The capacity value can be increased or decreased with the UP and DOWN arrow keys. The capacity of each volume set must be less than or equal to the total capacity of the RAID set on which it resides.
If the volume capacity will exceed 2TB, the controller displays the “Greater Two TB Volume Support” sub-menu.
[Screen: “Greater Two TB Volume Support” dialog — No / Use 64bit LBA / For Windows.]
No
Keeps the volume size within the 2TB limit.

LBA 64
This option uses 16-byte CDBs instead of 10-byte CDBs, raising the maximum volume capacity to 512TB. It works on operating systems that support 16-byte CDBs, such as:
Windows 2003 with SP1
Linux kernel 2.6.x or later

For Windows
This option changes the sector size from the default 512 bytes to 4K bytes, raising the maximum volume capacity to 16TB. It works under the Windows platform only, and the volume CANNOT be converted to a Dynamic Disk, because the 4K sector size is not a standard format.
For more details, please download the PDF file from ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip
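The capacity ceilings quoted above follow directly from LBA width times sector size: a 10-byte CDB carries a 32-bit LBA, so with 512-byte sectors the limit is 2^32 × 512 bytes = 2TB; the same 32-bit LBA with 4K sectors gives 16TB; a 16-byte CDB carries a 64-bit LBA, which removes the addressing limit (the 512TB figure above would then be a firmware limit rather than an addressing one — an inference, not something the manual states). The arithmetic as a sketch:

```python
# Maximum addressable capacity given the LBA field width and the
# sector size. 2^40 bytes = 1 TB (binary).
def max_capacity_tb(lba_bits, sector_bytes):
    return (2 ** lba_bits) * sector_bytes / 2 ** 40

print(max_capacity_tb(32, 512))    # 2.0  -> the classic 2TB limit
print(max_capacity_tb(32, 4096))   # 16.0 -> the "For Windows" 4K-sector option
```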
[Screen: “Volume Creation” — Stripe Size: 64K selected.]
• Stripe Size
This parameter sets the size of the segment written to each disk in a RAID 0, 1, 5, or 6 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB.
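To see why stripe size matters for access patterns, consider how a byte offset maps onto the member disks. The sketch below is purely illustrative — a plain round-robin RAID 0 layout, not the controller's actual on-disk format:

```python
# Illustrative RAID 0 mapping: the stripe (segment) size decides
# which member disk a given byte offset lands on.
def raid0_location(offset_bytes, stripe_kb, n_disks):
    stripe = stripe_kb * 1024
    segment = offset_bytes // stripe                 # which segment overall
    disk = segment % n_disks                         # round-robin member
    offset_in_disk = (segment // n_disks) * stripe + offset_bytes % stripe
    return disk, offset_in_disk

# With the default 64K stripe on 4 disks, byte 200_000 falls in
# overall segment 3, i.e. on the fourth disk.
print(raid0_location(200_000, 64, 4))   # (3, 3392)
```

A small stripe spreads even short random reads across more disks; a large stripe keeps a long sequential transfer on fewer disk seeks — which is the trade-off the text above describes.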
• SCSI Channel
[Screen: “Volume Creation” — SCSI Channel: 0 selected.]
The SATA RAID controller function simulates a SCSI RAID controller; the host bus represents the SCSI channel. Choose “SCSI Channel”; a “Select SCSI Channel” dialog box appears. Select the channel number and press the Enter key to confirm it.
• SCSI ID
[Screen: “Volume Creation” — SCSI ID: 0 selected.]
Each device attached to the SATA card, as well as the card itself, must be assigned a unique SCSI ID number. A SCSI channel can connect up to 15 devices. It is necessary to assign a SCSI ID to each device from the list of available SCSI IDs.
• SCSI LUN
[Screen: “Volume Creation” — SCSI LUN: 0 selected.]
Each SCSI ID can support up to 8 LUNs. Most SCSI controllers treat each LUN as if it were a SCSI disk.
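Putting the channel, ID, and LUN limits together gives the number of distinct logical units the host can address per emulated channel. A small sketch of that arithmetic, using the figures stated above (15 devices per channel, 8 LUNs per ID):

```python
# Addressing budget of the emulated SCSI bus, per the limits in the
# text above (illustrative arithmetic only).
IDS_PER_CHANNEL = 15   # devices per SCSI channel
LUNS_PER_ID = 8        # LUNs per SCSI ID

def addressable_luns(n_channels):
    return n_channels * IDS_PER_CHANNEL * LUNS_PER_ID

print(addressable_luns(1))   # 120 logical units on a single channel
```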
• Cache Mode
[Screen: “Volume Creation” — Cache Mode: Write Back selected.]
The user can set the cache mode to either “Write-Through Cache” or “Write-Back Cache”.
• Tag Queuing
[Screen: “Volume Creation” — Tag Queuing: Enabled selected.]
This option, when enabled, can enhance overall system performance under multi-tasking operating systems. The Command Tag (Drive Channel) function controls the SCSI command tag queuing support for each drive channel. This function should normally remain enabled. Disable it only when using older drives that do not support command tag queuing.
3.7.3.2 Delete Volume Set
To delete a volume set from a RAID set, move the cursor bar to the “Volume Set Functions” menu and select the “Delete Volume Set” item, then press the Enter key. The “Volume Set Functions” menu will show all Raid Set # items. Move the cursor bar to a RAID set number, then press the Enter key to show all volume sets within that RAID set. Move the cursor to the volume set number to be deleted and press Enter to delete it.
[Screen: “Delete Volume Set” — “Select Volume To Delete”: Volume Set # 00 (Raid Set # 00 / Raid Set # 01); confirm with Yes/No.]
3.7.3.3 Modify Volume Set
[Screen: “Volume Set Function” → “Modify Volume Set”.]
Use this option to modify the volume set configuration. To modify volume set values, move the cursor bar to the “Volume Set Functions” menu and select the “Modify Volume Set” item, then press the Enter key. The menu will show all RAID set items. Move the cursor bar to a RAID set number, then press the Enter key to show all volume set items. Select the volume set to be changed from the list and press the Enter key to modify it.
[Screen: “Volume Modification” — Volume Name: Volume Set # 00, Raid Level: 6, Capacity: 160.1GB, Stripe Size: 64K, SCSI Channel: 0, SCSI ID: 0, SCSI LUN: 0, Cache Mode: Write Back, Tag Queuing: Enabled.]
As shown, volume information can be modified in this screen. Choose this option to display the properties of the selected volume set; all values can be modified except the capacity.
• Volume Growth
Use this option to expand a RAID set when a disk is added to the system. The additional capacity can be used to enlarge the volume set size or to create another volume set. The “Modify Volume Set” function supports volume set expansion. To expand the volume set capacity, move the cursor bar to the volume set “Capacity” item and enter the new capacity. Select “Confirm The Operation” and the “Submit” button to complete the action. The volume set then starts to expand.
[Screen: “Display Volume Info” → “The Volume Set Information” — Volume Set Name: Volume Set # 00, Raid Set Name: Raid Set # 00, Volume Capacity: 160.1GB, Volume State: Migration, SCSI CH/Id/Lun: 0/0/0, RAID Level: 6, Stripe Size: 64K, Member Disk: 4, Cache Attribute: Write-Back, Tag Queuing: Enabled.]
Notes on expanding an existing volume:
Only the last volume set can have its capacity expanded.
When expanding volume capacity, you cannot modify the stripe size or the RAID level at the same time.
You can expand volume capacity, but you cannot reduce it.
For expansion beyond 2TB:
If your operating system is installed on the volume, do not expand the volume capacity beyond 2TB; current operating systems cannot boot from a device larger than 2TB.
Expansion beyond 2TB uses LBA64 mode. Make sure your operating system supports LBA64 before expanding.
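The 2TB boundary comes from 32-bit LBA addressing with traditional 512-byte sectors; a quick arithmetic check (illustrative only) shows where the limit falls:

```python
# 32-bit LBA can address 2**32 sectors; at 512 bytes per sector
# that is the classic 2 TiB limit referenced above. Larger
# volumes require 48/64-bit LBA support in the OS.
SECTOR_SIZE = 512          # bytes per sector (traditional SATA)
MAX_LBA32_SECTORS = 2**32  # sectors addressable with 32-bit LBA

limit_bytes = MAX_LBA32_SECTORS * SECTOR_SIZE
limit_tib = limit_bytes / 2**40
print(limit_bytes, limit_tib)  # 2199023255552 2.0
```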
• Volume Set Migration
Migration occurs when a volume set changes from one RAID level to another, when its stripe size changes, or when a disk is added to its RAID set. While any of these operations is in progress, the migration status is displayed in the volume status area of the "Volume Set Information" screen.
3.7.3.4 Check Volume Set
Use this option to verify the correctness of the redundant data in a volume set. For example, in a system with a dedicated parity disk drive, a volume set check entails computing the parity of the data disk drives and comparing the results to the contents of the dedicated parity disk drive. To check a volume set, move the cursor bar to the "Volume Set Function" menu and select the "Check Volume Set" item, then press the Enter key. The menu will show all Raid Set number items. Move the cursor bar to a Raid Set number item and press the Enter key to show all volume set items. Select the volume set to be checked from the list and press Enter. After completing the selection, the confirmation screen appears; press Yes to start the check.
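As a simplified illustration of the dedicated-parity check described above (the controller's actual on-disk layout and algorithms are firmware-internal; the function names and data here are hypothetical):

```python
from functools import reduce

def xor_parity(blocks):
    """XOR corresponding bytes of the data blocks (RAID 3/4-style parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def check_stripe(data_blocks, parity_block):
    """Return True if the stored parity matches the recomputed parity."""
    return xor_parity(data_blocks) == parity_block

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
good_parity = bytes([0x01 ^ 0x04 ^ 0x10, 0x02 ^ 0x08 ^ 0x20])
print(check_stripe(data, good_parity))  # True: stripe is consistent
print(check_stripe(data, b"\x00\x00"))  # False: parity mismatch
```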
[BIOS screen: Main Menu → Volume Set Function → Check Volume Set → select Raid Set / Volume Set # 00 → "Check The Volume?" Yes/No]
3.7.3.5 Stop Volume Set Check
Use this option to stop all of the “Check Volume Set” operations.
3.7.3.6 Display Volume Set Info.
[BIOS screen: Main Menu → Volume Set Function → Display Volume Info.]
To display volume set information, move the cursor bar to the desired volume set number and then press the Enter key. The “Volume Set Information” will be shown. You can only view the information of this volume set in this screen, not modify it.
[BIOS screen: The Volume Set Information panel, showing Volume Set Name: Volume Set # 00, Raid Set Name: Raid Set # 00, Volume Capacity: 160.1GB, Volume State: Normal, SCSI CH/Id/Lun: 0/0/0, RAID Level: 6, Stripe Size: 64K, Member Disk: 4, Cache Attribute: Write-Back, Tag Queuing: Enabled]
3.7.4 Physical Drives
[BIOS screen: Main Menu → Physical Drives, with options: View Drive Information, Create Pass-Through Disk, Modify Pass-Through Disk, Delete Pass-Through Disk, Identify Selected Drive]
Choose this option from the Main Menu to select a physical disk and perform the operations listed above.
3.7.4.1 View Drive Information
[BIOS screen: Physical Drive Information for Ch01, showing Model Name: ST380013AS, Serial Number: 5JV944ZF, Firmware Rev.: 3.18, Disk Capacity: 80.0 GB, PIO Mode: Mode 4, Current UDMA: SATA150(6), Supported UDMA: SATA150(6), Device State: RaidSet Member, Timeout Count: 0, Media Errors: 0, and SMART attributes (Read Error Rate, Spinup Time, Reallocation Count, Seek Error Rate, Spinup Retries, Calibration Retries)]
When you choose this option, the physical disks connected to the SATA RAID controller are listed. Move the cursor to the de­sired drive and press Enter to view drive information.
3.7.4.2 Create Pass-Through Disk
[BIOS screen: Physical Drives → Create Pass-Through Disk → select drive, then set Pass-Through Disk Attributes: SCSI Channel: 0, SCSI ID: 0, SCSI LUN: 0, Cache Mode: Write Back, Tag Queuing: Enabled; confirm with Yes/No]
A pass-through disk is not controlled by the SATA RAID controller firmware and thus cannot be part of a volume set. The disk is available directly to the operating system as an individual disk. It is typically used on a system where the operating system is on a disk not controlled by the SATA RAID controller firmware. The SCSI Channel, SCSI ID, SCSI LUN, Cache Mode, and Tag Queuing must be specified to create a pass-through disk.
3.7.4.3 Modify a Pass-Through Disk
Use this option to modify pass-through disk attributes. To select and modify a pass-through disk from the pool of pass-through disks, move the cursor bar to the "Physical Drives" menu, select the "Modify Pass-Through Disk" option, and press the Enter key. The menu will show all pass-through disk options. Move the cursor bar to the desired item and press the Enter key to show all pass-through disk attributes. Select the parameter to be changed from the list and then press the Enter key to modify it.
3.7.4.4 Delete Pass-Through Disk
[BIOS screen: Physical Drives → Delete Pass-Through Disk → select drive (Ch01| 80.0GB| Pass Through |ST380013AS) → "Delete Pass-Through" Yes/No]
To delete a pass-through disk from the pass-through disk pool, move the cursor bar to the "Physical Drives" menu and select the "Delete Pass-Through Disk" item, then press the Enter key. The delete confirmation screen will appear; select Yes to delete it.
3.7.4.5 Identify Selected Drive
[BIOS screen: Physical Drives → Identify Selected Drive → select drive from the channel list (Ch01/Ch04/Ch05/Ch08)]
To prevent removal of the wrong drive, select "Identify Selected Drive"; the HDD LED indicator of the selected disk will light, allowing the disk to be physically located.
3.7.5 Raid System Function
To set the RAID system functions, move the cursor bar to the Main Menu, select the "Raid System Function" item, and press the Enter key. The "Raid System Function" menu will show multiple items. Move the cursor bar to an item, then press the Enter key to select the desired function.
[BIOS screen: Main Menu → Raid System Function]
3.7.5.1 Mute The Alert Beeper
[BIOS screen: Raid System Function menu: Mute The Alert Beeper, Alert Beeper Setting, Change Password, JBOD/RAID Function, Background Task Priority, Maximum SATA Mode, HDD Read Ahead Cache, Stagger Power On, Empty HDD Slot LED, HDD SMART Status Polling, Controller Fan Detection, Disk Write Cache Mode, Capacity Truncation. Selecting "Mute The Alert Beeper" shows a Yes/No confirmation.]
The "Mute The Alert Beeper" item controls the SATA RAID controller's beeper. Select Yes and press the Enter key in the dialog box to turn the beeper off temporarily. The beeper will still activate on the next event.
3.7.5.2 Alert Beeper Setting
[BIOS screen: Raid System Function → Alert Beeper Setting: Disabled / Enabled]
The "Alert Beeper Setting" item is used to disable or enable the SATA RAID controller's alarm tone generator. Select "Disabled" and press the Enter key in the dialog box to turn the beeper off.
3.7.5.3 Change Password
[BIOS screen: Raid System Function → Change Password → "Enter New Password"]
The manufacturer's default password is 0000. The password option allows the user to set or clear the password protection feature. Once a password has been set, the user can monitor and configure the controller only by providing the correct password. This feature protects the internal RAID system from unauthorized access. The controller checks the password only when entering the Main Menu from the initial screen. The system automatically returns to the initial screen if it does not receive any command within 20 seconds. To set or change the password, move the cursor to the "Raid System Function" screen and select the "Change Password" item. The "Enter New Password" screen will appear. To disable the password, simply press Enter in both the "Enter New Password" and "Re-Enter New Password" columns. The existing password will be cleared and no password checking will occur when entering the Main Menu.
[BIOS screen: Raid System Function → JBOD/RAID Function: RAID / JBOD]
3.7.5.4 JBOD/RAID Function
JBOD is an acronym for "Just a Bunch Of Disks". It represents a volume set created by the concatenation of partitions on the disks. The operating system can see all disks when the JBOD option is selected. Any RAID set(s) on the disks must be deleted before switching from a RAID to a JBOD configuration.
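Concatenation can be pictured as a simple offset mapping (an illustrative toy model; the controller's internal layout is firmware-specific):

```python
def locate(offset, disk_sizes):
    """Map a logical offset in a concatenated (JBOD-style) volume
    to a (disk_index, offset_within_disk) pair."""
    for i, size in enumerate(disk_sizes):
        if offset < size:
            return (i, offset)
        offset -= size  # skip past this disk
    raise ValueError("offset beyond end of volume")

disks = [100, 100, 50]     # three disks, 250 units total
print(locate(0, disks))    # (0, 0)
print(locate(150, disks))  # (1, 50): second disk, halfway in
print(locate(220, disks))  # (2, 20): third disk
```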
3.7.5.5 Background Task Priority
[BIOS screen: Raid System Function → Background Task Priority: UltraLow(5%), Low(20%), Medium(50%), High(80%)]
The "Background Task Priority" is a relative indication of how much time the controller devotes to a rebuild operation. The SATA RAID controller allows the user to choose a rebuild priority (UltraLow, Low, Medium, High) to balance volume set access and rebuild tasks appropriately.
3.7.5.6 Maximum SATA Mode
[BIOS screen: Raid System Function → Maximum SATA Mode: SATA150, SATA150+NCQ, SATA300, SATA300+NCQ]
The SATA RAID controller can support up to SATA II, which runs at up to 300MB/s, twice as fast as SATA150. NCQ (Native Command Queuing) is a command protocol in Serial ATA that can only be implemented on native Serial ATA hard drives. It allows multiple commands to be outstanding within a drive at the same time. Drives that support NCQ have an internal queue where outstanding commands can be dynamically rescheduled or re-ordered, along with the necessary tracking mechanisms for outstanding and completed portions of the workload. The SATA RAID controller allows the user to choose the SATA mode: SATA150, SATA150+NCQ, SATA300, or SATA300+NCQ.
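The reordering idea behind NCQ can be sketched as a nearest-first scheduler (a toy model only; real drives optimize using rotational position and other firmware-internal information):

```python
def ncq_order(head_lba, queued_lbas):
    """Greedy nearest-seek-first ordering of outstanding commands,
    a rough stand-in for how an NCQ drive may reorder its queue."""
    remaining = list(queued_lbas)
    order = []
    pos = head_lba
    while remaining:
        # Service the command whose LBA is closest to the head.
        nxt = min(remaining, key=lambda lba: abs(lba - pos))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

# Head at LBA 500 with four outstanding commands:
print(ncq_order(500, [900, 520, 100, 480]))  # [520, 480, 100, 900]
```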
3.7.5.7 HDD Read Ahead Cache
Allow Read Ahead (default: Enabled). When enabled, the drive's read-ahead cache algorithm is used, providing maximum performance under most circumstances.
[BIOS screen: Raid System Function → HDD Read Ahead Cache: Enabled, Disable Maxtor, Disabled]
3.7.5.8 Stagger Power On
In a PC system with only one or two drives, the power supply can deliver enough power to spin up both drives simultaneously. But in systems with more than two drives, the startup current from spinning up all the drives at once can overload the power supply, damaging the power supply, disk drives, and other system components. This damage can be avoided by allowing the host to stagger the spin-up of the drives. Newer SATA drives support staggered spin-up to boost reliability. Staggered spin-up is a very useful feature for managing multiple disk drives in a storage subsystem. It gives the host the ability to spin up the disk drives sequentially or in groups, allowing the drives to come ready at the optimum time without straining the system power supply. Staggering drive spin-up in a multiple-drive environment also avoids the extra cost of a power supply designed to meet short-term startup power demand as well as steady-state conditions.
Areca supported a fixed-value staggered power-up function in its earlier firmware. From firmware version 1.39 onward, the SATA RAID controller includes an option to select the interval at which the disk drives are sequentially powered up. The value can be selected from 0.4 to 6.0 seconds per step, each step powering up one drive.
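As a rough illustration of the resulting timing (drive count and step value are hypothetical examples; the controller sequences this internally):

```python
def spinup_schedule(num_drives, step_seconds):
    """Return the power-on time offset, in seconds, of each drive
    when spin-up is staggered by step_seconds per drive."""
    return [i * step_seconds for i in range(num_drives)]

# Example: 8 drives staggered at 1.5 s per step.
offsets = spinup_schedule(8, 1.5)
print(offsets)      # [0.0, 1.5, 3.0, 4.5, 6.0, 7.5, 9.0, 10.5]
print(offsets[-1])  # the last drive starts spinning 10.5 s after power-on
```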
[BIOS screen: Raid System Function → Stagger Power On: 0.4, 0.7, 1.0, 1.5, ... 6.0]

3.7.5.9 Empty HDD Slot LED
From firmware version 1.39 (date: 04/01/2006) onward, the firmware includes the "Empty HDD Slot LED" option to set the fault LED "ON" or "OFF" for empty slots. If each slot has a power LED to identify an installed HDD, the user can set this option to "OFF". If this option is set to "ON", the fault LED flashes red when no HDD is installed in the slot.
[BIOS screen: Raid System Function → Empty HDD Slot LED: ON / OFF]
3.7.5.10 HDD SMART Status Polling
An external RAID enclosure has a hardware monitor in its dedicated backplane that can report HDD temperature status to the controller. However, PCI cards do not use backplanes when the drives are internal to the main server chassis, and this type of installation cannot report the HDD temperature to the controller. For this reason, "HDD SMART Status Polling" was added in firmware version 1.36 (date: 2005-05-19) and later to enable scanning of the HDD temperature. The "HDD SMART Status Polling" function must be enabled before SMART information is accessible; it is disabled by default. The following screen shows how to change the BIOS setting to enable polling.
[BIOS screen: Raid System Function → HDD SMART Status Polling: Disabled / Enabled]
3.7.5.11 Controller Fan Detection
Included in the product box is a field-replaceable passive heatsink, to be used only if there is enough airflow to adequately cool it. The "Controller Fan Detection" function is available in firmware version 1.36 (date: 2005-05-19) and later to prevent the buzzer warning. When using the passive heatsink, disable the "Controller Fan Detection" function through this BIOS setting. The following screen shows how to change the BIOS setting to disable the beeper function. (This function is not available in the web browser setting.)
[BIOS screen: Raid System Function → Controller Fan Detection: Disabled / Enabled]
3.7.5.12 Disk Write Cache Mode
The user can set "Disk Write Cache Mode" to Auto, Enabled, or Disabled. Enabled increases speed; Disabled increases reliability.
[BIOS screen: Raid System Function → Disk Write Cache Mode: Auto / Enabled / Disabled]
3.7.5.13 Capacity Truncation
SATA RAID controllers use drive truncation so that drives from different vendors are more likely to be usable as spares for one another. Drive truncation slightly decreases the usable capacity of a drive that is used in redundant units. The controller provides three truncation modes in the system configuration: Multiples Of 10G, Multiples Of 1G, and No Truncation.
Multiples Of 10G: If you have 120 GB drives from different vendors, chances are that their capacities vary slightly. For example, one drive might be 123.5 GB and the other 120 GB. The Multiples Of 10G truncation mode uses the same capacity for both of these drives so that one can replace the other.
Multiples Of 1G: If you have 123 GB drives from different vendors, chances are that their capacities vary slightly. For example, one drive might be 123.5 GB and the other 123.4 GB. The Multiples Of 1G truncation mode uses the same capacity for both of these drives so that one can replace the other.
No Truncation: The capacity is not truncated.
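The truncation arithmetic can be sketched as follows (a simplified illustration; the controller's exact rounding is firmware-internal):

```python
def truncate_capacity(capacity_gb, mode):
    """Round a drive's capacity down according to the truncation mode.

    mode: "10G" rounds down to a multiple of 10 GB, "1G" to a
    multiple of 1 GB, and "none" leaves the capacity unchanged.
    """
    if mode == "10G":
        return (int(capacity_gb) // 10) * 10
    if mode == "1G":
        return int(capacity_gb)
    return capacity_gb  # "none": no truncation

# Two "120 GB-class" drives from different vendors:
print(truncate_capacity(123.5, "10G"))  # 120
print(truncate_capacity(120.0, "10G"))  # 120  -> interchangeable
print(truncate_capacity(123.5, "1G"))   # 123
print(truncate_capacity(123.4, "1G"))   # 123  -> interchangeable
```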
[BIOS screen: Raid System Function → Capacity Truncation: To Multiples of 10G / To Multiples of 1G / Disabled]
3.7.6 Ethernet Configuration (12/16/24-port)
Use this feature to set the controller's Ethernet port configuration. It is not necessary to create reserved disk space on any hard disk for the Ethernet port and HTTP service to function; these functions are built into the controller firmware.
[BIOS screen: Main Menu → Ethernet Configuration]
3.7.6.1 DHCP Function
DHCP (Dynamic Host Configuration Protocol) allows network administrators to centrally manage and automate the assignment of IP (Internet Protocol) addresses on a computer network. When using the TCP/IP protocol, a computer must have a unique IP address in order to communicate with other computer systems. Without DHCP, the IP address must be entered manually at each computer system. DHCP lets a network administrator supervise and distribute IP addresses from a central point. The purpose of DHCP is to provide automatic (dynamic) allocation of IP client configurations for a specific time period (called a lease period) and to minimize the work necessary to administer a large IP network.
[BIOS screen: Ethernet Configuration, showing DHCP Function: Enable, Local IP Address: 192.168.001.100, Ethernet Address: 00.04.D9.7F.FF.FF; Select DHCP Setting: Disabled / Enabled]
congure the IP address of the controller, move the cursor bar to the Main menu “Ethernet Conguration Function” item and then press the Enter key. The “Ethernet Conguration” menu appears on the screen. Move the cursor bar to DHCP Function item, then press Enter key to show the DHCP setting. Select the “Disabled’ or ‘Enabled” option to enable or disable the DHCP function. If DHCP is disabled, it will be necessary to manually enter a static IP address that does not conict with other de­vices on the network.
3.7.6.2 Local IP address
If you intend to set up your client computers manually (no DHCP), make sure that the assigned IP address is in the same range as the default router address and that it is unique to your private network. However, it is highly recommended to use DHCP if that option is available on your network. An IP address allocation scheme will reduce the time it takes to set up client computers and eliminate the possibility of administrative errors and duplicate addresses. To manually configure the IP address of the controller, move the cursor bar to the Main Menu "Ethernet Configuration" item and press the Enter key. The "Ethernet Configuration" menu appears on the screen. Move the cursor bar to the "Local IP Address" item, then press the Enter key to show the default address setting in the SATA RAID controller. You can then reassign the static IP address of the controller.
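A quick way to verify that a manually assigned address sits in the same range as the router (the addresses and /24 prefix here are hypothetical examples):

```python
import ipaddress

def same_subnet(host_ip, router_ip, prefix=24):
    """Check whether two addresses fall in the same /prefix network."""
    net = ipaddress.ip_network(f"{router_ip}/{prefix}", strict=False)
    return ipaddress.ip_address(host_ip) in net

# Controller default 192.168.1.100 against a typical 192.168.1.1 router:
print(same_subnet("192.168.1.100", "192.168.1.1"))  # True
print(same_subnet("10.0.0.5", "192.168.1.1"))       # False: wrong range
```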
[BIOS screen: Ethernet Configuration → Local IP Address → "Edit The Local IP Address": 192.168.001.100]
3.7.6.3 Ethernet Address
MAC stands for "Media Access Control"; a MAC address is unique to every single Ethernet device. On an Ethernet LAN, it is the same as your Ethernet address. When the SATA RAID controller's Ethernet port is connected to a local network, a correspondence table (the ARP table) relates your IP address to the SATA RAID controller's physical (MAC) address on the LAN.
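The BIOS screen prints the MAC in a dotted form (00.04.D9.7F.FF.FF), while most host tools (arp, ifconfig) display colon-separated octets. A small helper — purely illustrative, not part of any Areca utility — converts the dotted form to the conventional one when you need to match the controller's entry in the host's ARP table:

```python
def normalize_mac(mac: str) -> str:
    """Convert a dotted or dashed MAC (e.g. 00.04.D9.7F.FF.FF)
    to the conventional colon-separated, upper-case form."""
    octets = mac.replace(".", ":").replace("-", ":").split(":")
    if len(octets) != 6 or any(len(o) != 2 for o in octets):
        raise ValueError(f"not a MAC address: {mac!r}")
    # Raises ValueError if any octet is not valid hexadecimal.
    int("".join(octets), 16)
    return ":".join(o.upper() for o in octets)

print(normalize_mac("00.04.D9.7F.FF.FF"))  # 00:04:D9:7F:FF:FF
```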
[BIOS screen: Main Menu > Ethernet Configuration. Shows DHCP Function: Enable, Local IP Address: 192.168.001.100, Ethernet Address: 00.04.D9.7F.FF.FF.]
3.7.7 View System Events
To view the SATA RAID controller's event information, move the cursor bar to the main menu and select the "View System Events" link, then press the Enter key. The SATA RAID controller's events screen appears.
[BIOS screen: View System Events. Columns: Time, Device, Event Type, ElapseTime, Errors; sample entry: 2004-1-1 12:00:00, H/W Monitor, Raid Powered On.]
Choose this option to view the system events information: Time, Device, Event Type, Elapsed Time, and Errors. The RAID system does not have a real-time clock, so the Time value is the relative time since the SATA RAID controller was powered on.
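Because the controller reports only relative times since power-on, a host-side script that collects these events may want to convert the offsets to wall-clock times. A minimal sketch, assuming you recorded the power-on time yourself (the function name and values are illustrative):

```python
from datetime import datetime, timedelta

def event_wall_clock(power_on: datetime, elapsed_seconds: int) -> datetime:
    """The controller has no real-time clock, so event timestamps are
    offsets from power-on; add them to a host-recorded boot time."""
    return power_on + timedelta(seconds=elapsed_seconds)

# Hypothetical power-on time plus a one-hour event offset:
boot = datetime(2004, 1, 1, 12, 0, 0)
print(event_wall_clock(boot, 3600))  # 2004-01-01 13:00:00
```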
3.7.8 Clear Event Buffer
Use this feature to clear the entire events buffer.
3.7.9 Hardware Monitor
To view the RAID controller's hardware monitor information, move the cursor bar to the main menu and select the "Hardware Monitor" link, then press the Enter key. The Hardware Information screen appears.
The Hardware Monitor Information provides the temperature and fan speed (I/O Processor fan) of the SATA RAID controller.
[BIOS screen: The Hardware Monitor. Shows Fan Speed (RPM): 2178, Battery Status: Not Installed, and per-drive temperatures HDD #1 through #8 (e.g. HDD #3: 48, HDD #6: 49; drives without a reading show "--").]
3.7.10 System Information
Choose this option to display the main processor, CPU instruction cache and data cache size, firmware version, serial number, controller model name, and the cache memory size. To check the system information, move the cursor bar to "System Information"