XtendLan XL-RAID-2804ISSA Installation and Configuration Manual

XL-RAID-2804ISSA
iSCSI – SATA II SUBSYSTEM
Installation and Configuration Manual
Revision 1.0
Contents

Chapter 1 Introduction
  1.1 Key Features
  1.2 Technical Specifications
  1.3 Terminology
  1.4 RAID Concepts

Chapter 2 Getting Started
  2.1 Packaging, Shipment and Delivery
  2.2 Unpacking the Subsystem
  2.3 Identifying Parts of the XL-RAID-2804ISSA Subsystem
    2.3.1 Front View
    2.3.2 Rear View
    2.3.3 Environmental Status LEDs
    2.3.4 Smart Function Panel
  2.4 Connecting iSCSI Subsystem to Your Network
  2.5 Powering On
  2.6 Installing Hard Drives
    2.6.1 HDD Status Indicator
  2.7 iSCSI Introduction
  2.8 Management Methods
    2.8.1 Web GUI
    2.8.2 Console Serial Port
    2.8.3 Remote Control – Secure Shell
  2.9 Enclosure
    2.9.1 LCD Control Module (LCM)
    2.9.2 System Buzzer

Chapter 3 Web GUI Guideline
  3.1 XL-RAID-2804ISSA GUI Hierarchy
  3.2 Login
  3.3 Quick Install
  3.4 System Configuration
    3.4.1 System Name
    3.4.2 IP Address
    3.4.3 Language
    3.4.4 Login Config
    3.4.5 Password
    3.4.6 Date
    3.4.7 Mail
    3.4.8 SNMP
    3.4.9 Messenger
    3.4.10 System Log Server
    3.4.11 Event Log
  3.5 iSCSI Config
    3.5.1 Entity Property
    3.5.2 NIC
    3.5.3 Node
    3.5.4 Session
    3.5.5 CHAP Account
  3.6 Volume Configuration
    3.6.1 Volume Relationship Diagram
    3.6.2 Physical Disk
    3.6.3 Volume Group
    3.6.4 User Data Volume
    3.6.5 Cache Volume
    3.6.6 Logical Unit Number
    3.6.7 Examples
  3.7 Enclosure Management
    3.7.1 SES Configuration
    3.7.2 Hardware Monitor
    3.7.3 Hard Drive S.M.A.R.T. Function Support
    3.7.4 UPS
  3.8 System Maintenance
    3.8.1 Upgrade
    3.8.2 Info
    3.8.3 Reset to Default
    3.8.4 Config Import & Export
    3.8.5 Shutdown
  3.9 Logout

Chapter 4 Advanced Operation
  4.1 Rebuild
  4.2 VG Migration and Expansion
  4.3 UDV Extension
  4.4 Snapshot/Rollback
    4.4.1 Create Snapshot Volume
    4.4.2 Auto Snapshot
    4.4.3 Rollback
  4.5 Disk Roaming
  4.6 Support Microsoft MPIO and MC/S

Appendix
  A. Certification List
  B. Event Notifications
  C. Known Issues
  D. Microsoft iSCSI Initiator
  E. Trunking/LACP Setup Instructions
  F. MPIO and MC/S Setup Instructions
  G. QLogic QLA4010C Setup Instructions
  H. Installation Steps for Large Volume (TB)
Chapter 1 Introduction
The XL-RAID-2804ISSA RAID Subsystem
XL-RAID-2804ISSA connects to the host system through an iSCSI interface and can be configured to any RAID level. XL-RAID-2804ISSA provides reliable data protection for servers, and the RAID 6 function is available. RAID 6 allows two HDD failures without any impact on the existing data; data can be recovered from the remaining data and parity drives.
Snapshot-on-the-box is a fully usable copy of a defined collection of data that contains an image of the data as it appeared at a given point in time, i.e., point-in-time data replication. It provides consistent and instant copies of data volumes without any system downtime. XL-RAID-2804ISSA Snapshot-on-the-box can keep up to 32 snapshots for all data volumes. A rollback feature is provided for easily restoring previously snapshot data while the volume remains in use for further data access. Data access continues as usual, including reads and writes, without any impact on end users. The "on-the-box" terminology means that no proprietary agents need to be installed on the host side: the snapshot is taken at the target side and done by XL-RAID-2804ISSA. It does not consume any host CPU time, so the server stays dedicated to its own applications. Snapshot copies can be taken manually or on a schedule (every hour or every day), depending on how often the data is modified.
XL-RAID-2804ISSA is a highly cost-effective disk array subsystem with completely integrated high-performance and data-protection capabilities that meet or exceed the highest industry standards, making it a strong data solution for small/medium business users.
1.1 Key Features
• Two GbE ports supporting independent access, fail-over, and load-balancing (802.3ad port trunking, LACP)
• Supports iSCSI jumbo frames
• Supports Microsoft Multipath I/O (MPIO)
• Supports RAID levels 0, 1, 0+1, 3, 5, 6, 10, 30, 50, 60 and JBOD
• Local N-way mirror: extension to RAID 1, with N copies of the disk
• Global and dedicated hot spare disks
• Write-through or write-back cache policy for different application usage
• Dedicated or shared cache allocation for volume usage
• Supports greater than 2TB per volume set (64-bit LBA support)
• Supports manual or scheduled volume snapshots (up to 32 snapshots)
• Snapshot rollback mechanism
• Online volume migration with no system down-time
• Online volume expansion
• Instant RAID volume availability and background initialization
• Supports S.M.A.R.T., NCQ, and OOB staggered spin-up capable drives
1.2 Technical Specifications
• Form factor: 2U 19-inch rackmount chassis
• RAID processor: Intel IOP341 64-bit
• RAID levels: 0, 1, 0+1, 3, 5, 6, 10, 30, 50, 60 and JBOD
• N-way mirror (N copies of the disk)
• Cache memory: 512MB ~ 2GB DDR II
• Number of channels (host + drive): 2 + 8
• Host bus interface: 1Gb/s Ethernet
• Drive bus interface: 3Gb/s SATA II
• Hot-swap drive trays: eight (8) 1-inch trays
• Host access control: Read-Write and Read-Only
• Supports CHAP authentication
• 802.3ad port trunking, LACP support
• Jumbo frame support
• Maximum logical volumes: up to 256
• Maximum host connections: up to 32
• Maximum host clustering: up to 8 for one logical volume
• Manual/scheduled volume snapshots: up to 32
• Snapshot rollback mechanism support
• Supports Microsoft Multipath I/O (MPIO)
• Global/dedicated cache configurable by volume
• Global and dedicated hot spare disks
• Online volume migration
• Online volume set expansion
• Configurable stripe size
• Instant RAID volume availability and background initialization support
• Supports over 2TB per volume
• Online consistency check
• Bad block auto-remapping
• S.M.A.R.T. support
• New disk insertion/removal detection
• Auto volume rebuild
• Array roaming
• Audible alarm
• Password protection
• UPS connection
• Hot-swap power supplies: two (2) 350W power supplies with PFC
• Cooling fans: 2
• Battery backup (optional)
• Power requirements: AC 90V ~ 264V full range, 8A ~ 4A, 47Hz ~ 63Hz
• Relative humidity: 10% ~ 85% non-condensing
• Operating temperature: 10°C ~ 40°C (50°F ~ 104°F)
• Physical dimensions: 88(H) x 482(W) x 650(D) mm
• Weight: 12.5 kg (without drives)
1.3 Terminology
The document uses the following terms:
RAID
RAID is the abbreviation of "Redundant Array of Independent Disks". There are different RAID levels with different degrees of data protection, data availability, and performance to the host environment.

PD
Physical Disk. A member disk of one specific volume group.

VG
Volume Group. A collection of removable media or physical disks. One VG consists of a set of UDVs and owns one RAID level attribute.

UDV
User Data Volume. Each VG can be divided into several UDVs. The UDVs from one VG share the same RAID level, but may have different volume capacities.

CV
Cache Volume. XL-RAID-2804ISSA uses the on-board memory as cache. All RAM (except for the part occupied by the controller) can be used as cache. The user can dedicate the cache to one UDV or share it among all UDVs. Each UDV is associated with one CV for data transactions, and each CV can be assigned a different cache memory size.

LUN
Logical Unit Number. A logical unit number (LUN) is a unique identifier used on an iSCSI connection which enables it to differentiate among separate devices (each of which is a logical unit).

GUI
Graphical User Interface.

RAID width, RAID copy, RAID row (RAID cells in one row)
RAID width, copy, and row are used to describe one VG. E.g.:
1. One 4-disk RAID 0 volume: RAID width=4; RAID copy=1; RAID row=1.
2. One 3-way mirroring volume: RAID width=1; RAID copy=3; RAID row=1.
3. One RAID 10 volume over three 4-disk RAID 1 volumes: RAID width=1; RAID copy=4; RAID row=3.
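The three terms multiply to give the number of member disks in the VG. A minimal sketch (Python, purely illustrative and not part of the subsystem's interface) checking the three examples above:

    # Total member disks described by width/copy/row:
    # disks = RAID width * RAID copy * RAID row (illustrative check only).
    def total_disks(width: int, copy: int, row: int) -> int:
        return width * copy * row

    assert total_disks(width=4, copy=1, row=1) == 4    # 4-disk RAID 0
    assert total_disks(width=1, copy=3, row=1) == 3    # 3-way mirror
    assert total_disks(width=1, copy=4, row=3) == 12   # RAID 10 over three 4-disk RAID 1 volumes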
WT
Write-Through cache write policy. A caching technique in which the completion of a write request is not signaled until the data is safely stored on non-volatile media. Data is synchronized in both the data cache and the accessed physical disks.

WB
Write-Back cache write policy. A caching technique in which the completion of a write request is signaled as soon as the data is in the cache; the actual writing to non-volatile media occurs at a later time. It speeds up system write performance but bears the risk that data may be inconsistent between the cache and the physical disks for a short time interval.
RO
Set the volume to be Read-Only.

DS
Dedicated Spare disks. These spare disks are only used by one specific VG; other VGs cannot use them for rebuilding.

GS
Global Spare disks. GS disks are shared for rebuilding purposes: if a VG needs a spare disk for rebuilding, it can take one out of the common spare disk pool.

DC
Dedicated Cache.

GC
Global Cache.

DG
DeGraded mode. Not all of the array's member disks are functioning, but the array is able to respond to application read and write requests to its virtual disks.

S.M.A.R.T.
Self-Monitoring, Analysis and Reporting Technology.

WWN
World Wide Name.

HBA
Host Bus Adapter.

MPIO
Multi-Path Input/Output.

MC/S
Multiple Connections per Session.

S.E.S.
SCSI Enclosure Services.

NIC
Network Interface Card.

iSCSI
Internet Small Computer Systems Interface.

LACP
Link Aggregation Control Protocol.

MTU
Maximum Transmission Unit.

CHAP
Challenge Handshake Authentication Protocol. An optional security mechanism to control access to an iSCSI storage system over the iSCSI data ports.

iSNS
Internet Storage Name Service.
1.4 RAID Concepts
RAID Fundamentals
The basic idea of RAID (Redundant Array of Independent Disks) is to combine multiple inexpensive disk drives into an array of disk drives to obtain performance, capacity and reliability that exceeds that of a single large drive. The array of drives appears to the host computer as a single logical drive.
Five types of array architectures, RAID 1 through RAID 5, were originally defined; each provides disk fault-tolerance with different compromises in features and performance. In addition to these five redundant array architectures, it has become popular to refer to a non-redundant array of disk drives as a RAID 0 array.
Disk Striping
Fundamental to RAID technology is striping. This is a method of combining multiple drives into one logical storage unit. Striping partitions the storage space of each drive into stripes, which can be as small as one sector (512 bytes) or as large as several megabytes. These stripes are then interleaved in a rotating sequence, so that the combined space is composed alternately of stripes from each drive. The specific type of operating environment determines whether large or small stripes should be used.
Most operating systems today support concurrent disk I/O operations across multiple drives. However, in order to maximize throughput for the disk subsystem, the I/O load must be balanced across all the drives so that each drive can be kept busy as much as possible. In a multiple drive system without striping, the disk I/O load is never perfectly balanced. Some drives will contain data files that are frequently accessed and some drives will rarely be accessed.
By striping the drives in the array with stripes large enough so that each record falls entirely within one stripe, most records can be evenly distributed across all drives. This keeps all drives in the array busy during heavy load situations, allows all drives to work concurrently on different I/O operations, and thus maximizes the number of simultaneous I/O operations that can be performed by the array.
Definition of RAID Levels
RAID 0 is typically defined as a group of striped disk drives without parity or data redundancy.
RAID 0 arrays can be configured with large stripes for multi-user environments or small stripes for single-user systems that access long sequential records. RAID 0 arrays deliver the best data storage efficiency and performance of any array type. The disadvantage is that if one drive in a RAID 0 array fails, the entire array fails.
RAID 1, also known as disk mirroring, is simply a pair of disk drives that store duplicate data but appear to the computer as a single drive. Although striping is not used within a single mirrored drive pair, multiple RAID 1 arrays can be striped together to create a single large array consisting of pairs of mirrored drives. All writes must go to both drives of a mirrored pair so that the information on the drives is kept identical. However, each individual drive can perform simultaneous, independent read operations. Mirroring thus doubles the read performance of a single non-mirrored drive while the write performance is unchanged. RAID 1 delivers the best performance of any redundant array type. In addition, there is less performance degradation during drive failure than in RAID 5 arrays.
RAID 3 sector-stripes data across groups of drives, but one drive in the group is dedicated to storing parity information. RAID 3 relies on the embedded ECC in each sector for error detection. In the case of drive failure, data recovery is accomplished by calculating the exclusive OR (XOR) of
the information recorded on the remaining drives. Records typically span all drives, which optimizes the disk transfer rate. Because each I/O request accesses every drive in the array, RAID 3 arrays can satisfy only one I/O request at a time. RAID 3 delivers the best performance for single-user, single-tasking environments with long records. Synchronized-spindle drives are required for RAID 3 arrays in order to avoid performance degradation with short records. RAID 5 arrays with small stripes can yield similar performance to RAID 3 arrays.
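To make the XOR recovery concrete, here is a short sketch (hypothetical 4-byte stripes; this illustrates the principle only, not the controller's implementation):

    # XOR parity as used conceptually by RAID 3/5: parity is the byte-wise
    # XOR of all data stripes; any single lost stripe is rebuilt by XOR-ing
    # the parity with the surviving stripes.
    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def parity(stripes: list) -> bytes:
        p = bytes(len(stripes[0]))          # start from all zeros
        for s in stripes:
            p = xor_bytes(p, s)
        return p

    stripes = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xAA\xBB\xCC\xDD"]
    p = parity(stripes)

    # Simulate losing the second drive and rebuilding it from the rest + parity.
    recovered = parity([stripes[0], stripes[2], p])
    assert recovered == stripes[1]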
Under RAID 5, parity information is distributed across all the drives. Since there is no dedicated parity drive, all drives contain data, and read operations can be overlapped on every drive in the array. Write operations will typically access one data drive and one parity drive. However, because different records store their parity on different drives, write operations can usually be overlapped.
Dual-level RAID achieves a balance between the increased data availability inherent in RAID 1 and RAID 5 and the increased read performance inherent in disk striping (RAID 0). These arrays are sometimes referred to as RAID 0+1 or RAID 10 and RAID 0+5 or RAID 50.
RAID 6 is similar to RAID 5 in that data protection is achieved by writing parity information to the physical drives in the array. With RAID 6, however, two sets of parity data are used. These two sets are different, and each set occupies a capacity equivalent to that of one of the constituent drives. The main advantage of RAID 6 is high data availability: any two drives can fail without loss of critical data.
In summary:
• RAID 0 is the fastest and most efficient array type but offers no fault tolerance. RAID 0 requires a minimum of two drives.
• RAID 1 is the best choice for performance-critical, fault-tolerant environments. RAID 1 is the only choice for fault tolerance if no more than two drives are used.
• RAID 3 can be used to speed up data transfer and provide fault tolerance in single-user environments that access long sequential records. However, RAID 3 does not allow overlapping of multiple I/O operations and requires synchronized-spindle drives to avoid performance degradation with short records. RAID 5 with a small stripe size offers similar performance.
• RAID 5 combines efficient, fault-tolerant data storage with good performance characteristics. However, write performance and performance during drive failure are slower than with RAID 1. Rebuild operations also require more time than with RAID 1 because parity information is also reconstructed. At least three drives are required for RAID 5 arrays.
• RAID 6 is essentially an extension of RAID 5 which allows additional fault tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped at the block level across a set of drives, just as in RAID 5, and a second set of parity is calculated and written across all the drives. RAID 6 provides extremely high data fault tolerance and can sustain multiple simultaneous drive failures, making it well suited to mission-critical applications.
RAID Management
The subsystem can implement several different levels of RAID technology. RAID levels supported by the subsystem are shown below.
RAID Level / Description / Minimum Drives

0 (minimum 1 drive): Block striping is provided, which yields higher performance than with individual drives. There is no redundancy.

1 (minimum 2 drives): Drives are paired and mirrored. All data is 100% duplicated on an equivalent drive. Fully redundant.

N-way mirror (minimum 2 drives): Extension to RAID 1. It has N copies of the disk.

3 (minimum 3 drives): Data is striped across several physical drives. Parity protection is used for data redundancy.

5 (minimum 3 drives): Data is striped across several physical drives. Parity protection is used for data redundancy.

6 (minimum 4 drives): Data is striped across several physical drives. Parity protection is used for data redundancy. Requires N+2 drives because of the two-dimensional parity scheme.

0+1 (minimum 4 drives): Mirroring of two RAID 0 disk arrays. This level provides striping and redundancy through mirroring.

10 (minimum 4 drives): Striping over two RAID 1 disk arrays. This level provides mirroring and redundancy through striping.

30 (minimum 6 drives): Combination of RAID levels 0 and 3. This level is best implemented on two RAID 3 disk arrays with data striped across both arrays.

50 (minimum 6 drives): RAID 50 provides the features of both RAID 0 and RAID 5: it includes both parity and disk striping across multiple drives. RAID 50 is best implemented on two RAID 5 disk arrays with data striped across both arrays.

60 (minimum 8 drives): RAID 60 provides the features of both RAID 0 and RAID 6: it includes both parity and disk striping across multiple drives. RAID 60 is best implemented on two RAID 6 disk arrays with data striped across both arrays.

JBOD (minimum 1 drive): The abbreviation of "Just a Bunch Of Disks". JBOD needs at least one hard drive.
Chapter 2 Getting Started
2.1 Packaging, Shipment and Delivery
• Before removing the subsystem from the shipping carton, visually inspect the physical condition of the shipping carton.
• Unpack the subsystem and verify that the contents of the shipping carton are all there and in good condition.
• Exterior damage to the shipping carton may indicate that the contents of the carton are damaged.
• If any damage is found, do not remove the components; contact the dealer where you purchased the subsystem for further instructions.
2.2 Unpacking the Subsystem
The package contains the following items:
• iSCSI RAID subsystem unit
• Two power cords
• Three Ethernet LAN cables
• One external null modem cable
• Installation Reference Guide
• Spare screws, etc.
If any of these items are missing or damaged, please contact your dealer or sales representative for assistance.
2.3 Identifying Parts of the XL-RAID-2804ISSA Subsystem
The illustrations below identify the various parts of the subsystem.
2.3.1 Front View
2.3.2 Rear View
1. Power Supply Alarm Reset button
You can push the power supply reset button to stop the power supply buzzer alarm.
2. Uninterruptible Power Supply (UPS) Port (APC Smart UPS only)
The subsystem may come with an optional UPS port allowing you to connect an APC Smart UPS device. Connect the cable from the UPS device to the UPS port located at the rear of the subsystem. This automatically allows the subsystem to use the functions and features of the UPS.
3. R-Link Port: Remote Link through RJ-45 Ethernet for remote management
The subsystem is equipped with one 10/100 Ethernet RJ45 LAN port. Use a web browser to manage the RAID subsystem through Ethernet for remote configuration and monitoring.
4. Monitor Port
The subsystem is equipped with a serial monitor port allowing you to connect a PC or terminal.
5. Fan Fail indicator
If a fan fails, this LED will turn red.
6. Cooling Fan module
Two blower fans are located at the rear of the subsystem. They provide sufficient airflow and heat dispersion inside the chassis. If a fan fails to function, the Fan Fail LED will turn red and an alarm will sound.
7. Power Supply Power On Indicator
Green LED indicates power is on.
8. System Power On Indicator
Green LED indicates power is on.
9. Power Supply Unit 1 ~ 2
Two power supplies (power supply 1 and power supply 2) are located at the rear of the subsystem. Turn on the power of these power supplies to power-on the subsystem. The “power” LED at the front panel will turn green.
If a power supply fails to function or a power supply was not turned on, the Power Fail LED will turn red and an alarm will sound.
2.3.3 Environmental Status LEDs
Power LED: Green LED indicates power is ON.
Power Fail LED: If a redundant power supply unit fails, this LED will turn red and an alarm will sound.
Fan Fail LED: When a fan fails, this LED will turn red and an alarm will sound.
Over Temperature LED: If a temperature irregularity occurs in the system (HDD slot temperature over 45°C), this LED will turn red and an alarm will sound.
Voltage Warning LED: If a voltage abnormality occurs, this LED will turn red and an alarm will sound.
Access LED: This LED will blink blue when the RAID controller is busy/active.
2.3.4 Smart Function Panel
Up and Down Arrow buttons: Use the Up or Down arrow keys to scroll through the information on the LCD screen. These are also used to move between menus when you configure the subsystem.
Select button: Used to enter the option you have selected.
Exit button (EXIT): Press this button to return to the previous menu.
2.4 Connecting iSCSI Subsystem to Your Network
To connect the iSCSI unit to the network, insert the cable that came with the unit into the network connection (LAN1) on the back of the iSCSI unit. Insert the other end into a Gigabit BASE-T Ethernet connection on your network hub or switch. You may connect the other network port (LAN2) if needed.
For remote management of the iSCSI unit, connect the R-Link port to your network.
2.5 Powering On
1. Plug in all the power cords into the AC Power Input Socket located at the rear of the subsystem.
2. Turn on Power Switch 1 and 2.
3. The Power LED on the front Panel will turn green.
2.6 Installing Hard Drives
This section describes the physical locations of the hard drives supported by the subsystem and gives instructions on installing a hard drive. The subsystem supports hot-swapping, allowing you to install or replace a hard drive while the subsystem is running.
a. Pull out an empty disk tray. Pull the handle outwards to remove the carrier from the enclosure.
b. Take off the bracket before installing the hard drive.
c. Place the hard drive in the disk tray.
d. Install the mounting screws on each side to secure the drive in the tray.
e. Slide the tray into a slot until it clicks into place. The HDD status LED will turn green if the subsystem is on.
f. Press the lever in until you hear the latch click into place.
g. If the HDD power LED does not turn green, check that the hard drive is in good condition. If the hard drive is not being accessed, the HDD access LED will not illuminate; that LED blinks only during access.
2.6.1 HDD Status Indicator
HDD Status LEDs: Green LED indicates power is on and the hard drive status is good for this slot. If the hard drive is defective or has failed, the LED is orange.
HDD Access LEDs: These LEDs blink blue when the hard drive is being accessed.
2.7 iSCSI Introduction
iSCSI (Internet SCSI) is a protocol which encapsulates SCSI (Small Computer System Interface) commands and data in TCP/IP packets for linking storage devices with servers over common IP infrastructures. iSCSI provides high performance SANs over standard IP networks like LAN, WAN or the Internet.
IP SANs are true SANs (Storage Area Networks) which allow servers to attach to an almost unlimited number of storage volumes by using iSCSI over TCP/IP networks. IP SANs can scale storage capacity with any type and brand of storage system, can use any type of network (Ethernet, Fast Ethernet, Gigabit Ethernet), and can combine operating systems (Microsoft Windows, Linux, Solaris, etc.) within the SAN network. IP SANs also include mechanisms for security, data replication, multi-pathing, and high availability.
A storage protocol such as iSCSI has two ends in the connection: the initiator and the target. In iSCSI, these are called the iSCSI initiator and the iSCSI target. The iSCSI initiator requests or initiates any iSCSI communication; it requests all SCSI operations such as read or write. An initiator is usually located on the host/server side (either an iSCSI HBA or an iSCSI software initiator).
The iSCSI target is the storage device itself or an appliance which controls and serves volumes or virtual volumes. The target is the device which performs SCSI commands or bridges them to an attached storage device. iSCSI targets can be disks, tapes, RAID arrays, tape libraries, etc.
[Figure: Host 1 (initiator, NIC) and Host 2 (initiator, iSCSI HBA) connect through an IP SAN to iSCSI device 1 (target) and iSCSI device 2 (target).]
The host side needs an iSCSI initiator. The initiator is a driver which handles the SCSI traffic over iSCSI. The initiator can be software or hardware (HBA). Please refer to the certification list of iSCSI HBA(s) in Appendix A. OS native initiators or other software initiators use the standard TCP/IP stack and Ethernet hardware, while iSCSI HBA(s) use their own iSCSI and TCP/IP stacks on board.
Hardware iSCSI HBAs provide their own initiator tools; please refer to the vendor's HBA user manual. Microsoft, Linux and Mac provide software iSCSI initiator drivers. The available links are below:
1. Link to download the Microsoft iSCSI software initiator:
http://www.microsoft.com/downloads/details.aspx?FamilyID=12cb3c1a-15d6-4585-b385­befd1319f825&DisplayLang=en
Please refer to Appendix D for Microsoft iSCSI initiator installation procedure.
2. A Linux iSCSI initiator is also available. There are different iSCSI drivers for different kernels. If you need the latest Linux iSCSI initiator, please visit the Open-iSCSI project for the most up-to-date information. The Linux-iSCSI (sfnet) and Open-iSCSI projects merged on April 11, 2005.
Open-iSCSI website: http://www.open-iscsi.org/ Open-iSCSI README: http://www.open-iscsi.org/docs/README Features: http://www.open-iscsi.org/cgi-bin/wiki.pl/Roadmap Support Kernels: http://www.open-iscsi.org/cgi-bin/wiki.pl/Supported_Kernels Google groups: http://groups.google.com/group/open-iscsi/threads?gvc=2
http://groups.google.com/group/open-iscsi/topics
Open-iSCSI Wiki: http://www.open-iscsi.org/cgi-bin/wiki.pl
3. ATTO iSCSI initiator is available for Mac.
Website: http://www.attotech.com/xtend.html
2.8 Management Methods
There are three methods to manage XL-RAID-2804ISSA, described in the following sections:
2.8.1 Web GUI
XL-RAID-2804ISSA supports a graphical user interface for managing the system. Be sure to connect the LAN cable to your R-Link port. The default setting of the management port IP is DHCP, and the DHCP address is displayed on the LCM. Check the LCM for the IP first, then open a browser and type the DHCP address. (The DHCP address is dynamic, so you may need to check it again after every reboot.) When no DHCP service is available, XL-RAID-2804ISSA uses zero configuration (Zeroconf) to get an IP address.
E.g., the LCM shows that XL-RAID-2804ISSA got the DHCP address 192.168.10.50 from the DHCP server:
192.168.10.50
XL-RAID-2804ISSA
http://192.168.10.50 or https://192.168.10.50 (https: connection encrypted with Secure Sockets Layer (SSL). Please be aware that https is slower than http.)
The first time you click any function, a dialog pops up to authenticate the current user.
Login name: admin
Default password: 00000000
Or log in with the read-only account, which only allows viewing the configuration and cannot change any settings.
Login name: user
Default password: 1234
2.8.2 Console Serial Port
Use a null modem cable to connect to the console port. The console settings are: baud rate 115200, 8 data bits, 1 stop bit, and no parity. Terminal type: vt100. Login name: admin. Default password: 00000000.
2.8.3 Remote Control – Secure Shell
SSH (secure shell) is required to log in to XL-RAID-2804ISSA remotely. SSH client software is available at the following web sites:
SSHWinClient: http://www.ssh.com/
Putty: http://www.chiark.greenend.org.uk/
Host name: 192.168.10.50 (Please check your DHCP address for this field.)
Login name: admin
Default password: 00000000
NOTE: The XL-RAID-2804ISSA series only supports SSH for remote control. To use SSH, the IP address and the password are required for login.
2.9 Enclosure
2.9.1 LCD Control Module (LCM)
There are four buttons to control the XL-RAID-2804ISSA LCM (LCD Control Module): ▲ (up), ▼ (down), ESC (Escape), and ENT (Enter).
After booting up the system, the following screen shows management port IP and model name:
192.168.10.50
XL-RAID-2804ISSA
Press ENT; the LCM functions "Alarm Mute", "Reset/Shutdown", "Quick Install", "View IP Setting", "Change IP Config" and "Reset to Default" will then rotate as you press ▲ (up) and ▼ (down). When a WARNING- or ERROR-level event happens, the LCM also shows the event log, giving users more detail from the front panel. The following table describes each function.
System Info
View System information of Firmware Version & RAM Size.
Alarm Mute
Mute alarm when error occurs.
Reset/Shutdown
Reset or shutdown controller.
Quick Install
Quick three steps to create a volume. Please refer to section 3.3 for operation in web UI.
View IP Setting
Display current IP address, subnet mask, and gateway.
Change IP Config
Set IP address, subnet mask, and gateway. There are 2 selections, DHCP (Get IP address from DHCP server) or set static IP.
Reset to Default
Resets the password to the default (00000000) and sets the IP address back to the default DHCP setting.
Default IP address: 192.168.10.50 (DHCP)
Default subnet mask: 255.255.255.0
Default gateway: 192.168.10.254
The following is the LCM menu hierarchy:

proIPS ▲▼
  [System Info]
    [Firmware Version]
    [RAM Size]
  [Alarm Mute]
    [▲Yes  No▼]
  [Reset/Shutdown]
    [Reset] → [▲Yes  No▼]
    [Shutdown] → [▲Yes  No▼]
  [Quick Install]
    RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 (xxxxxx GB)
      → Volume Size (xxxxxx G): adjust volume size
      → Apply The Config: [▲Yes  No▼]
  [View IP Setting]
    [IP Config] ([Static IP] or [DHCP])
    [IP Address] (e.g., [192.168.010.050])
    [IP Subnet Mask] (e.g., [255.255.255.0])
    [IP Gateway] (e.g., [192.168.010.254])
  [Change IP Config]
    [DHCP] → [▲Yes  No▼]
    [Static IP]
      [IP Address]: adjust IP address
      [IP Subnet Mask]: adjust subnet mask
      [IP Gateway]: adjust gateway IP
      [Apply IP Setting]: [▲Yes  No▼]
  [Reset to Default]
    [▲Yes  No▼]
CAUTION! Before powering off, it is better to execute "Shutdown" to flush the data from the cache to the physical disks.
2.9.2 System Buzzer
The system buzzer features are described in the following:
1. The system buzzer sounds for one second when the system boots up successfully.
2. The system buzzer sounds continuously when an error-level event happens. The alarm stops after being muted.
3. The alarm is muted automatically when the error situation is resolved. E.g., when a RAID 5 volume is degraded, the alarm rings immediately; after the user replaces or adds a physical disk for rebuilding, and the rebuild completes, the alarm is muted automatically.
Chapter 3 Web GUI Guideline
3.1 XL-RAID-2804ISSA GUI Hierarchy
The table below shows the hierarchy of the XL-RAID-2804ISSA GUI.
Quick Install
  Step 1 / Step 2 / Step 3 / Confirm
System Config
  System name: System name
  IP address: DHCP / Static / HTTP port / HTTPS port / SSH port
  Language: Language
  Login config: Auto logout / Login lock
  Password: Old password / Password / Confirm
  Date: Date / Time / Time zone / Daylight saving / NTP
  Mail: Mail-from address / Mail-to address / SMTP relay / Authentication / Send test mail / Send events
  SNMP: SNMP trap address / Community
  System log server: Server IP / Port / Facility / Event level
  Event log: Filter / Download / Mute / Clear
iSCSI config
  Entity Property: Entity name / iSNS
  NIC: Link aggregation or Multi-homed / IP settings / Default gateway / Set MTU / MAC address
  Node: Node name / CHAP Authentication
  Session: iSCSI sessions and connections
  CHAP account: Create / Delete CHAP account
Volume config
  Physical disk: Free disc / Global spares / Dedicated spares / More information / Auto Spindown
  Volume group: Create / Delete / More information / Rename / Migrate / Expand
  User data volume: Create / Delete / Attach LUN / Snapshot / More information / Rename / Extend / Set read/write mode / Set priority / Resize Snapshot space / Auto Snapshot
  Cache volume: Create / Delete / More information / Resize / Dedicated cache
  Logical unit: Attach / Detach
Enclosure management
  SES config: Enable / Disable
  Hardware monitor: Status / Auto shutdown
  S.M.A.R.T.: S.M.A.R.T. for physical disks
  UPS: UPS Type / Shutdown Battery Level / Shutdown Delay / Shutdown UPS
Maintenance
  Upgrade: Browse the firmware to upgrade / Export config
  Info: System information
  Reset to default: Reset to factory default
  Config import & export: Controller configuration import and export function
  Shutdown: Reboot / Shutdown
Logout
3.2 Login
XL-RAID-2804ISSA provides a graphical user interface (GUI) to operate the system. Be sure to connect the LAN cable. The default IP setting is DHCP; open the browser and enter:
http://192.168.10.50 (Please check the DHCP address first on the LCM.)
The first time you click any function, a dialog pops up for authentication.
Login name: admin
Default password: 00000000
After login, you can choose the function blocks on the left side of the window to do configuration.
There are six indicators at the top-right corner for backplane solutions, and cabling solutions have three indicators at the top-right corner.
1. RAID light: Green means RAID works well. Red represents RAID failure happening.
2. Temperature light: Green is normal. Red represents abnormal temperature.
3. Voltage light: Green is normal. Red represents abnormal voltage status.
4. UPS light: Green is normal. Red represents abnormal UPS status.
5. Fan light: Green is normal. Red represents abnormal fan status.
6. Power light: Green is normal. Red represents abnormal power status.
3.3 Quick Install
It is easy to use the "Quick install" function to create a volume. Depending on how many physical disks or how much residual space on created VGs is free, the system calculates the maximum space available at RAID levels 0/1/3/5/6. The "Quick install" function occupies all residual VG space for one UDV, leaving no space for snapshots or spares. If the snapshot function is needed, please create volumes manually.
The XL-RAID-2804ISSA Quick Install function has a smart policy. When the system is fully populated with 8 HDDs of the same size, Quick Install lists all possibilities and sizes among the different RAID levels and uses all available HDDs for the RAID level the user chooses. But when the system contains HDDs of different sizes, e.g., 4 x 200G HDD and 4 x 80G HDD, XL-RAID-2804ISSA also lists all possibilities and combinations of the different RAID levels and sizes. After the user chooses a RAID level, some HDDs may still be left unused (Free status). This is the result of XL-RAID-2804ISSA's smart Quick Install policy, which gives the user:
1. the biggest capacity for the chosen RAID level, and
2. the fewest disks for that RAID level/volume size.
E.g., the user chooses RAID 5 and the system has 6 x 200G HDD and 2 x 80G HDD inserted. Using all 8 HDDs for a RAID 5 volume, the maximum size would be 560G (80G x 7). But XL-RAID-2804ISSA does a smarter check and finds the most efficient use of the HDDs: using only the 200G HDDs gives a volume size of 200G x 5 = 1000G. The volume size is bigger, and the HDD capacity is used in full.
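The capacity comparison behind this policy can be sketched in a few lines (Python, illustrative only; the firmware's actual algorithm is not published). RAID 5 usable capacity is (n - 1) times the smallest disk in the set:

    def raid5_capacity(disks_gb):
        """Usable RAID 5 capacity: (n - 1) * smallest member disk."""
        return (len(disks_gb) - 1) * min(disks_gb)

    disks = [200] * 6 + [80] * 2

    assert raid5_capacity(disks) == 560          # all 8 disks: 80G * 7
    assert raid5_capacity([200] * 6) == 1000     # six 200G disks: 200G * 5

    # Choose the subset (largest disks first) that maximizes capacity,
    # trying every count from the RAID 5 minimum (3) up to all disks.
    candidates = [sorted(disks, reverse=True)[:n] for n in range(3, len(disks) + 1)]
    best = max(candidates, key=raid5_capacity)
    assert raid5_capacity(best) == 1000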
Step 1: Select "Quick install", then choose the RAID level. After choosing the RAID level, click the button to proceed; this links to another page where the "LUN" can be set up.
Step 2: Please select a LUN number. Access control of the host is shown as a wildcard "*", which means every host can access this volume. On this page, the "Volume size" can be changed; the maximum volume size is shown. If you re-enter the size, be sure it is less than or equal to the maximum volume size. Then click the button to continue.
Step 3: Confirm page. Click the confirm button if all settings are correct. A page showing the "User data volume" just created will then be displayed.
Done. You can start to use the system now.
3.4 System Configuration
The "System config" selection is for the setup of "System name", "IP address", "Language", "Login config", "Password", "Date", "Mail", "SNMP", "Messenger", and "System log server", and for viewing the "Event log".
3.4.1 System Name
Select "System name" to change the system name. The default system name is composed of the model name, e.g., XL-RAID-2804ISSA.
3.4.2 IP address
Select "IP address" to change the IP address for remote administration. There are 2 selections: DHCP (get an IP address from the DHCP server) or static IP. The default setting is DHCP enabled. Users can change the HTTP, HTTPS, and SSH port numbers when the default port numbers are not allowed on the host/server.
3.4.3 Language
Select "Language" to change the GUI language. There are 3 selections: Auto Detect, English, and Simplified Chinese. The default language is the same as your browser's (IE or Firefox) default language.
3.4.4 Login Config
Select "Login config" to restrict login to a single admin and to set the auto-logout timing. Allowing only one admin prevents multiple users from accessing the same controller at the same time.
1. Auto logout: Options are (1) Disable, (2) 5 mins, (3) 30 mins, (4) 1 hour. When the user is idle for that period of time, the system logs out automatically to allow another user to log in.
2. Login lock: Disable/Enable. When the login lock is enabled, the system allows only one user to log in and modify the system settings.
3.4.5 Password
Select "Password" to change the administrator password. The maximum length of the admin password is 12 characters.
3.4.6 Date
Select "Date" to set up the current date, time, time zone, and NTP server before use.
3.4.7 Mail
Select "Mail" to enter at most 3 mail addresses for receiving event notifications. Some mail servers check the "Mail-from address" and need authentication for anti-spam. Please fill in the necessary fields and select "Send test mail" to check whether the email works. Users can also select which levels of event logs should be sent out by mail; by default only ERROR and WARNING event logs are enabled.
3.4.8 SNMP
Select "SNMP" to set up SNMP traps for alerting via SNMP. It allows up to 3 SNMP trap addresses. The default community setting is "public". Users can choose the event log types; by default only INFO event logs are enabled for SNMP.
3.4.9 Messenger
Select "Messenger" to set up pop-up message alerts via the Windows Messenger service (not MSN). The user must enable the "Messenger" service in Windows (Start → Control Panel → Administrative Tools → Services → Messenger); event logs can then be received. It allows up to 3 messenger addresses. Users can choose the event log levels; by default only WARNING and ERROR event logs are enabled.
3.4.10 System Log Server
Select "System log server" to set up a system log server for trapping RAID subsystem event logs; remote logging is supported. Remote logging means the event log can be forwarded from the RAID subsystem to another running syslogd, which can then log it to a disk file.
1. Server IP/hostname: enter the IP address or hostname of the system log server.
2. Port: enter the UDP port number on which the system log server is listening. The default port number is 514.
3. Facility: select the facility for event log.
4. Event level: Select the event log options
5. Click “Confirm” button.
Server side (Linux – RHEL4)
The following steps are used to log RAID subsystem messages to a disk file. In the following, all messages with facility "local1" and event level WARNING or higher are logged to /var/log/raid.log.
1. Flush the firewall.
2. Add the following line to /etc/syslog.conf:
   local1.warn    /var/log/raid.log
3. Send a HUP signal to the syslogd process; this makes syslogd perform a re-initialization. All open files are closed, the configuration file (default /etc/syslog.conf) is reread, and the syslog(3) facility is started again.
4. Activate the system log daemon and restart it.
5. Check the syslog port number, e.g., 10514.
6. Change the controller's system log server port number to match.
7. syslogd will then direct the selected event log messages to /var/log/raid.log when it receives them from the RAID subsystem.
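As a quick check of the setup above, Python's standard library can emit a test message with facility local1 at WARNING level. The host and port below are assumptions matching this example; adjust them to your environment:

    import logging
    import logging.handlers

    # Send one WARNING-level message with facility local1 over UDP.
    handler = logging.handlers.SysLogHandler(
        address=("localhost", 10514),    # replace with your syslog server IP/port
        facility=logging.handlers.SysLogHandler.LOG_LOCAL1,
    )
    logger = logging.getLogger("raid-syslog-test")
    logger.addHandler(handler)
    logger.warning("test message - should appear in /var/log/raid.log")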
For more detailed features, please check the syslogd and syslog.conf man pages (e.g., man syslogd).

Server side (Windows 2003)
Windows does not provide a system log server; users need to find or purchase a client from a third party. The URL below provides an evaluation version which may be used for testing first:
http://www.winsyslog.com/en/
1. Install winsyslog.exe
2. Open "Interactives Syslog Server"
3. Check the syslog port number, e.g., 10514
4. Change controller’s system log server port number as above
5. Start logging on "Interactives Syslog Server"
3.4.11 Event Log
Select "Event log" to view the event messages. Press the "Filter" button to choose what is displayed. Pressing the "Download" button saves the whole event log as a text file named "log-ModelName-SerialNumber-Date-Time.txt" (e.g., log-XL-RAID-2804ISSA-A00021-20061011-114718.txt). Pressing the "Clear" button clears the event log; pressing the "Mute" button stops the alarm if the system is alerting.
Event logs can be displayed in three places: on the Web UI/Console event log page, in pop-up windows on the Web UI, and on the LCM. By default, WARNING and ERROR event logs are enabled on the Web UI and LCM; the pop-up is disabled by default.
The event log is displayed in reverse order, so the latest entry is on the first page. The event log is actually saved on the first four hard drives, each holding one copy; with four copies per controller, users can check the event log in most cases even when there are failed disks.
3.5 iSCSI Config
“iSCSI config” selection is for the setup of “Entity Property”, “NIC”, “Node”, “Session”, and “CHAP account”.
3.5.1 Entity Property
Select "Entity property" to view the entity name of the XL-RAID-2804ISSA and to set up an "iSNS IP" for the iSNS service. iSNS is the abbreviation of Internet Storage Name Service. Add the IP address of an iSNS server to the iSNS servers list to which the iSCSI initiator service can send queries.
3.5.2 NIC
Select "NIC" to change the IP addresses of the iSCSI data ports. There are two gigabit LAN ports for transmitting data. Each must be assigned one IP address in multi-homed mode unless link aggregation or trunking mode has been selected; if the ports are set in link aggregation or trunking mode, the second line does not appear on the screen.
Users can change the IP address by clicking the blue square button in the "DHCP" column. There are 2 selections: DHCP (get an IP address from the DHCP server) or static IP.
Default gateway can be changed by clicking the blue square button in the “Gateway” column. There is only one default gateway. The row of No. 1 would be the default gateway.
Link aggregation setting can be changed by clicking the blue square button in the “Aggregation” column.
1. Multi-homed: The two LAN ports are connected to two different networks. Multi-homed is the default.
2. Trunking: Trunking links the 2 LAN ports together into a single link, which can multiply the bandwidth. The ports are aggregated to one IP. If you click the blue square button in the "No. 1" row, the IP setting is set back to the default value after setting trunking, and vice versa. For detailed setup steps, please refer to Appendix E: Trunking/LACP setup instructions.
3. LACP: Link Aggregation Control Protocol (LACP) can balance the bandwidth. The IP setting concept is the same as trunking. For detailed setup steps, please refer to Appendix E: Trunking/LACP setup instructions.
CAUTION! For backplane solutions, each gigabit LAN port must have an IP address in a different subnet.
3.5.3 Node
Select "Node" to view the target name for the iSCSI initiator. Press "Auth" to enable CHAP authentication. CHAP is the abbreviation of Challenge Handshake Authentication Protocol, a strong authentication method used on point-to-point links for user login. The authentication server sends the client a challenge, and the client answers with a one-way hash computed from the shared secret and the challenge, so the username and password are never transmitted in clear text.
To use CHAP authentication, please follow these steps:
1. Click the blue square button in the "Auth" column.
2. Select "CHAP".
3. Go to \iSCSI config\CHAP account to create an account and password.
NOTE: After setting CHAP, the initiator in host/server should be set the same Account/Password. Otherwise, user cannot login.
Select “None” to disable the authentication method.
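For reference, below is a minimal sketch of the CHAP challenge-response computation defined by RFC 1994, which iSCSI CHAP follows; the account secret shown is hypothetical.

```python
# CHAP response per RFC 1994: MD5(identifier || secret || challenge).
# The secret never crosses the wire; only the challenge and the hash do.
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = os.urandom(16)               # issued by the target
secret = b"example-chap-secret"          # hypothetical CHAP account password
print(chap_response(1, secret, challenge).hex())
```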
3.5.4 Session
Enter the “Session” function to display iSCSI session and connection information, including the following items:
1. Host (Initiator Name)
2. Security Protocol
3. TCP Port Number
4. Error Recovery Level
5. Error Recovery Count
6. Details of the authentication status and the source IP address and port number.
3.5.5 CHAP Account
NOTE: Only one CHAP account can be created.
Enter “CHAP account” function to create a CHAP account for authentication.
3.6 Volume Configuration
“Volume config” selection is for the setup of volume configurations including “Physical disk”, “Volume group”, “User data volume”, “Cache volume”, and “Logical unit” functions.
3.6.1 Volume Relationship Diagram
The diagram below describes the relationship of the RAID components. One VG (Volume Group) consists of a set of UDVs (User Data Volumes) and owns one RAID level attribute. Each VG can be divided into several UDVs; the UDVs from one VG share the same RAID level but may have different volume capacities. Each UDV is associated with one specific CV (Cache Volume) to execute its data transactions, and each CV can have a different cache memory size, set by the user. A LUN is the logical volume/unit, which users can access through SCSI commands.
43
[Figure: Volume relationship diagram. LUN 1, LUN 2, and LUN 3 map to UDV 1, UDV 2, and a snapshot UDV; the UDVs belong to one VG built from PD 1, PD 2, PD 3, and a DS (dedicated spare); each UDV uses either the global CV or a dedicated CV, both allocated from RAM.]
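The same hierarchy can be summarized in code. Below is a minimal sketch with hypothetical names, not the controller's data structures, that mirrors the diagram: PDs form a VG with one RAID level, UDVs are carved from the VG, and each UDV binds to a cache volume.

```python
# Sketch of the volume hierarchy: PD -> VG -> UDV -> CV/LUN.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CacheVolume:
    size_mb: int
    dedicated: bool = False        # False = the global CV

@dataclass
class UDV:
    name: str
    size_gb: int
    cache: CacheVolume

@dataclass
class VolumeGroup:
    name: str
    raid_level: str                # shared by every UDV in the VG
    pd_slots: List[int]
    udvs: List[UDV] = field(default_factory=list)

global_cv = CacheVolume(size_mb=100)
vg = VolumeGroup("VG-R5", "RAID 5", pd_slots=[1, 2, 3])
vg.udvs.append(UDV("UDV-1", size_gb=20, cache=global_cv))
vg.udvs.append(UDV("UDV-2", size_gb=30, cache=CacheVolume(40, dedicated=True)))
```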
3.6.2 Physical Disk
Enter “Physical disk” to view the status of hard drives inserted in the system. The following are operation tips:
1. Multiple select: select one or more checkboxes in front of the slot numbers, or select the checkbox at the top left corner to select all. Checking it again selects none.
2. The list box disappears if there is no VG, or if only VGs of RAID 0 or JBOD exist, because these RAID levels cannot have a dedicated spare disk.
3. The three functions “Free disc”, “Global spares”, and “Dedicated spares” support multiple select, too.
4. The operations on the other web pages (e.g., the volume config pages for VG, UDV, CV, and LUN) are similar to the previous steps.
PD column description:
Slot
The position of the hard drive. Slot numbers begin from the left at the front side. The blue square button next to the slot number is the “More Information” indicator; it shows the details of the hard drive.
WWN
World Wide Name.
Size (GB)
Capacity of hard drive.
VG Name
Related volume group name.
Status
The status of hard drive.
“GOOD” the hard drive is good.
“DEFECT” the hard drive has bad blocks.
“FAIL” the hard drive cannot work in the respective volume.
Status 1
“RD” RAID Disk. This hard drive has been set to RAID.
“FR” FRee disk. This hard drive is free for use.
“DS” Dedicated Spare. This hard drive has been set as the dedicated spare of a VG.
“GS” Global Spare. This hard drive has been set as a global spare for all VGs.
“RS” ReServe. The hard drive contains VG information but cannot be used. This may be caused by an incomplete VG set or by hot-plugging the disk at run time. To protect the data on the disk, the status is changed to reserve. The disk can be reused after setting it to “FR” manually.
Status 2
“R” Rebuild. The hard drive is rebuilding.
“M” Migration. The hard drive is migrating.
Speed
3.0G From the SATA ATAPI standard: the disk supports the ATAPI IDENTIFY PACKET DEVICE command and can achieve the Serial ATA Gen-2 signaling speed (3.0Gbps).
1.5G From the SATA ATAPI standard: the disk supports the ATAPI IDENTIFY PACKET DEVICE command and can achieve the Serial ATA Gen-1 signaling speed (1.5Gbps).
Unknown The disk does not support the above command, so the speed is defined as unknown.
PD operations description:
FREE DISC
Make the selected hard drive(s) free for use.
GLOBAL SPARES
Set the selected hard drive(s) as global spares for all VGs.
DEDICATED SPARES
Set the selected hard drive(s) as dedicated spares for the selected VG.
The XL-RAID-2804ISSA also provides an HDD auto spin-down function to save power. It is disabled by default and can be set up on the physical disk page, too.
3.6.3 Volume Group
Enter “Volume group” to view the status of each volume group.
VG column description:
No.
Number of volume group. The blue square button next to the No. is “More Information” indication. It shows the details of the volume group.
Name
Volume group name. The blue square button next to the Name is
“Rename” function.
Total(GB)
Total capacity of this volume group.
Free(GB)
Free capacity of this volume group.
#PD
The number of physical disks of the volume group.
#UDV
The number of user data volumes related to the volume group.
Status
The status of volume group.
“Online” the volume group is online.
“Fail” the volume group has failed.
Status 1
“DG” DeGraded mode. The volume group is not complete; the cause could be a missing disk or a disk failure.
Status 2
“R” Rebuild. The volume group is rebuilding.
Status 3
“M” Migration. The volume group is migrating.
RAID
The RAID level of the volume group. The blue square button next to the RAID level is the “Migrate” function. Clicking “Migrate” can add disk(s) for expansion or change the RAID level of the volume group.
VG operations description:
CREATE
Create a volume group.
DELETE
Delete a volume group.
3.6.4 User Data Volume
Enter “User data volume” function to view the status of each user data volume.
UDV column description:
No.
Number of this user data volume. The blue square button below the UDV No. is the “More Information” indicator; it shows the details of the user data volume.
Name
Name of this user data volume. The blue square button below the UDV name is the “Rename” function.
Size(GB)
Total capacity of this user data volume. The blue square button below the size is the “Extend” function.
Status
The status of this user data volume.
“Online” the user data volume is online.
“Fail” the user data volume has failed.
Status 1
“WT” Write Through. “WB” Write Back.
The blue square button below Status 1 is the “Set read/write mode” function.
Status 2
“HI” HIgh priority. “MD” MiD priority. “LO” LOw priority.
The blue square button below Status 2 is the “Set Priority” function.
Status 3
“I” the user data volume is initializing. “R” the user data volume is rebuilding.
Status 4
“M” the user data volume is migrating.
R %
The progress ratio of initialization or rebuilding, in percent.
RAID
The RAID level that the user data volume is using.
#LUN
The number of LUN(s) the user data volume is attached to.
Snapshot(GB)
The user data volume size used for snapshots. The blue square button next to the snapshot size is the “Resize” function, which decides the snapshot space. The blue square button next to the resize function is the “Auto snapshot” function, which sets the frequency of taking snapshots. The numbers mean “Free snapshot space” / “Total snapshot space”. For a snapshot UDV, this column shows the creation time instead.
VG name
The VG name of the user data volume.
CV (MB)
The cache volume of the user data volume.
UDV operations description:
ATTACH LUN
Attach to a LUN.
SNAPSHOT
Choose a UDV to execute snapshot.
CREATE
Create a user data volume.
DELETE
Delete a user data volume.
3.6.5 Cache Volume
Enter the “Cache volume” function to view the status of cache volumes. The global cache volume is a default cache volume created automatically after power on; it cannot be deleted. The size of the global cache is based on the RAM size: the total memory size minus the system usage.
CV column description:
No.
Number of the Cache volume. The blue square button next to the CV No. is “More Information” indication. It shows the details of the cache volume.
Size(MB)
Total capacity of the cache volume. The blue square button next to the CV size is the “Resize” function; the CV size can be adjusted.
UDV Name
Name of the UDV.
CV operations description:
CREATE
Create a cache volume.
DELETE
Delete a cache volume.
3.6.6 Logical Unit Number
Enter the “Logical unit” function to view the status of the attached logical unit numbers of each UDV. A LUN can be attached by clicking the attach button. “Host” must be given an initiator node name for access control, or the wildcard “*”, which means every host can access the volume. Choose a LUN and permission, then click the confirm button. Up to 256 LUNs can be assigned per system (controller). For host connections, the limit is 32 hosts per system (controller) at the same time, and 8 per user data volume (UDV), which means up to 8 hosts can access the same UDV simultaneously.
LUN operations description:
ATTACH
Attach a logical unit number to a user data volume.
DETACH
Detach a logical unit number from a user data volume.
The matching rules of access control are applied from top to bottom in sequence. For example: there are two rules for the same UDV, one is “*” with LUN 0; the other is “iqn.host1” with LUN 1. Another host, “iqn.host2”, can log in because it matches rule 1 (“*”).
Access is denied when there is no matching rule. A sketch of this first-match evaluation follows.
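Below is a minimal sketch, with hypothetical names, of this top-to-bottom, first-match evaluation; it is an illustration, not the controller's code.

```python
# First-match access control: rules are checked in order; "*" matches any
# initiator; no match means access denied.
from typing import List, Optional, Tuple

Rule = Tuple[str, int]   # (initiator pattern, LUN)

def match_lun(initiator: str, rules: List[Rule]) -> Optional[int]:
    for pattern, lun in rules:          # evaluated top to bottom
        if pattern == "*" or pattern == initiator:
            return lun                  # first match wins
    return None                         # no match: access denied

rules = [("*", 0), ("iqn.host1", 1)]
print(match_lun("iqn.host2", rules))    # 0  (matches rule 1, "*")
print(match_lun("iqn.host1", rules))    # 0  (rule 1 still matches first)
```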
3.6.7 Examples
The following are examples of creating volumes. Example 1 creates two UDVs sharing the same CV (the global cache volume) and sets a global spare disk. Example 2 creates two UDVs, one sharing the global cache volume and the other using a dedicated cache volume, and sets a dedicated spare disk.
Example 1
Example 1 creates two UDVs in one VG; each UDV uses the global cache volume. The global cache volume is created automatically after the system boots up, so no action is needed to set up a CV. Then a global spare disk is set. Finally, all of them are deleted.
Step 1: Create VG (Volume Group). To create the volume group, please follow the procedures:
1. Select “/ Volume config / Volume group”.
2. Click “Create”.
3. Input a VG name, choose a RAID level from the picklist, choose the RAID PD slot(s), and then proceed to the next step.
4. Check the outcome and confirm if all setups are correct.
5. Done. A VG has been created.
Step 2: Create UDV (User Data Volume). To create a user data volume, please follow the procedures.
1. Select “/ Volume config / User data volume”.
2. Click “Create”.
3. Input a UDV name, choose a VG name, and input a size for the UDV; decide the stripe height, block size, read/write mode, and priority; finally click the confirm button.
4. Done. A UDV has been created.
5. Repeat the steps to create another UDV.
Step 3: Attach a LUN to the UDV. There are two methods to attach a LUN to a UDV:
1. In “/ Volume config / User data volume”, press the attach button.
2. In “/ Volume config / Logical unit”, press the attach button.
The procedures are as follows:
1. Select a UDV.
2. Input “Host”, which is an initiator node name for access control, or fill in the wildcard “*”, which means every host can access this volume. Choose a LUN and permission, then click the confirm button.
3. Done.
NOTE: The matching rules of access control are from top to bottom by sequence.
Step 4: Set global spare disk.
To set global spare disks, please follow the procedures.
1. Select “/ Volume config / Physical disk”.
2. Select the free disk(s) by clicking the checkboxes of the rows, then click “Global spares” to set them as global spares.
3. A “GS” icon shows up in the Status 1 column.
Step 5: Done. The volumes can now be used as iSCSI disks. To delete the UDVs and VG, please follow the steps below.
Step 6: Detach LUN from UDV. In “/ Volume config / Logical unit”,
1. Select the LUN(s) by clicking the checkboxes of the rows, then click “Detach”. A confirmation page will pop up.
2. Choose “OK”.
3. Done.
Step 7: Delete UDV (User Data Volume). To delete the user data volume, please follow the procedures:
1. Select “/ Volume config / User data volume”.
2. Select UDVs by clicking the checkbox of the row.
3. Click “Delete”. A confirmation page will pop up.
4. Choose “OK”.
5. Done. Then, the UDVs are deleted.
IMPORTANT! When deleting a UDV, the attached LUN(s) related to this UDV are detached automatically as well.
Step 8: Delete VG (Volume Group).
To delete the volume group, please follow the procedures:
1. Select “/ Volume config / Volume group”.
2. Select a VG by clicking the checkbox of the row. Make sure there is no UDV on this VG; otherwise, the UDV(s) on this VG must be deleted first.
3. Click “Delete”. A confirmation page will pop up.
4. Choose “OK”.
5. Done. The VG has been deleted.
IMPORTANT! Deleting a VG will succeed only when all of the related UDV(s) in this VG have been deleted. Otherwise, an error occurs when deleting the VG.
Step 9: Free global spare disk.
To free global spare disks, please follow the procedures.
1. Select “/ Volume config / Physical disk”.
2. Select the global spare disk by clicking the checkbox of the row, then click “Free disc” to free the disk.
Step 10: Done, all volumes have been deleted.
Example 2
Example 2 creates two UDVs in one VG. One UDV shares the global cache volume; the other uses a dedicated cache volume. The dedicated cache volume must be created first; then it can be used when creating the UDV. Finally, all of them are deleted.
Each UDV is associated with one specific CV (cache volume) to execute its data transactions, and each CV can have a different cache memory size. If there is no special request for a UDV, it uses the global cache volume; alternatively, the user can create a dedicated cache for an individual UDV manually. With a dedicated cache volume, performance is not affected by the other UDVs’ data access.
The total cache size depends on the RAM size, and all of it is assigned to the global cache automatically. To create a dedicated cache volume, the first step is to cut down the global cache size to make room for it. Please follow the procedures.
Step 1: Create dedicated cache volume.
1. Select “/ Volume config / Cache volume”.
2. If there is no free space for creating a new dedicated cache volume, cut down the global cache size first by clicking the blue square button in the size column. After resizing, return to the cache volume page.
3. Click “Create” to enter the setup page.
4. Fill in the size and confirm.
5. Done. A new dedicated cache volume has been set.
NOTE: The minimum size of global cache volume is 40MB. The minimum size of dedicated cache volume is 20MB.
Step 2: Create VG (Volume Group).
Please refer to Step 1 of Example 1 to create the VG.
Step 3: Create UDV (User Data Volume).
Please refer to Step 2 of Example 1 to create the UDV. To create a user data volume with a dedicated cache volume, please follow the procedures below.
1. Select “/ Volume config / User data volume”.
2. Click “Create”.
3. Input a UDV name, choose a VG name, select the dedicated cache created in Step 1, and input the size for the UDV; decide the stripe height, block size, read/write mode, and priority; finally click the confirm button.
4. Done. A UDV using dedicated cache has been created.
Step 4: Attach LUN to UDV. Please refer to Step 3 of Example 1 to attach LUN.
Step 5: Set dedicated spare disk. To set dedicated spare disks, please follow the procedures:
1. Select “/ Volume config / Physical disk”.
2. Select a VG from the list box, then select the free disk(s) and click “Dedicated spares” to set them as dedicated spares for the selected VG.
3. A “DS” icon shows up in the Status 1 column.
Step 6: Done. The PDs can be used as iSCSI disks.
To delete the UDVs and VG, please follow the steps below.
Step 7: Detach LUN from UDV.
Please refer to Step 6 of Example 1 to detach LUN.
Step 8: Delete UDV (User Data Volume).
Please refer to Step 7 of Example 1 to delete UDV.
Step 9: Delete VG (Volume Group).
Please refer to Step 8 of Example 1 to delete VG.
Step 10: Free dedicated spare disk.
To free dedicated spare disks, please follow the procedures:
1. Select “/ Volume config / Physical disk”.
2. Select the dedicated spare disk by clicking the checkbox of the row, then click “Free disc” to free the disk.
Step 11: Delete dedicated cache volume. To delete the cache volume, please follow the procedures:
1. Select “/ Volume config / Cache volume”.
2. Select a CV by clicking the checkbox of the row.
3. Click “Delete”. A confirmation page will pop up.
4. Choose “OK”.
5. Done. The CV has been deleted.
WARNING! Global cache volume cannot be deleted.
Step 12: Done, all volumes have been deleted.
3.7 Enclosure Management
The “Enclosure management” function allows managing enclosure information, including the “SES config”, “Hardware monitor”, “S.M.A.R.T.”, and “UPS” functions. For enclosure management, there are many sensors for different purposes, such as temperature sensors, voltage sensors, hard disk sensors, fan sensors, power sensors, and LED status. Because the hardware characteristics differ among these sensors, they have different polling intervals. Below are the detailed polling intervals:
1. Temperature sensors: 1 minute.
2. Voltage sensors: 1 minute.
3. Hard disk sensors: 10 minutes.
4. Fan sensors: 10 seconds. If an error occurs three times in a row, the controller sends an ERROR event log (see the sketch after this list).
5. Power sensors: 10 seconds. If an error occurs three times in a row, the controller sends an ERROR event log.
6. LED status: 10 seconds.
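As an illustration of items 4 and 5, below is a minimal sketch (hypothetical names, not the controller's firmware) of a poller that raises an ERROR event only after three consecutive failed readings, so a single glitch does not trigger an alert.

```python
# Poll every 10 seconds; require 3 consecutive errors before alerting.
import time

def poll_with_threshold(read_sensor, interval_s=10, threshold=3):
    consecutive_errors = 0
    while True:
        if read_sensor():                 # True = reading is OK
            consecutive_errors = 0
        else:
            consecutive_errors += 1
            if consecutive_errors == threshold:
                print("ERROR event log: sensor failed 3 times in a row")
        time.sleep(interval_s)
```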
3.7.1 SES Configuration
SES stands for SCSI Enclosure Services, one of the enclosure management standards. Enter the “SES config” function to enable or disable SES management.
The SES client software is available at the following web site: SANtools: http://www.santools.com/
3.7.2 Hardware Monitor
Enter the “Hardware monitor” function to view the current voltage and temperature information.
If “Auto shutdown” has been checked, the system will shut down automatically when a voltage or temperature is out of the normal range. For better data protection, please check “Auto shutdown”.
For better protection, and to avoid a single short period of high temperature triggering auto shutdown, the XL-RAID-2804ISSA uses multiple-condition judgments for auto shutdown. Below are the details of when auto shutdown is triggered; a sketch of the judgment follows the list.
1. There are three temperature sensors on the controller: one on the core processor, one on the PCI-X bridge, and one on the daughter board. The XL-RAID-2804ISSA checks each sensor every 30 seconds. When one of these sensors stays over its high temperature limit for 3 continuous minutes, auto shutdown is triggered immediately.
2. The core processor temperature limit is 85℃. The PCI-X bridge temperature limit is 80℃. The daughter board temperature limit is 80℃.
3. If the high temperature situation does not last for 3 minutes, the XL-RAID-2804ISSA will not do an auto shutdown.
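Here is a minimal sketch of that multiple-condition judgment, using the limits quoted above; the sensor names and sampling structure are illustrative, not the controller's firmware.

```python
# Each sensor is sampled every 30 s; auto shutdown fires only when a sensor
# has been over its limit for 3 continuous minutes (6 samples).
LIMITS = {"core": 85, "pci_x_bridge": 80, "daughter_board": 80}  # degrees C
SAMPLES_FOR_SHUTDOWN = 6   # 6 samples x 30 s = 3 minutes

def should_shutdown(history: dict) -> bool:
    """history maps sensor name -> list of recent readings, newest last."""
    for name, limit in LIMITS.items():
        recent = history[name][-SAMPLES_FOR_SHUTDOWN:]
        if len(recent) == SAMPLES_FOR_SHUTDOWN and all(t > limit for t in recent):
            return True
    return False

history = {"core": [86] * 6, "pci_x_bridge": [70] * 6, "daughter_board": [70] * 6}
print(should_shutdown(history))   # True: core over 85 C for 3 minutes
```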
3.7.3 Hard Drive S.M.A.R.T. Function Support
S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) is a diagnostic tool for hard drives that gives advance warning of drive failures. S.M.A.R.T. gives users the chance to take action before a possible drive failure.
S.M.A.R.T. measures many attributes of the hard drive continuously and identifies drives that are close to going out of tolerance. Advance notice of a possible hard drive failure lets users back up the hard drive or replace it, which is much better than a hard drive crash while it is writing data or rebuilding a failed hard drive.
Entering the “S.M.A.R.T.” function displays the S.M.A.R.T. information of the hard drives. The first number is the current value; the number in parentheses is the threshold value. Threshold values differ among hard drive vendors; please refer to the vendors’ specifications for details. A sketch of how to read these pairs follows.
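As an illustration of reading the “current (threshold)” pairs, below is a minimal sketch with made-up attribute values; an attribute is considered failing when its normalized value falls to or below the vendor threshold.

```python
# Hypothetical S.M.A.R.T. attributes: (current value, vendor threshold).
attributes = {
    "read_error_rate":        (100, 16),
    "reallocated_sector_cnt": (5, 5),
    "spin_up_time":           (253, 24),
}

for name, (current, threshold) in attributes.items():
    state = "FAILING" if current <= threshold else "OK"
    print(f"{name}: {current} ({threshold}) -> {state}")
```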
3.7.4 UPS
Enter the “UPS” function to set up a UPS (Uninterruptible Power Supply).
Currently, the system only supports, and communicates with, the smart-UPS function of APC (American Power Conversion Corp.) UPS units. Please check the details at http://www.apc.com/.
First, connect the system and the APC UPS via RS-232 for communication. Then set up the shutdown values for when the power is gone. UPS units from other vendors can work fine, but they have no such communication function. A sketch of the shutdown decision follows the field descriptions below.
UPS Type
Select UPS Type. Choose Smart-UPS for APC, None for other vendors or no UPS.
Shutdown Battery Level (%)
When the battery level falls below the set level, the system will shut down. Setting the level to “0” disables this UPS function.
Shutdown Delay (s)
If a power failure occurs and power does not return within the set delay, the system will shut down. Setting the delay to “0” disables this function.
Shutdown UPS
Select “ON”: when the power is gone, the UPS will shut itself down after the system has shut down successfully; after the power comes back, the UPS will start working and notify the system to boot up. “OFF” will not.
Status
The status of UPS.
“Detecting…” “Running” “Unable to detect UPS” “Communication lost” “UPS reboot in progress” “UPS shutdown in progress” “Batteries failed. Please change them NOW!”
Battery Level (%)
Current percentage of battery level.
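Putting the two settings together, here is a minimal sketch, with hypothetical values, of the shutdown decision the descriptions above imply.

```python
# Shut down when battery drops below the set level, or when mains power has
# been gone longer than the set delay; a setting of 0 disables that check.
def ups_should_shutdown(battery_pct, seconds_on_battery,
                        shutdown_level=10, shutdown_delay=60):
    if shutdown_level and battery_pct < shutdown_level:
        return True
    if shutdown_delay and seconds_on_battery > shutdown_delay:
        return True
    return False

print(ups_should_shutdown(battery_pct=8,  seconds_on_battery=30))   # True
print(ups_should_shutdown(battery_pct=50, seconds_on_battery=120))  # True
```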
3.8 System Maintenance
The “Maintenance” function allows operation of the system functions, including “Upgrade” to the latest firmware, “Info” to show the system version, “Reset to default” to reset all controller configuration values to the original settings, “Config import & export” to export and import all controller configuration except the VG/UDV and LUN settings, and “Shutdown” to either reboot or shut down the system.
3.8.1 Upgrade
Enter the “Upgrade” function to upgrade the firmware. Please prepare the new firmware file, named “xxxx.bin”, on a local hard drive, then press the browse button to select the file. Click the confirm button; a message pops up: “Upgrade system now? If you want to downgrade to the previous FW later, please export your system config first”. Click “Cancel” to export the system config first, or click “OK” to start upgrading the firmware.
When upgrading, a progress bar is shown. After the upgrade finishes, the system must be rebooted manually.
3.8.2 Info
NOTE: When upgrading FW, the XL-RAID-2804ISSA only accepts newer versions, for which compatibility is guaranteed; if the customer changes the FW to an older version, the VG/UDV/LUN config may be lost.
Entering the “Info” function displays the system type, FW number, CPU type, RAM size, and serial number.
3.8.3 Reset to Default
The “Reset to default” function allows the user to reset the controller to the factory default settings.
3.8.4 Config Import & Export
The “Config import & export” function allows the user to save the system’s configurable values (the export function) and to apply a saved configuration (the import function). The volume config settings are available in the export function but not in the import function, which avoids conflicts and data deletion between two subsystems: if one controller already has valuable data on its disks and the user imports a configuration by mistake, importing the volume settings would clear the current data. Below is a table of the configuration available in the import & export function.
System name
Controller system name
IP address
Web UI IP address with (1) DHCP enabled, (2) IP, (3) Subnet mask, (4) Gateway, (5) DNS
iSCSI
iSCSI data port settings with (1) Aggregation, (2) iSNS, (3) CHAP, (4) per-LAN-port IP, subnet mask, gateway, and MTU
Login config
Admin account login config with (1) Auto logout setting, (2) Admin login lock
Password
Admin password value with (1) Current password, (2) Old password
Date
Time Zone setting
Mail
Event log mail settings with (1) Mail_from address, (2) SMTP server, (3) Authentication, (4) Mail account ID, (5) Mail password, (6) Mail_To_1 address, (7) Mail_To_2 address, (8) Mail_To_3 address, (9) Event log filter setting
SNMP
SNMP setting with (1) SNMP trap address 1, (2) SNMP trap address 2, (3) SNMP trap address 3, (4) Community setting, (5) Event log filter function
Event log
Event log filter setting with (1) Web UI and console UI setting, (2) LCM setting, (3) Web UI pop up event setting
SES config
SES management setting
vol_temp
Auto shutdown setting
UPS
UPS setting with (1) UPS type, (2) Shutdown Battery Level, (3) Shutdown Delay, (4) Shutdown UPS
Physical disk
Not available in import function.
Current controller hard disk status with (1) size, (2) block size, (3) VG, (4) hard status
Physical disk spindown
Not available in import function.
Hard disk auto spindown setting
Volume group
Not available in import function.
VG setting with (1) VG name, (2) size, (3) number of physical disks, (4) number of UDVs, (5) RAID level
Cache volume
Not available in import function.
Cache volume setting with (1) size, (2) percentage
User data volume
Not available in import function.
UDV setting with (1) UDV name, (2) size, (3) VG name, (4) cache volume, (5) Stripe height, (6) block size, (7) write through or write back, (8) priority
Logical unit
Not available in import function.
LUN setting with (1) host name, (2) target name, (3) UDV name, (4) LUN number, (5) permission
3.8.5 Shutdown
Entering the “Shutdown” function displays the “Reboot” and “Shutdown” buttons. Before powering off, it is better to press “Shutdown” to flush the data from the cache to the physical disks. This step gives better data protection.
3.9 Logout
For security reasons, use the “Logout” function when no one is operating the system. To log in again, enter the username and password.
Chapter 4 Advanced Operation
4.1 Rebuild
If one physical disk of a VG that is set to a protected RAID level (e.g., RAID 3, RAID 5, or RAID 6) fails or has been unplugged/removed, the VG status changes to degraded mode. The system then searches for a spare disk to rebuild the degraded VG into a complete one. It uses a dedicated spare disk as the rebuild disk first, then a global spare disk.
The XL-RAID-2804ISSA supports an Auto-Rebuild function. When the RAID level allows disk failures that the VG is protected against, such as RAID 3, RAID 5, RAID 6, etc., the XL-RAID-2804ISSA starts Auto-Rebuild according to the scenario below:
Take RAID 6 for example:
1. When there is no global spare disk or dedicated spare disk in the system, the XL-RAID-2804ISSA stays in degraded mode and waits until (A) one disk is assigned as a spare disk, or (B) the failed disk is removed and replaced with a new clean disk; then Auto-Rebuild starts. The new disk becomes a spare disk for the original VG automatically.
a. If the newly added disk is not clean (it carries other VG information), it is marked as RS (reserved) and the system does not start auto-rebuild.
b. If the disk does not belong to any existing VG, it becomes an FR (free) disk and the system starts the Auto-Rebuild function.
c. If the user only removes the failed disk and plugs the same failed disk into the same slot again, auto-rebuild also starts in this case. However, rebuilding onto the same failed disk may later put customer data at risk because of the disk’s unstable status. For better data protection, we suggest that customers never rebuild onto the same failed disk.
2. When there are enough global spare disk(s) or dedicated spare disk(s) for the degraded array, the XL-RAID-2804ISSA starts Auto-Rebuild immediately. In RAID 6, if another disk failure happens during rebuilding, the XL-RAID-2804ISSA starts the above Auto-Rebuild scenario as well. The Auto-Rebuild feature only works at runtime; it does not work during downtime, so it does not conflict with the “Roaming” function. A sketch of the spare-selection order follows below.
In degraded mode, the VG status is “DG”. When rebuilding, the status of the PD/VG/UDV is “R”, and “R%” in the UDV page displays the progress in percentage. After rebuilding completes, “R” and “DG” disappear and the VG becomes complete again.
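Below is a minimal sketch, with hypothetical slot names, of the spare-selection order described above; it is an illustration, not the controller's actual code.

```python
# A degraded VG takes a dedicated spare first, then a global spare; with no
# spare available, the VG stays degraded and waits.
def pick_rebuild_disk(vg, dedicated_spares, global_spares):
    for disk in dedicated_spares.get(vg, []):
        return disk                     # dedicated spares win
    if global_spares:
        return global_spares[0]
    return None                         # stay degraded and wait

dedicated = {"VG-R6": ["slot 5"]}
print(pick_rebuild_disk("VG-R6", dedicated, ["slot 8"]))  # slot 5
print(pick_rebuild_disk("VG-R5", dedicated, ["slot 8"]))  # slot 8
```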
IMPORTANT! The list box does not exist if there is no VG, or if only VGs of RAID 0 or JBOD exist, because the user cannot set a dedicated spare disk for these RAID levels.
Rebuild is sometimes called recover; the two terms have the same meaning. The following table shows the relationship between RAID levels and rebuild.
RAID 0
Disk striping. No protection of data. The VG fails if any hard drive fails or is unplugged.
RAID 1
Disk mirroring over 2 disks. RAID 1 allows one hard drive to fail or be unplugged. One new hard drive must be inserted into the system and rebuilt for the VG to become complete again.
N-way mirror
An extension of RAID 1. It keeps N copies of the disk. N-way mirror allows N-1 hard drives to fail or be unplugged.
RAID 3
Striping with parity on a dedicated disk. RAID 3 allows one hard drive to fail or be unplugged.
RAID 5
Striping with interspersed parity over the member disks. RAID 5 allows one hard drive to fail or be unplugged.
RAID 6
Two-dimensional parity protection over the member disks. RAID 6 allows two hard drives to fail or be unplugged. If two hard drives need rebuilding at the same time, the first one is rebuilt, then the other, in sequence.
RAID 0+1
Mirroring of the member RAID 0 volumes. RAID 0+1 allows two hard drives to fail or be unplugged, but only in the same array.
RAID 10
Striping over the member RAID 1 volumes. RAID 10 allows two hard drives to fail or be unplugged, but only in different arrays.
RAID 30
Striping over the member RAID 3 volumes. RAID 30 allows two hard drives to fail or be unplugged, but only in different arrays.
RAID 50
Striping over the member RAID 5 volumes. RAID 50 allows two hard drives to fail or be unplugged, but only in different arrays.
RAID 60
Striping over the member RAID 6 volumes. RAID 60 allows four hard drives to fail or be unplugged, at most two in each array.
JBOD
The abbreviation of “Just a Bunch Of Disks”. No protection of data. The VG fails if any hard drive fails or is unplugged.
4.2 VG Migration and Expansion
To migrate the RAID level, please follow the procedures below. If a VG migrates to the same RAID level as the original VG, the operation is an expansion.
1. Select “/ Volume config / Volume group”.
2. Decide which VG to migrate, then click the blue square button in the RAID column next to the RAID level.
3. Change the RAID level from the drop-down list. A pop-up appears if there are not enough HDDs to support the new RAID level; click the button to add hard drives, then go back to the setup page. When migrating to a lower RAID level, for example from RAID 6 to RAID 0, the controller evaluates whether the operation is safe and displays “Sure to migrate to a lower protection array?” to warn the user.
4. Double-check the settings of the RAID level and the RAID PD slots. If there is no problem, proceed.
5. Finally, a confirmation page shows the detailed RAID info. If there is no problem, click the confirm button to start the migration. The controller also pops up the message “Warning: power lost during migration may cause damage of data!” to warn the user: if the power is lost abnormally during migration, the data is at high risk.
6. The migration starts; it can be seen in “Status 3” of the VG as a running square and an “M”. In “/ Volume config / User data volume”, an “M” is displayed in “Status 4” and the completion percentage of the migration in “R%”.
IMPORTANT! For migration/expansion, the total size of the new VG must be larger than or equal to that of the original VG. Expanding to the same RAID level with the same hard disks as the original VG is not allowed.
If the user sets up a migration incorrectly, the controller pops up warning messages. Below are the details of the messages; a sketch of these checks follows the list.
"Invalid VG ID": Source VG is invalid.
"Degrade VG not allowed": Source VG is degraded.
"Initializing/rebuilding operation's going": Source VG is initializing or rebuilding.
"Migration operation's going": Source VG is already in migration.
"Invalid VG raidcell parameter": Invalid configuration. E.g., New VG's capacity < Old VG's capacity, New VG's stripe size < Old VG's stripe size. Or New VG's configuration == Old VG's configuration.
"Invalid PD capacity": New VG's minimum PD capacity < Old VG's minimum PD capacity.
WARNING! VG Migration cannot be executed during rebuild or UDV extension.
4.3 UDV Extension
To extend UDV size, please follow the procedures.
1. Select “/ Volume config / User data volume”.
2. Decide which UDV to extend, then click the blue square button in the Size column next to the number.
3. Change the size. The size must be larger than the original; then click the confirm button to start the extension.
4. The extension starts. If the UDV needs initialization, an “I” is displayed in “Status 3” and the completion percentage of the initialization in “R%”.
NOTE: The extended UDV size must be larger than the original size.
WARNING! UDV Extension cannot be executed during rebuild or migration.
4.4 Snapshot/Rollback
The XL-RAID-2804ISSA’s Snapshot-on-the-box captures the instant state of data in the target volume in a logical sense. The underlying logic is copy-on-write: whenever a write occurs after the time of capture, the to-be-overwritten data is first moved out to a certain location. That location, named the snap UDV, is essentially a new UDV, which can be attached to a LUN and thus provisioned to a host as a disk just like other ordinary UDVs in the system. The rollback function restores the data back to the state of any previously captured point in time, for whatever unfortunate reason (e.g., virus attack, data corruption, human error, and so on). A snap UDV is allocated within the same VG in which the snapshot is taken; we suggest reserving 20% of the VG size or more for snapshot space.
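Below is a minimal, block-level sketch of this copy-on-write logic; the structures are hypothetical, not the controller's implementation.

```python
# Before a block of the live volume is overwritten, its old contents are
# saved once into the snapshot area, so the snapshot still reads the data
# as it was at capture time.
class CowSnapshot:
    def __init__(self, volume):
        self.volume = volume           # live blocks: index -> data
        self.saved = {}                # blocks preserved for the snapshot

    def write(self, block, data):
        if block not in self.saved:    # first overwrite after capture
            self.saved[block] = self.volume[block]
        self.volume[block] = data

    def read_snapshot(self, block):
        return self.saved.get(block, self.volume[block])

vol = {0: b"old0", 1: b"old1"}
snap = CowSnapshot(vol)
snap.write(0, b"new0")
print(snap.read_snapshot(0))   # b'old0' - the captured state
print(vol[0])                  # b'new0' - the live volume
```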
NOTE: Snapshot/rollback features need at least 512MB RAM. Please also refer to the RAM certification list in Appendix A.
4.4.1 Create Snapshot Volume
To take a snapshot of the data, please follow the procedures.
1. Select “/ Volume config / User data volume”.
2. Choose a UDV for the snapshot by clicking the blue square button in the “Snapshot (GB)” column; this leads to a setup page. The maximum snapshot space is 2TB, so the space can be set to no more than 2048GB.
3. Set up the size for the snapshot. A size of at least 20% of the UDV size is suggested; then click the confirm button. The display returns to the UDV page, and the size shows in the snapshot column. It may not be the same as the number entered, because some space is reserved for snapshot internal usage. There will be two numbers in the “Snapshot (GB)” column; they mean “Free snapshot space” / “Total snapshot space”.
4. Choose a UDV by clicking the checkbox of the row, then click “SNAPSHOT”.
5. A snapshot UDV is created, stamped with the date and time the snapshot of the chosen UDV was taken. The snapshot UDV size is the same as the chosen UDV, regardless of how much space the actual snapshot data occupies.
6. Attach a LUN to the UDV; please refer to section 3.6.6 Logical Unit Number for more details.
7. Done. It can be used as a disk.
Snapshots have the following constraints:
1. Minimum RAM size of enabling snapshot function is 512MB.
2. For performance and future rollback, the system saves snapshots in a named sequence. For example: three snapshots have been taken and named “snap1” (first), “snap2”, and “snap3” (last). When deleting “snap2”, both “snap1” and “snap2” will be deleted, because “snap1” is related to “snap2”.
3. For resource concern, the max number of snapshots is 32.
4. If the snapshot space is full, the controller sends a warning message that the space is full, and the newly taken snapshot replaces the oldest snapshot in rotation sequence.
5. A snap UDV cannot be migrated; when the related VG is migrated, the snap UDV will fail.
6. A snap UDV cannot be extended.
4.4.2 Auto Snapshot
The snapshot copies can be taken manually or by schedule such as hourly or daily. Please follow the procedures.
1. Select “/ Volume config / User data volume”.
2. Create a snapshot space. Please refer to section 4.4.1 for more detail.
3. Click the “Auto snapshot” button in the “Snapshot (GB)” column to set up auto snapshot.
4. The auto snapshot can be set at the period of monthly, weekly, daily, or hourly.
5. Done. It will take snapshots automatically.
4.4.3 Rollback
The data in a snapshot UDV can be rolled back to the original UDV. Please follow the procedures.
1. Select “/ Volume config / User data volume”.
2. Take one or more snapshots. Please refer to section 4.4.1 for more detail.
3. Click the rollback button in the “Snapshot (GB)” column to roll back the data; the user can recover the data to the time the snapshot was taken.
The rollback function has the following constraints:
1. Minimum RAM size of enabling rollback function is 512MB.
2. When doing a rollback, the original UDV cannot be accessed for a while; the system connects the original UDV and the snap UDV, and then starts the rollback.
3. While data is being rolled back from the snap UDV to the original UDV, the original UDV can be accessed and its data appears as if the rollback had already finished. At the same time, the other related snap UDV(s) cannot be accessed.
4. After the rollback process finishes, the other related snap UDV(s) are deleted, and the snapshot space is set to 0.
IMPORTANT! Before executing a rollback, it is better to dismount the file system in the OS first, to flush the data from the cache to the disks.
4.5 Disk Roaming
Physical disks can be re-sequenced in the same system, or all the physical disks of a VG can be moved from system 1 to system 2. This is called disk roaming. Disk roaming has the following constraints:
1. Check the firmware of the two systems first. It is better that both have the same firmware version, or that the target system has a newer one.
2. All physical disks of the related VG should be moved from system 1 to system 2 together. The configuration of both the VG and the UDV will be kept, but the LUN configuration will be cleared to avoid conflicts with system 2.
4.6 Support Microsoft MPIO and MC/S
MPIO (Multi-Path Input/Output) and MC/S (Multiple Connections per Session) both use multiple physical paths to create logical “paths” between the server and the storage device. If one or more of these components fails, causing a path to fail, multi-path logic uses an alternate path for I/O so that applications can still access their data.
The Microsoft iSCSI initiator supports the multi-path function. Please follow the procedures to use the MPIO feature.
1. A host with dual LAN ports connects cables to XL-RAID-2804ISSA.
2. Create a VG/UDV, attach this UDV to the host.
3. When installing the Microsoft iSCSI initiator, please install the MPIO driver at the same time.
4. Log on to the target separately on each port. When logging on to the target, check “Enable multi-path”. Please refer to Appendix D, step 6.
5. The MPIO mode can be selected under Targets > Details > Devices > Advanced.
6. Rescan disk.
7. There will be one disk running MPIO.
Appendix
A. Certification List
RAM
RAM Spec: 184 pins, DDR333 (PC2700), Reg. (registered) or UB (unbuffered), ECC or non-ECC, from 64MB to 1GB, 32-bit or 64-bit data bus width, x8 or x16 devices, 9 to 11 bits column address.
Vendor: Model
Unigen: UG732D6688KN-DH, 256MB DDR333 (UNBUFFERED) with Hynix
Unigen: UG732D6688KS-DH, 256MB DDR333 (UNBUFFERED, LOW PROFILE) with Hynix
Unigen: UG732D7588KZ-DH, 256MB DDR333 (REG, ECC) with Elpida
Unigen: UG764D6688LS-DH, 512MB DDR333 (UNBUFFERED, LOW PROFILE) with Hynix
Unigen: UG764D7588KZ-DH, 512MB DDR333 (REG, ECC) with Elpida
Unigen: UG7128D7588LZ-DH, 1GB DDR333 (REG, ECC) with Hynix
Unigen: UG7128D7488LN-GJF, 1GB DDR400 (ECC) with Hynix
Unigen: UG7128D7588LZ-GJF, 1GB DDR400 (ECC, REG) with Elpida
Unigen: UG7128D7588LZ-GJF, 1GB DDR400 (ECC, REG) with Hynix
Unigen: UG718D6688LN-GJF, 1GB DDR400 (Non-ECC) with Hynix
Unigen: UG718D688LN-GJF, 1GB DDR400 (Non-ECC) with Elpida
ATP: AG28L72T8SHC4S, 1GB DDR400 (ECC) with Samsung
ATP: AG28L64T8SHC4S, 1GB DDR400 (Non-ECC) with Samsung
ATP: AG64L72T8SQC4S, 512MB DDR400 (ECC) with Samsung
ATP: AJ56K72G8BJE6S, 2GB DDR2-667 (Unbuffered, ECC) with Samsung
Transcend: TS256MLQ64V6U, 2GB DDR2-667 (Unbuffered) with Samsung
Unigen: UG25T7200M8DU-5AM, 2GB DDR2-533 (Unbuffered, ECC) with Micron
iSCSI Initiator (Software)
OS: Software/Release Number

Microsoft Windows: Microsoft iSCSI Software Initiator Version 2.03.
System requirements:
1. Windows XP Professional with SP2
2. Windows 2000 Server with SP4
3. Windows Server 2003 with SP1
4. Windows Server 2003 R2

Linux: The iSCSI initiators differ among Linux kernels.
1. For Red Hat Linux 9 (Kernel 2.4), install linux-iscsi-3.6.3.tar
2. For Red Hat Enterprise Linux 3 (Kernel 2.4), install linux-iscsi-3.6.3.tar
3. For Red Hat Enterprise Linux 4 (Kernel 2.6), use the built-in iSCSI initiator in kernel 2.6.9-34.ELsmp

Mac: ATTO XTEND 2.0x SAN / Mac iSCSI Initiator.
System requirements: Mac OS X v10.3.5 or later
The ATTO initiator is not free; please contact your local distributor for it.
iSCSI HBA card
Vendor: Model
Adaptec: 7211C (Gigabit, 1 port, TCP/IP offload, iSCSI offload)
QLogic: QLA4010C (Gigabit, 1 port, TCP/IP offload, iSCSI offload)
QLogic: QLA4052C (Gigabit, 2 ports, TCP/IP offload, iSCSI offload)
For detailed setup steps of Qlogic QLA4010C, please refer to Appendix G: QLogic QLA4010C setup instructions.
NIC
Vendor: Model
Intel: PWLA8490MT (Gigabit, 1 port, TCP/IP offload)
Intel: PWLA8492MT (Gigabit, 2 ports, TCP/IP offload)
Intel: PWLA8494MT (Gigabit, 4 ports, TCP/IP offload)
GbE Switch
Vendor: Model
Dell: PowerConnect 5324
Dell: PowerConnect 2724
HP: ProCurve 1800-24G
SATA hard drive
Vendor: Model
Hitachi: Deskstar 7K250, HDS722580VLSA80, 80GB, 7200RPM, SATA, 8M
Hitachi: Deskstar 7K80, HDS728080PLA380, 80GB, 7200RPM, SATA-II, 8M
Hitachi: Deskstar 7K500, HDS725050KLA360, 500G, 7200RPM, SATA-II, 16M
Hitachi: Deskstar 7K80, HDS728040PLA320, 40G, 7200RPM, SATA-II, 2M
Maxtor: DiamondMax Plus 9, 6Y080M0, 80G, 7200RPM, SATA, 8M
Maxtor: DiamondMax 11, 6H500F0, 500G, 7200RPM, SATA 3.0Gb/s, 16M
Samsung: SpinPoint P80, HDSASP0812C, 80GB, 7200RPM, SATA, 8M
Seagate: Barracuda 7200.7, ST380013AS, 80G, 7200RPM, SATA, 8M
Seagate: Barracuda 7200.7, ST380817AS, 80G, 7200RPM, SATA, 8M, NCQ
Seagate: Barracuda 7200.8, ST3400832AS, 400G, 7200RPM, SATA, 8M, NCQ
Seagate: Barracuda 7200.9, ST3500641AS, 500G, 7200RPM, SATA-II, 16M
Seagate: NL35, ST3400633NS, 400G, 7200RPM, SATA 3Gb/s, 16M
Seagate: NL35, ST3500641NS, 500G, 7200RPM, SATA 3Gb/s, 16M
Western Digital: Caviar SE, WD800JD, 80GB, 7200RPM, SATA, 8M
Western Digital: Caviar SE, WD1600JD, 160GB, 7200RPM, SATA, 8M
Western Digital: Raptor, WD360GD, 36.7GB, 10000RPM, SATA, 8M
Western Digital: Caviar RE2, WD4000YR, 400GB, 7200RPM, SATA, 16M, NCQ
B. Event Notifications
PD/S.M.A.R.T. events
Level
Type
Description
Info
Disk inserted
Info: Disk <slot> is inserted.
Info
Disk removed
Info: Disk <slot> is removed.
Warning
S.M.A.R.T. threshold exceed condition
Warning: Disk <slot> S.M.A.R.T. threshold exceed condition occurred for attribute of
1. read error rate
2. spin up time
3. reallocated sector count
4. seek error rate
5. spin up retries
6. calibration retries
Warning
S.M.A.R.T. information
Warning: Disk <slot>: failure to get S.M.A.R.T. information.
Physical HW events
Level
Type
Description
Warning
ECC error
Warning: Single-bit ECC error is detected.
Error
ECC error
Error: Multi-bit ECC error is detected.
Info
ECC DIMM Installed
Info: ECC Memory is installed.
Info
Non-ECC installed
Info: Non-ECC Memory is installed.
Error
Host chip failure
Error: Host channel chip failed.
Error
Drive chip failure
Error: Drive channel chip failed.
Warning
Ethernet port failure
Warning: GUI Ethernet port failed.
HDD IO Events
Level
Type
Description
Warning
Disk error
Error: Disk <slot> read block error.
Warning
Disk error
Error: Disk <slot> writes block error.
Warning
HDD failure
Error: Disk <slot> is failed.
Warning
Channel error
Error: Disk <slot> IO incomplete.
SES Events
Level
Type
Description
Info
SES load conf. OK
Info: SES configuration has been loaded.
Warning
SES Load Conf. Failure
Error: Failed to load SES configuration. The SES device is disabled.
Info
SES is disabled
Info: The SES device is disabled.
Info
SES is enabled
Info: The SES device is enabled
Environmental events
Level
Type
Description
Info
Admin Login OK
Info: Admin login from <IP or serial console> via <Web UI or Console UI>.
Info
Admin Logout OK
Info: Admin logout from <IP or serial console> via <Web UI or Console UI>.
Info
iSCSI data port login
Info: iSCSI login from <IQN> (<IP:Port Number>) succeeds.
Warning
iSCSI data port login reject
Warning: iSCSI login from <IQN> (<IP:Port Number>) was rejected, reason of
1. initiator error
2. authentication failure
3. authorization failure
4. target not found
5. unsupported version
6. too many connections
7. missing parameter
8. session does not exist
9. target error
10. out of resources
11. unknown
Error
Thermal critical
Error: System Overheated!!! The system will do the auto shutdown immediately.
Warning
Thermal warning
Warning: System temperature is a little bit higher.
Error
Voltage critical
Error: System voltages failed!!! The system will do the auto shutdown immediately
Warning
Voltage warning
Warning: System voltage is a little bit higher/lower.
Info
PSU restore
Info: Power <number> is restored to work.
Error
PSU Fail
Error: Power <number> is out of work.
Info
Fan restore
Info: Fan <number> is restored to work.
Error
Fan Fail
Error: Fan <number> is out of work.
Error
Fan non-exist
Error: System cooling fan is not installed.
Error
AC Loss
Error: AC loss for the system is detected.
Info
UPS Detection OK
Info: UPS detection succeeded.
Warning
UPS Detection Fail
Warning: UPS detection failed.
Error
UPS power low
Error: UPS Power Low!!! The system will do the auto shutdown immediately.
Info
Mgmt Lan Port Active
Info: Management LAN Port is active.
Warning
Mgmt Lan Port Failed
Warning: Fail to manage the system via the LAN Port.
Info
RTC Device OK
Info: RTC device is active.
Warning
RTC Access Failed
Warning: Fail to access RTC device
Info
Reset Password
Info: Reset Admin Password to default.
Info
Reset IP
Info: Reset network settings set to default.
System config events
Level
Type
Description
Info
Sys Config. Defaults Restored
Info: Default system configurations restored.
Info
Sys NVRAM OK
Info: The system NVRAM is active.
Error
Sys NVRAM IO Failed
Error: Can’t access the system NVRAM.
Warning
Sys NVRAM is full
Warning: The system NVRAM is full.
System maintenance events
Level
Type
Description
Info
Firmware Upgraded
Info: System firmware has been upgraded
Error
Firmware Upgraded Failed
Error: System firmware upgrade failed.
Info
System reboot
Info: System has been rebooted
Info
System shutdown
Info: System has been shutdown.
Info
System Init OK
Info: System has been initialized OK.
Error
System Init Failed
Error: System cannot be initialized in the last boot up.
LVM events
Level
Type
Description
Info
VG Created OK
Info: VG <name> has been created.
Warning
VG Created Fail
Warning: Fail to create VG <name>.
Info
VG Deleted
Info: VG <name> has been deleted.
Info
UDV Created OK
Info: UDV <name> has been created.
Warning
UDV Created Fail
Warning: Fail to create UDV <name>.
Info
UDV Deleted
Info: UDV <name> has been deleted.
Info
UDV Attached OK
Info: UDV <name> has been LUN-attached.
Warning
UDV Attached Fail
Warning: Fail to attach LUN to UDV <name>.
Info
UDV Detached OK
Info: UDV <name> has been detached.
Warning
UDV Detached Fail
Warning: Fail to detach LUN from Bus <number> SCSI_ID <number> LUN <number>.
Info
UDV_OP Rebuild Started
Info: UDV <name> starts rebuilding.
Info
UDV_OP Rebuild Finished
Info: UDV <name> completes rebuilding.
Warning
UDV_OP Rebuild Fail
Warning: Fail to complete UDV <name> rebuilding.
Info
UDV_OP Migrate Started
Info: UDV <name> starts migration.
Info
UDV_OP Migrate Finished
Info: UDV <name> completes migration.
Warning
UDV_OP Migrate Failed
Warning: Fail to complete UDV <name> migration.
Warning
VG Degraded
Warning: VG <name> is under degraded mode.
Warning
UDV Degraded
Warning: UDV <name> is under degraded mode.
Info
UDV Init OK
Info: UDV <name> completes the initialization.
Warning
UDV_OP Stop Initialization
Warning: Fail to complete UDV <name> initialization.
Warning
UDV IO Fault
Error: IO failure for stripe number <number> in UDV <name>.
Warning
VG Failed
Error: Fail to access VG <name>.
Warning
UDV Failed
Error: Fail to access UDV <name>.
Warning
Global CV Adjustment Failed
Error: Fail to adjust the size of the global cache.
Info
Global Cache
Info: The global cache is OK.
Error
Global CV Creation Failed
Error: Fail to create the global cache.
Info
UDV Rename
Info: UDV <name> has been renamed as <name>.
Info
VG Rename
Info: VG <name> has been renamed as <name>.
Info
Set VG Dedicated Spare Disks
Info: Assign Disk <slot> to be VG <name> dedicated spare disk.
Info
Set Global Disks
Info: Assign Disk <slot> to the Global Spare Disks.
Info
UDV Read-Only
Info: UDV <name> is a read-only volume.
Info
WRBK Cache Policy
Info: Use the write-back cache policy for UDV <name>.
Info
WRTHRU Cache Policy
Info: Use the write-through cache policy for UDV <name>.
Info
High priority UDV
Info: UDV <name> is set to high priority.
Info
Mid Priority UDV
Info: UDV <name> is set to mid priority.
Info
Low Priority UDV
Info: UDV <name> is set to low priority.
Error
PD configuration read/write error
Error: PD <slot> lba <#> length <#> config <read | write> failed.
Error
PD read/write error
Error: PD <#> lba <#> length <#> <read | write> error.
Error
UDV recoverable read/write error
Error: UDV <name> stripe <#> PD <#> lba <#> length <#> <read | write> recoverable
Error
UDV unrecoverable read/write error
Error: UDV <#> stripe <#> PD <#> lba <#> length <#> <read | write> unrecoverable
Info
UDV stripe rewrite start/fail/succeed
Info: UDV <name> stripe <#> rewrite column bitmap <BITMAP> <started | failed | finished>.
Snapshot events
Level
Type
Description
Warning
Allocate Snapshot Mem Failed
Warning: Fail to allocate snapshot memory for UDV <name>.
Warning
Allocate Snapshot Space Failed
Warning: Fail to allocate snapshot space for UDV <name>.
Warning
Reach Snapshot Threshold
Warning: The threshold of the snapshot of UDV <name> has been reached.
Info
Snapshot Delete
Info: The snapshot of UDV <name> has been deleted.
Info
Snapshot replaced
Info: The oldest snapshot version of UDV <name> has been replaced by the new one.
Info
Take a Snapshot
Info: Take a snapshot to UDV <name>.
Info
Set Size for Snapshot
Info: Set the snapshot size of UDV <name> to <number> GB.
Info
Snapshot rollback start
Info: The snapshot of UDV <name> rollback start.
Info
Snapshot rollback finish
Info: The snapshot of UDV <name> rollback finish.
C. Known Issues
1. Microsoft MPIO is not supported on Windows XP or Windows 2000 Professional.
Workaround: use Windows Server 2003 or Windows 2000 Server to run MPIO.
D. Microsoft iSCSI Initiator
Here are the step-by-step instructions to set up the Microsoft iSCSI Initiator. Please visit the Microsoft website for the latest iSCSI initiator; the following setup may not use the latest version.
1. Run Microsoft iSCSI Initiator version 2.03. Please see Figure D.1.
2. Click “Discovery”.
3. Click “Add”. Input IP address or DNS name of iSCSI storage device. Please see Figure
D.2.
4. Click “OK”.
5. Click “Targets”.
6. Click “Log On”. Check “Enable multi-path” if running MPIO.
7. Click “Advanced” if CHAP information is needed.
8. Click “OK”. The status will be “Connected”.
9. Done. The initiator can now connect to an iSCSI disk.
The following procedure logs off the iSCSI device.
a. Click “Details”.
b. Check the identifier to be logged off.
c. Click “Log off”.
d. Done. The iSCSI device has been logged off successfully.
E. Trunking/LACP Setup Instructions
Here are the step-by-step instructions to set up Trunking and LACP. There are two kinds of scenarios for Trunking/LACP.
The setup instructions are in the following figures.
Create a VG with RAID 5, using 3 HDDs.
Create a UDV by using the RAID 5 VG.
Run Microsoft iSCSI initiator 2.03 and check the Initiator Node Name.
Attach the LUN to the RAID 5 UDV. Input the Initiator Node Name in the Host field.
Done, please check the settings.
Check the iSCSI settings. The IP address of iSCSI data port 1 is 192.168.11.229; port 1 is used for Trunking or LACP. Click the blue square in the “Aggregation” field to set Trunking or LACP.
Select “Trunking”. (If LACP is needed instead, see the LACP steps below.)
Now, the setting is in Trunking mode.
Enable the switch Trunking function for ports 21 and 23. Below is an example on a Dell PowerConnect 5324. Go to Figure E.14 for the next step.
Select “LACP”. (If Trunking is needed instead, see the Trunking steps above.)
Now, the setting is in LACP mode.
Enable the switch LACP function for ports 21 and 23. Below is an example on a Dell PowerConnect 5324.
Add Target Portals in Microsoft iSCSI initiator 2.03.
Input the IP address of iSCSI data port 1 (192.168.11.229, as mentioned on the previous page).
Click “Targets” to log on.
Log on.
Click “Advanced”.
Select Target Portal to iSCSI data port 1 (192.168.11.229). Then click “OK”.
The setting is completed.
Run “Computer Management” in Windows and make sure the disks are available. The disks can then be tested for performance with IOMETER.
F. MPIO and MC/S Setup Instructions
Here are the step-by-step instructions to set up MPIO. There are two kinds of scenarios for MPIO; please see Figure F.1. For the XL-RAID-2804ISSA, scenario 2 is suggested for better performance.
Network diagram of MPIO.
The setup instructions are in the following figures.
Create a VG with RAID 5, using 3 HDDs.
Create a UDV by using RAID 5 VG.
Run Microsoft iSCSI initiator 2.03 and check the Initiator Node Name.
Attach the LUN to the RAID 5 UDV. Input the Initiator Node Name in the Host field.
The volume config setting is done.
Check the iSCSI settings. For example, the IP address of iSCSI data port 1 is 192.168.11.229, and that of port 2 is 192.168.12.229.
Add Target Portals on Microsoft iSCSI initiator 2.03.
Input the IP address of iSCSI data port 1 (192.168.11.229, as mentioned on the previous page).
Add the second Target Portal on Microsoft iSCSI initiator 2.03.
Input the IP address of iSCSI data port 2 (192.168.12.229, as mentioned on the previous page).
The initiator setting is done.