Proware EP-4423 User Manual

Fibre to SAS/SATA
RAID Subsystem
User Manual
Revision 1.0
Table of Contents
Preface
Before You Begin
Safety Guidelines
Controller Configurations
Packaging, Shipment and Delivery
Unpacking the Shipping Carton
Chapter 1 Product Introduction
1.1 Technical Specifications
1.2 RAID Concepts
1.3 Fibre Functions
1.3.1 Overview
1.3.2 Four ways to connect (FC Topologies)
1.3.3 Basic Elements
1.3.4 LUN Masking
1.4 Array Definition
1.4.1 Raid Set
1.4.2 Volume Set
1.5 High Availability
1.5.1 Creating Hot Spares
1.5.2 Hot-Swap Disk Drive Support
1.5.3 Hot-Swap Disk Rebuild
Chapter 2 Identifying Parts of the RAID Subsystem
2.1 Main Components
2.1.1 Front View
2.1.1.1 LCD Display Panel LEDs
2.1.1.2 Disk Drive Status Indicators
2.1.1.3 LCD IP Address in Dual Controller Mode
2.1.2 Rear View
2.2 Controller Module
2.2.1 Controller Module Panel
2.3 Power Supply / Fan Module (PSFM)
2.3.1 PSFM Panel
2.4 Turbo Fan (Fan 06-1)
2.5 Expander Module
2.5.1 Expander Module Panel
2.6 Disk Tray
2.6.1 Disk Drive Installation
Chapter 3 Getting Started with the Subsystem
3.1 Installing the Rails and Mounting into Rack
3.2 Preparing the RAID Subsystem
3.3 Powering On
3.4 Powering Off
Chapter 4 RAID Configuration Utility Options
4.1 Configuration through Telnet
4.2 Configuration through the LCD Panel
4.2.1 Menu Diagram
4.3 Configuration through web browser-based proRAID Manager
Chapter 5 RAID Management
5.1 Quick Function
5.1.1 Quick Create
5.2 RAID Set Functions
5.2.1 Create RAID Set
5.2.2 Delete RAID Set
5.2.3 Expand RAID Set
5.2.4 Offline RAID Set
5.2.5 Rename RAID Set
5.2.6 Activate Incomplete RAID Set
5.2.7 Create Hot Spare
5.2.8 Delete Hot Spare
5.2.9 Rescue Raid Set
5.3 Volume Set Function
5.3.1 Create Volume Set
5.3.2 Create Raid 30/50/60
5.3.3 Delete Volume Set
5.3.4 Modify Volume Set
5.3.4.1 Volume Set Expansion
5.3.4.2 Volume Set Migration
5.3.5 Check Volume Set
5.3.6 Schedule Volume Check
5.3.7 Stop Volume Check
5.4 Physical Drive
5.4.1 Create Pass-Through Disk
5.4.2 Modify a Pass-Through Disk
5.4.3 Delete Pass-Through Disk
5.4.4 Identify Enclosure
5.4.5 Identify Selected Drive
5.5 System Controls
5.5.1 System Configuration
5.5.2 HDD Power Management
5.5.3 Fibre Channel Config
5.5.4 EtherNet Configuration
5.5.5 Alert By Mail Configuration
5.5.6 SNMP Configuration
5.5.7 NTP Configuration
5.5.8 View Events / Mute Beeper
5.5.9 Generate Test Event
5.5.10 Clear Event Buffer
5.5.11 Modify Password
5.5.12 Upgrade Firmware
5.5.13 Shutdown Controller
5.5.14 Restart Controller
5.6 Information Menu
5.6.1 RAID Set Hierarchy
5.6.2 SAS Chip Information
5.6.3 System Information
5.6.4 Hardware Monitor
Chapter 6 Maintenance
6.1 Upgrading the RAID Controller’s Cache Memory
6.1.1 Replacing the Memory Module
6.2 Upgrading the RAID Controller’s Firmware
6.3 Replacing Subsystem Components
6.3.1 Replacing a Disk Drive
6.3.2 Replacing the RAID Controller Module
6.3.3 Replacing the Power Supply Fan Module
6.3.4 Replacing the Turbo Fan (Fan 06-1)
6.3.5 Replacing the Expander Module
6.3.6 Replacing the Front Panel
6.3.7 Replacing the Bottom Board
Appendix 1 Disk Power Off/On Function in Web GUI
Preface
About this manual
This manual provides information about the hardware features, installation, and configuration of the RAID subsystem, and describes how to use the storage management software. The information in this manual has been reviewed for accuracy, but because environments, operating systems, and settings vary, it is not covered by the product warranty. Information and specifications are subject to change without notice.
This manual numbers every section so that information can be found quickly and conveniently. The following icons mark details and information to be considered while going through this manual:
Copyright
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written consent.
Trademarks
All products and trade names used in this document are trademarks or registered trademarks of their respective owners.
Changes
The material in this document is for information only and is subject to change without notice.
IMPORTANT!
This is important information that the user must remember.
WARNING!
These are warnings that the user must follow to avoid errors and bodily injury during hardware and software operation of the subsystem.
CAUTION:
These are cautions that the user must be aware of to prevent damage to the subsystem and/or its components.
NOTES:
These notes contain useful information and tips that deserve the user's attention during subsystem operation.
Before You Begin
Before proceeding with this manual, read the following safety guidelines. Notes about the subsystem's controller configuration and the product packaging and delivery are also included here.
Safety Guidelines
To provide reasonable protection against harm to the user and to obtain maximum performance, be aware of the following safety guidelines, particularly when handling hardware components:

Upon receiving the product:
Place the product in its proper location. Do not try to lift it by yourself; two or more persons are needed to lift or move the product. Make sure somebody is around for immediate assistance to avoid accidental drops.
Handle the product with care to avoid drops that may cause damage. Always use correct lifting procedures.

Upon installing the product:
Ambient temperature at the installation site is very important and must not exceed 30°C. Because of seasonal climate changes, regulate the site temperature so that it does not exceed the allowed ambient temperature.
Before plugging in any power cords, cables, or connectors, make sure that the power switches are turned off. Disconnect all power connections before removing a power supply module from the enclosure.
Power outlets must be accessible to the equipment. All external connections should be made using shielded cables and, as much as possible, should not be made with bare hands; anti-static gloves are recommended.
When installing each component, secure all mounting screws and locks, and make sure all screws are fully tightened. Follow all listed procedures in this manual for reliable performance.
Controller Configurations
This RAID subsystem supports dual controller configurations. A single controller configuration can also be used, depending on the user's requirements.
This manual discusses the single controller configuration.
Packaging, Shipment and Delivery
Before removing the subsystem from the shipping carton, visually inspect the physical condition of the shipping carton.
Unpack and verify that the contents of the shipping carton are complete and in good condition.
Exterior damage to the shipping carton may indicate that the contents of the carton are damaged.
If any damage is found, do not remove the components; contact the dealer where you purchased the subsystem for further instructions.
Unpacking the Shipping Carton
The shipping package contains the following:

NOTE: If any damage is found, contact the dealer or vendor for assistance.

RAID Subsystem Unit
Forty-two (42) Disk Trays
Two (2) power cords
Two (2) Fibre optic cables
Two (2) RJ45 Ethernet cables
Four (4) external RJ11-to-DB9 serial cables
Key of Top Cover
Key of Disk Tray
User Manual
Chapter 1 Product Introduction
The EPICa RAID Subsystem
The EP-4423 series RAID subsystem features 8Gb FC-AL host performance to increase system efficiency and performance. It features high-capacity expansion, with 42 hot-swappable SAS2/SATA3 hard disk drive bays in a 19-inch 4U rackmount unit, scaling to a maximum storage capacity in the terabyte range.
Exceptional Manageability
The firmware-embedded web browser-based RAID manager allows local or remote management and configuration.
The firmware-embedded SMTP manager monitors all system events and notifies users automatically.
The firmware-embedded SNMP agent allows remote monitoring of events via LAN, with no SNMP agent required on the host.
Menu-driven front panel display.
Innovative modular architecture.
Features
Supports RAID levels 0, 1, 10(1E), 3, 5, 6, 30, 50, 60 and JBOD
Supports online array roaming
Online RAID level/stripe size migration
Online capacity expansion and RAID level migration simultaneously
Supports global and dedicated hot spares
Online Volume Set expansion
Supports multiple array enclosures per host connection
Greater than 2TB per volume set (64-bit LBA support)
Greater than 2TB per disk drive
Supports 4K bytes/sector for Windows, up to 16TB per volume set
Disk scrubbing/array verify scheduling for automatic repair of all configured RAID sets
Login record in the event log with IP address and service (http, telnet and serial)
Supports intelligent power management to save energy and extend service life
Supports NTP protocol to synchronize the RAID controller clock over the on-board LAN port
Max 128 LUNs (volume sets)
Transparent data protection for all popular operating systems
Instant availability and background initialization
Supports S.M.A.R.T., NCQ and OOB staggered spin-up capable drives
Supports hot spare and automatic hot rebuild
Local audible event notification alarm
Redundant flash image for high availability
Real-time clock support
1.1 Technical Specifications
RAID Controller: 8Gb FC - 6Gb SAS
Controller: Redundant
Host Interface: Eight FC-AL (8Gb/s)
Disk Interface: 6Gb/s SAS, 6Gb/s SATA
SAS Expansion: Four 6Gb/s SAS (SFF-8088); direct attached: 42 disks; expansion: up to 126 disks
Processor Type: 800MHz RAID-On-Chip storage processor
Cache Memory: 2GB~8GB DDR2-800 ECC Registered SDRAM
Battery Backup: Optional
Management Port Support: Yes
RAID Levels: 0, 1, 10, 3, 5, 6, 30, 50, 60 and JBOD
Array Groups: Up to 128
LUNs: Up to 128
Hot Spare: Yes
Drive Roaming: Yes
Online Rebuild: Yes
Variable Stripe Size: Yes
E-mail Notification: Yes
Online Capacity Expansion, RAID Level/Stripe Size Migration: Yes
Online Array Roaming: Yes
Online Consistency Check: Yes
SMTP Manager and SNMP Agent: Yes
Redundant Flash Image: Yes
Instant Availability and Background Initialization: Yes
S.M.A.R.T. Support: Yes
MAID 2.0: Yes
Bad Block Auto-Remapping: Yes
Platform: Rackmount
Form Factor: 4U
Number of Hot-Swap Trays: 42
Tray Lock: Yes
Disk Status Indicator: Access / Fail LED
Backplane: SAS2 / SATA3
Number of PS/Fan Modules: 1100W x 2 w/PFC
Number of Fans: 11
Power Requirements: AC 90V ~ 254V full range, 50Hz ~ 60Hz
Relative Humidity: 10% ~ 85% non-condensing
Operating Temperature: 10°C ~ 40°C (50°F ~ 104°F)
Physical Dimensions: 810(L) x 482.6(W) x 176(H) mm
Weight (without disks): 50 kg
1.2 RAID Concepts
RAID Fundamentals
The basic idea of RAID (Redundant Array of Independent Disks) is to combine multiple inexpensive disk drives into an array of disk drives to obtain performance, capacity and reliability that exceeds that of a single large drive. The array of drives appears to the host computer as a single logical drive.
Five types of array architectures, RAID 1 through RAID 5, were originally defined; each provides disk fault-tolerance with different compromises in features and performance. In addition to these five redundant array architectures, it has become popular to refer to a non-redundant array of disk drives as a RAID 0 array.
Disk Striping
Fundamental to RAID technology is striping. This is a method of combining multiple drives into one logical storage unit. Striping partitions the storage space of each drive into stripes, which can be as small as one sector (512 bytes) or as large as several megabytes. These stripes are then interleaved in a rotating sequence, so that the combined space is composed alternately of stripes from each drive. The specific type of operating environment determines whether large or small stripes should be used.
Most operating systems today support concurrent disk I/O operations across multiple drives. However, in order to maximize throughput for the disk subsystem, the I/O load must be balanced across all the drives so that each drive can be kept busy as much as possible. In a multiple drive system without striping, the disk I/O load is never perfectly balanced. Some drives will contain data files that are frequently accessed and some drives will rarely be accessed.
By striping the drives in the array with stripes large enough so that each record falls entirely within one stripe, most records can be evenly distributed across all drives. This keeps all drives in the array busy during heavy load situations, allows all drives to work concurrently on different I/O operations, and thus maximizes the number of simultaneous I/O operations that can be performed by the array.
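As an illustration of this mapping, the following minimal Python sketch locates a logical block address (LBA) in a simple rotating stripe layout. It is a simplified model for illustration only, not the subsystem's internal algorithm, and the function and parameter names are hypothetical:

    # Map a logical block address (LBA) to a (drive, stripe row, offset)
    # location in a simple rotating stripe layout (illustrative model only).
    def locate_block(lba, num_drives, stripe_size_blocks):
        stripe_index = lba // stripe_size_blocks  # which stripe, counting across all drives
        offset = lba % stripe_size_blocks         # position inside that stripe
        drive = stripe_index % num_drives         # stripes rotate across the drives
        row = stripe_index // num_drives          # stripe row within the chosen drive
        return drive, row, offset

    # With 4 drives and 128-block stripes, consecutive stripes land on
    # successive drives before wrapping back to drive 0:
    for lba in (0, 128, 256, 384, 512):
        print(lba, locate_block(lba, num_drives=4, stripe_size_blocks=128))

Note how a stripe size larger than a typical record keeps each record on a single drive, which is what lets independent records be serviced by different drives in parallel.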
Definition of RAID Levels
RAID 0 is typically defined as a group of striped disk drives without parity or data redundancy. RAID 0 arrays can be configured with large stripes for multi-user environments or small stripes for single-user systems that access long sequential records. RAID 0 arrays deliver the best data storage efficiency and performance of any array type. The disadvantage is that if one drive in a RAID 0 array fails, the entire array fails.
RAID 1, also known as disk mirroring, is simply a pair of disk drives that store duplicate data but appear to the computer as a single drive. Although striping is not used within a single mirrored drive pair, multiple RAID 1 arrays can be striped together to create a single large array consisting of pairs of mirrored drives. All writes must go to both drives of a mirrored pair so that the information on the drives is kept identical. However, each individual drive can perform simultaneous, independent read operations. Mirroring thus doubles the read performance of a single non-mirrored drive while the write performance is unchanged. RAID 1 delivers the best performance of any redundant array type. In addition, there is less performance degradation during drive failure than in RAID 5 arrays.
RAID 3 sector-stripes data across groups of drives, but one drive in the group is dedicated for storing parity information. RAID 3 relies on the embedded ECC in each sector for error detection. In the case of drive failure, data recovery is accomplished by calculating the exclusive OR (XOR) of the information recorded on the remaining drives. Records typically span all drives, which optimizes the disk transfer rate. Because each I/O request accesses every drive in the array, RAID 3 arrays can satisfy only one I/O request at a time. RAID 3 delivers the best performance for single-user, single-tasking environments with long records. Synchronized-spindle drives are required for RAID 3 arrays in order to avoid performance degradation with short records. RAID 5 arrays with small stripes can yield similar performance to RAID 3 arrays.
Under RAID 5, parity information is distributed across all the drives. Since there is no dedicated parity drive, all drives contain data and read operations can be overlapped on every drive in the array. Write operations will typically access one data drive and one parity drive. However, because different records store their parity on different drives, write operations can usually be overlapped.
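The parity used by RAID 3 and RAID 5 is a simple XOR across the blocks of a stripe, which is what makes single-drive reconstruction possible. The following minimal Python sketch demonstrates the principle (illustrative only; the controller performs this in hardware):

    # XOR the data blocks of a stripe to produce parity, then rebuild a
    # "lost" block from the survivors (the XOR of everything else).
    from functools import reduce

    def xor_blocks(blocks):
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks of one stripe
    parity = xor_blocks(data)            # stored on the parity drive

    # Lose the middle block and recompute it from the rest plus parity:
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]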
Dual-level RAID achieves a balance between the increased data availability inherent in RAID 1, RAID 3, RAID 5, or RAID 6 and the increased read performance inherent in disk striping (RAID 0). These arrays are sometimes referred to as RAID 10 (1E), RAID 30, RAID 50 or RAID 60.
RAID 6 is similar to RAID 5 in that data protection is achieved by writing parity information to the physical drives in the array. With RAID 6, however, two sets of parity data are used. These two sets are different, and each set occupies a capacity equivalent to that of one of the constituent drives. The main advantage of RAID 6 is high data availability: any two drives can fail without loss of critical data.
In summary:
RAID 0 is the fastest and most efficient array type but offers no fault-tolerance. RAID 0 requires a minimum of one drive.

RAID 1 is the best choice for performance-critical, fault-tolerant environments. RAID 1 is the only choice for fault-tolerance if no more than two drives are used.

RAID 3 can be used to speed up data transfer and provide fault-tolerance in single-user environments that access long sequential records. However, RAID 3 does not allow overlapping of multiple I/O operations and requires synchronized-spindle drives to avoid performance degradation with short records. RAID 5 with a small stripe size offers similar performance.

RAID 5 combines efficient, fault-tolerant data storage with good performance characteristics. However, write performance and performance during drive failure are slower than with RAID 1. Rebuild operations also require more time than with RAID 1 because parity information is also reconstructed. At least three drives are required for RAID 5 arrays.

RAID 6 is essentially an extension of RAID 5 which allows for additional fault tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped on a block level across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives. RAID 6 provides extremely high data fault tolerance and can sustain multiple simultaneous drive failures. It is a perfect solution for mission-critical applications.
RAID Management
The subsystem can implement several different levels of RAID technology. RAID levels supported by the subsystem are shown below.
RAID 0: Block striping is provided, which yields higher performance than with individual drives. There is no redundancy. Minimum drives: 1.

RAID 1: Drives are paired and mirrored. All data is 100% duplicated on an equivalent drive. Fully redundant. Minimum drives: 2.

RAID 3: Data is striped across several physical drives. Parity protection is used for data redundancy. Minimum drives: 3.

RAID 5: Data is striped across several physical drives. Parity protection is used for data redundancy. Minimum drives: 3.

RAID 6: Data is striped across several physical drives. Parity protection is used for data redundancy. Requires N+2 drives to implement because of its two-dimensional parity scheme. Minimum drives: 4.

RAID 10: Combination of RAID levels 1 and 0. This level provides striping and redundancy through mirroring. RAID 10 requires an even number of disk drives to achieve data protection, while RAID 1E (Enhanced Mirroring) uses an odd number of drives. Minimum drives: 4 (3 for RAID 1E).

RAID 30: Combination of RAID levels 0 and 3. This level is best implemented on two RAID 3 disk arrays with data striped across both disk arrays. Minimum drives: 6.

RAID 50: RAID 50 provides the features of both RAID 0 and RAID 5. It includes both parity and disk striping across multiple drives, and is best implemented on two RAID 5 disk arrays with data striped across both disk arrays. Minimum drives: 6.

RAID 60: RAID 60 combines RAID 6 and RAID 0 features. Data is striped across disks as in RAID 0, and double distributed parity is used as in RAID 6. RAID 60 provides data reliability, good overall performance, and support for larger volume sizes. It also provides very high reliability because data remains available even if multiple disk drives fail (two in each disk array). Minimum drives: 8.
1.3 Fibre Functions
1.3.1 Overview
Fibre Channel is a set of standards under the auspices of ANSI (American National Standards Institute). Fibre Channel combines the best features of the SCSI bus and IP protocols into a single standard interface, including high-performance data transfer (up to 800 MB per second), low error rates, multiple connection topologies, scalability, and more. It retains SCSI command-set functionality, but uses a Fibre Channel controller instead of a SCSI controller to provide the interface for data transmission. In today's fast-moving computer environments, Fibre Channel is the serial data transfer protocol of choice for high-speed transportation of large volumes of information between workstations, servers, mass storage subsystems, and peripherals. Physically, Fibre Channel can be an interconnection of multiple communication points, called N_Ports. The port itself only manages the connection between itself and another such end-port, which could either be part of a switched network, referred to as a Fabric in FC terminology, or a point-to-point link. The fundamental elements of a Fibre Channel network are the Port and the Node, where a Node can be a computer system, storage device, or hub/switch.
This chapter describes the Fibre-specific functions available in the Fibre Channel RAID controller. Optional functions have been implemented for Fibre Channel operation which are only available in the web browser-based RAID manager. The LCD and VT-100 cannot be used to configure some of the options available for the Fibre Channel RAID controller.
1.3.2 Four ways to connect (FC Topologies)
A topology defines the interconnection scheme. It defines the number of devices that can be connected. Fibre Channel supports four different logical or physical arrangements (topologies) for connecting devices into a network:

Point-to-Point
Arbitrated Loop (AL)
Switched (Fabric)
Loop/MNID
The physical connection between devices varies from one topology to another. In all of these topologies, a transmitter node in one device sends information to a receiver node in another device. Fibre Channel networks can use any combination of point-to-point, arbitrated loop (FC_AL), and switched fabric topologies to provide a variety of device sharing options.
Point-to-point
A point-to-point topology consists of two and only two devices whose N_Ports are connected directly. In this topology, the transmit fibre of one device connects to the receive fibre of the other device, and vice versa. The connection is not shared with any other devices. Simplicity and use of the full data transfer rate make this point-to-point topology an ideal extension to the standard SCSI bus interface. The point-to-point topology extends SCSI connectivity from a server to a peripheral device over longer distances.
Arbitrated Loop
The arbitrated loop (FC-AL) topology provides a relatively simple method of connecting and sharing resources. This topology allows up to 126 devices or nodes in a single, continuous loop or ring. The loop is constructed by daisy-chaining the transmit and receive cables from one device to the next or by using a hub or switch to create a virtual loop. The loop can be self-contained or incorporated as an element in a larger network. Increasing the number of devices on the loop can reduce the overall performance of the loop because the amount of time each device can use the loop is reduced. The ports in an arbitrated loop are referred to as L-Ports.
Switched Fabric
Switched fabric is the term used in Fibre Channel to describe the generic switching or routing structure that delivers a frame to a destination based on the destination address in the frame header. It can be used to connect up to 16 million nodes, each of which is identified by a unique World Wide Name (WWN). In a switched fabric, each data frame is transferred over a virtual point-to-point connection. There can be any number of full-bandwidth transfers occurring through the switch. Devices do not have to arbitrate for control of the network; each device can use the full available bandwidth.
A fabric topology contains one or more switches connecting the ports in the FC network. The benefit of this topology is that a very large number of devices (up to 2^24, approximately 16 million) can be connected. A port on a fabric switch is called an F-Port (Fabric Port). Fabric switches can also function as an alias server, multicast server, broadcast server, quality-of-service facilitator, and directory server.
Loop/MNID
The controller supports Multiple Node ID (MNID) mode. A possible application is zoning within the arbitrated loop, where the different zones can be represented by the controller's source IDs. This mode can also be implemented within a switch for an FC Arbitrated Loop.
1.3.3 Basic Elements
The following elements provide connectivity between storage and server components using Fibre Channel technology.
Cables and connectors
There are different types of cables of various lengths for use in a Fibre Channel configuration. Two types of cables are supported: copper and optical (fiber). Copper cables are used for short distances and transfer data up to 30 meters per link. Fiber cables come in two distinct types: Multi-Mode Fiber (MMF) for short distances (up to 2 km) and Single-Mode Fiber (SMF) for longer distances (up to 10 kilometers). By default, the RAID subsystem supports two short-wave multi-mode fibre optic SFP connectors.
Fibre Channel Adapter
A Fibre Channel adapter is a device that is connected to a workstation, server, or host system and controls the protocol for communications.
Hubs
Fibre Channel hubs are used to connect up to 126 nodes into a logical loop. All connected nodes share the bandwidth of this one logical loop. Each port on a hub contains a Port Bypass Circuit (PBC) to automatically open and close the loop to support hot pluggability.
Switched Fabric
Switched fabric is the highest-performing device available for interconnecting large numbers of devices, increasing bandwidth, reducing congestion, and providing aggregate throughput. Each device is connected to a port on the switch, enabling an on-demand connection to every connected device. Each node on a switched fabric uses an aggregate throughput data path to send or receive data.
1.3.4 LUN Masking
LUN masking is a RAID system-centric method of enforcing which LUNs are visible behind a single port. Using the World Wide Port Names (WWPNs) of server HBAs, LUN masking is configured at the volume level. LUN masking also allows disk storage resources to be shared across multiple independent servers. With LUN masking, a single large RAID device can be sub-divided to serve a number of different hosts that are attached to the RAID through the SAN fabric. Each LUN inside the RAID device can be limited so that only one or a restricted number of servers can see it.
LUN masking can be done either at the RAID device (behind the RAID port) or at the server HBA. It is more secure to mask LUNs at the RAID device, but not all RAID devices have LUN masking capability. Therefore, in order to mask LUNs, some HBA vendors allow persistent binding at the driver level.
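As a conceptual illustration, the following Python sketch models a masking table keyed by initiator WWPN. The WWPN values and LUN assignments are purely hypothetical and do not come from this product:

    # Hypothetical LUN masking table: each initiator WWPN may see only the
    # LUNs assigned to it; unknown initiators see nothing.
    masking_table = {
        "21:00:00:e0:8b:05:05:04": {0, 1},  # server A sees LUN 0 and LUN 1
        "21:00:00:e0:8b:0a:0b:0c": {2},     # server B sees LUN 2 only
    }

    def visible_luns(wwpn):
        return masking_table.get(wwpn, set())

    print(visible_luns("21:00:00:e0:8b:05:05:04"))  # {0, 1}
    print(visible_luns("de:ad:be:ef:00:00:00:00"))  # set() - masked out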
1.4 Array Definition
1.4.1 Raid Set
A Raid Set is a group of disk drives containing one or more logical volumes called Volume Sets. It is not possible to have multiple Raid Sets on the same disk drives.
A Volume Set must be created either on an existing Raid Set or on a group of available individual disk drives (disk drives that are not yet part of a Raid Set). If there are existing Raid Sets with available raw capacity, new Volume Sets can be created. A new Volume Set can also be created on an existing Raid Set without free raw capacity by expanding the Raid Set using available disk drive(s) that are not yet Raid Set members. If disk drives of different capacity are grouped together in a Raid Set, the capacity of the smallest disk becomes the effective capacity of all the disks in the Raid Set.
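The smallest-disk rule above comes down to a one-line calculation. This short Python sketch uses illustrative capacities only:

    # Raw capacity of a Raid Set: every member contributes only as much
    # as the smallest drive (illustrative values).
    def raid_set_raw_capacity_gb(disk_sizes_gb):
        return min(disk_sizes_gb) * len(disk_sizes_gb)

    # One 2000 GB drive mixed with three 4000 GB drives yields 8000 GB,
    # not 14000 GB - 3 x 2000 GB of the larger drives goes unused:
    print(raid_set_raw_capacity_gb([2000, 4000, 4000, 4000]))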
1.4.2 Volume Set
A Volume Set is seen by the host system as a single logical device. It is organized in a RAID level with one or more physical disks. RAID level refers to the level of data performance and protection of a Volume Set. A Volume Set capacity can consume all or a portion of the raw capacity available in a Raid Set. Multiple Volume Sets can exist on a group of disks in a Raid Set. Additional Volume Sets created in a specified Raid Set will reside on all the physical disks in the Raid Set. Thus each Volume Set on the Raid Set will have its data spread evenly across all the disks in the Raid Set. Volume Sets of different RAID levels may coexist on the same Raid Set.
In the illustration below, Volume 1 can be assigned a RAID 5 level while Volume 0 might be assigned a RAID 10 level.
1.5 High Availability
1.5.1 Creating Hot Spares
A hot spare drive is an unused online available drive, which is ready to replace a failed disk drive. In a RAID level 1, 10, 3, 5, 6, 30, 50, or 60 Raid Set, any unused online available drive installed but not belonging to a Raid Set can be defined as a hot spare drive. Hot spares permit you to replace failed drives without powering down the system. When the RAID subsystem detects a drive failure, the system performs an automatic and transparent rebuild using the hot spare drives. The Raid Set will be reconfigured and rebuilt in the background while the RAID subsystem continues to handle system requests. During the automatic rebuild process, system activity will continue as normal, but system performance and fault tolerance will be affected.
IMPORTANT: The hot spare must have at least the same or more capacity as the drive it replaces.
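The capacity rule in the note above can be expressed as a simple eligibility check. This sketch, with illustrative sizes only, also shows why the smallest sufficient spare wastes the least capacity:

    # A spare is eligible only if it is at least as large as the failed
    # drive; choosing the smallest eligible spare minimizes wasted space.
    def pick_spare(failed_size_gb, spare_sizes_gb):
        eligible = [s for s in spare_sizes_gb if s >= failed_size_gb]
        return min(eligible) if eligible else None

    print(pick_spare(2000, [1000, 2000, 4000]))  # 2000
    print(pick_spare(3000, [1000, 2000]))        # None: no usable spare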
1.5.2 Hot-Swap Disk Drive Support
The RAID subsystem has a built-in protection circuit to support the replacement of SATA II hard disk drives without having to shut down or reboot the system. The removable hard drive tray delivers a "hot-swappable" fault-tolerant RAID solution at a price much lower than the cost of conventional SCSI hard disk RAID subsystems. This feature provides advanced fault-tolerant RAID protection and "online" drive replacement.
1.5.3 Hot-Swap Disk Rebuild
The Hot-Swap feature can be used to rebuild Raid Sets with data redundancy, such as RAID levels 1, 10, 3, 5, 6, 30, 50 and 60. If a hot spare is not available, the failed disk drive must be replaced with a new disk drive so that the data on the failed drive can be rebuilt. If a hot spare is available, the rebuild starts automatically when a drive fails. The RAID subsystem automatically and transparently rebuilds failed drives in the background with user-definable rebuild rates. The RAID subsystem will automatically resume the rebuild process if the subsystem was shut down or powered off abnormally during reconstruction.
Chapter 2 Identifying Parts of the RAID Subsystem
The illustrations below identify the various parts of the system. Familiarize yourself with the parts and terms as you may encounter them in later chapters and sections.
2.1 Main Components
2.1.1 Front View
IMPORTANT: When powering off the RAID subsystem, first turn off the Main Switch and allow at least 3 minutes for the subsystem to shut down properly (during this time each disk slot, starting from slot #1 through slot #42, will be powered down). Then turn off the switches of the 2 Power Supply Fan Modules.
2.1.1.1 LCD Display Panel LEDs
Environmental Status LEDs:

Power LED: Green indicates power is ON.

Power Fail LED: If one of the redundant power supply units fails, this LED will turn red and an alarm will sound.

Fan Fail LED: When a fan's rotation speed is lower than 1500 RPM, this LED will turn red and an alarm will sound.

Over Temperature LED: If temperature irregularities occur in the system (HDD slot temperature over 65°C, controller temperature over 70°C, CPU temperature over 90°C), this LED will turn red and an alarm will sound.

Voltage Warning LED: If the output DC voltage is above or below the allowed range, an alarm will sound warning of the voltage abnormality and this LED will turn red. The allowed ranges are:
12V: over 12.80V / under 11.12V
5V: over 5.35V / under 4.63V
3.3V: over 3.53V / under 3.05V
1.2V: over 1.28V / under 1.12V

Activity LED: This LED will blink blue when the RAID subsystem is busy or active.
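The voltage thresholds above translate directly into a range check. The following sketch hard-codes the values from this table for illustration; verify them against your unit before relying on them:

    # Voltage-warning logic: alarm when a DC rail leaves its allowed range.
    # Thresholds copied from the table above (illustrative).
    VOLTAGE_LIMITS = {   # rail: (under, over) in volts
        "12V":  (11.12, 12.80),
        "5V":   (4.63, 5.35),
        "3.3V": (3.05, 3.53),
        "1.2V": (1.12, 1.28),
    }

    def voltage_alarm(rail, reading_v):
        low, high = VOLTAGE_LIMITS[rail]
        return not (low <= reading_v <= high)

    print(voltage_alarm("12V", 12.10))  # False: within range
    print(voltage_alarm("5V", 4.50))    # True: under-voltage warning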
Front Panel Function Buttons
To configure or view settings of the RAID subsystem using the LCD panel, press the Select button.

Up and Down Arrow buttons: Use the Up or Down arrow buttons to scroll through the information on the LCD screen. These are also used to move between menus when you configure or view information in the subsystem.
NOTE: When the Down Arrow button is pressed 3 times, LCD control shifts to the other RAID controller (in redundant controller mode) and the other RAID controller's IP address is shown on the LCD.

Select button: Used to enter the option you have selected.

Exit button (EXIT): Press this button to return to the previous menu.
NOTE: This button can also be used to reset the alarm beeper. For example, if one disk drive fails, pressing this button will mute the beeper.
2.1.1.2 Disk Drive Status Indicators
The Front Panel shows the status of the disk drives:

Activity LED: Blinking blue indicates the disk drive is busy or being accessed.

Power On/Fail LED: Green indicates the disk drive in this slot is good. Red indicates the disk drive in this slot is defective or failed. Off indicates there is no disk drive in this slot.
2.1.1.3 LCD IP Address in Dual Controller Mode
In dual controller mode, the RAID subsystem has 2 IP addresses which can be accessed separately.
By default, the IP address of Controller 1 is shown. To view the IP address of Controller 2, press the “Down Arrow” button on the front panel three (3) times.
When the IP address of Controller 1 is shown, there is no blinking rectangular character at the end of the IP address.
When the IP address of Controller 2 is shown, there is a blinking rectangular character at the end of the IP address.
When the IP address has a link (is connected to the network), there is an “*” at the end of the IP address. When there is no link, there is no “*”.
[LCD examples: Controller 1 IP address shown with no rectangular character, with and without link; Controller 2 IP address shown with a blinking rectangular character, with and without link.]
2.1.2 Rear View
NOTE: Each Power Supply Module has 1 power supply and 5 fans. For purposes of hardware monitoring, the RAID enclosure is logically divided into two enclosures. The functions of the Expander Modules are as follows:

Expander Module 1-1 (for Controller 1): Monitors Enclosure 1 (disk slots 1 to 21; Power Supply 01-1; Fans 01-1, 02-1, 03-1, 04-1, and 05-1; and Turbo Fan 06-1). Note: “-1” means enclosure 1.

Expander Module 2-1 (for Controller 2): Same function as Expander Module 1-1.

Expander Module 1-2 (for Controller 1): Monitors Enclosure 2 (disk slots 22 to 42; Power Supply 01-2; and Fans 01-2, 02-2, 03-2, 04-2, and 05-2). Note: “-2” means enclosure 2.

Expander Module 2-2 (for Controller 2): Same function as Expander Module 1-2.
2.2 Controller Module
The RAID subsystem includes a single 8Gb Fibre-to-SAS/SATA II RAID Controller Module.
RAID Controller Module
2.2.1 Controller Module Panel
Note: Only one host cable and one SFP module are included in the package. Additional host cables and SFP modules are optional and can be purchased separately for upgrade.
Part descriptions:

Host Channels A, B, C, D: There are four Fibre host channels (A, B, C, and D) which can be used to connect to a Fibre HBA on the host system or to an FC switch.

SAS Expansion Ports 1, 2: Used for expansion; connect to the SAS In port of a JBOD subsystem.

COM2: RJ-11 port; used to connect to the CLI (command line interface), for example to upgrade the expander firmware. See the section on upgrading the expander firmware.

COM1: RJ-11 port; used to check controller debug messages.

R-Link Port: 10/100 Ethernet RJ-45 port; used to manage the RAID subsystem over the network with a web browser.

Indicator LEDs:

Host Channel A, B, C, D status LEDs (Link LED and Activity LED):
Link LED, Green: The host channel has linked (8Gb Fibre HBA).
Link LED, Orange: The host channel has linked (4Gb Fibre HBA).
Link LED, Blinking Orange: The host channel has linked (2Gb Fibre HBA).
Activity LED, Blinking Blue: The host channel is busy and being accessed.

SAS Expander Link LED, Green: Indicates the expander has linked.

SAS Expander Activity LED, Blue: Indicates the expander is busy and being accessed.

Fault LED, Blinking Red: Indicates the controller has failed.

CTRL Heartbeat LED, Blinking Green: Indicates the controller is working fine. Solid Green: Indicates the controller is hung.
To replace a failed Controller Module, refer to section 6.3.2 of this manual.
2.3 Power Supply / Fan Module (PSFM)
The 42-bay RAID subsystem contains two 1100W Power Supply/Fan Modules (PSFMs). Both PSFMs are inserted at the rear of the chassis.
Front Panel
Rear Side
NOTE: Each PSFM delivers full-range 100V ~ 240V (+/-10%) AC voltage. Each PSFM consists of 1 power supply and 5 fans: two fans are located on the panel side, and three fans are located on the rear side of the PSFM.
NOTE: The first PSFM (01-1, on the left side of enclosure) has five fans: Fan 01-1 and Fan 02-1 on the front panel; and Fan 03-1, Fan 04-1 and Fan 05-1 on the rear side.
The second PSFM (01-2, on the right side) has five fans also: Fan 01-2 and Fan 02-2 on the front panel; and Fan 03-2, Fan 04-2 and Fan 05-2 on the rear side.
NOTE: “-1” means enclosure 1 and “-2” means enclosure 2.
Front Panel
Rear Side
Fan 01-1, Fan 02-1, Fan 01-2, Fan 02-2
Power Supply 01-1, Power Supply 01-2
2.3.1 PSFM Panel
Part descriptions:

AC Power Input Socket: Used to connect the power cord from the power source.

Power On/Off Switch: Used to power the PSFM on or off.

Indicators:

Power Status LED: Green indicates the power supply module is good; red indicates the power supply module is faulty.

Fan Fail LED: Red indicates one or more fans in the PSFM have failed.
When the power cord from the main power source is inserted into the AC Power Input Socket, the Power Status LED turns RED. When the switch of the PSFM is turned on, the LED still shows RED. After the main switch on the front panel is turned on, the LED turns GREEN, which means the PSFM is functioning normally.
The PSFM has a 5V standby DC voltage. When the power cord(s) is/are connected to the AC Power Input Socket, all 42 Activity LEDs will flash once after 1 second. When the power cord(s) is/are disconnected from the AC Power Input Socket, all 42 Activity LEDs will flash twice after 3 seconds.
2.4 Turbo Fan (Fan 06-1)
The turbo fan provides additional airflow inside the enclosure.
Turbo Fan LED:

Status LED: Red indicates the turbo fan is faulty.
NOTE: The status of Turbo Fan (Fan 06-1) is monitored by Expander Module 1.
2.5 Expander Module
The Expander Module contains the SAS expander. It can be used to upgrade the SAS expander firmware. It also contains the SES module (SCSI Enclosure Services). SES is the protocol used for enclosure environmental control.
The SES module monitors the following enclosure conditions: temperature, power supply voltage, and fan speed.
2.5.1 Expander Module Panel
Part description:

RS-232 Port: Used to upgrade the firmware of the expander module. Connect the RJ11-to-DB9 serial cable to your system's serial port.

Indicators:

Activity LED: Blinking green indicates the expander module is busy or active.

Fault LED: Blinking red indicates the expander module is faulty or has failed.
2.6 Disk Tray
The Disk Tray houses a 3.5 inch hard disk drive. It is designed for maximum airflow and incorporates a carrier locking mechanism to prevent unauthorized access to the HDD.
Key for Disk Tray Lock
2.6.1 Disk Drive Installation
This section describes the physical locations of the hard drives supported by the subsystem and gives instructions on installing a hard drive. The subsystem supports hot-swapping, allowing you to install or replace a hard drive while the subsystem is running.
NOTE: When the RAID subsystem is shipped, the disk trays are not placed in the disk slots. If all 42 disk drives will be installed, it is quicker and easier to first install each disk drive in a disk tray. After installing the disk drives, insert 14 disk trays into one row of 14 slots at a time and lock them one by one. Do the same for each succeeding row until the last row.
Disk Slots
NOTE: When the subsystem is already in operational mode, it is not recommended to open the top cover for a long period of time; proper airflow within the enclosure might fail, causing high disk drive temperatures.
IMPORTANT: In dual controller mode, a SATA disk drive is installed in a disk tray differently than in single controller mode. In single controller mode, a SATA disk is installed in a disk tray the same way as a SAS disk.

SATA HDD: no dongle board needed with a single controller; dongle board needed with dual controllers.
SAS HDD: no dongle board needed with either a single or dual controller.
NOTE: For this model, 6Gb/s hard disk drives are recommended.
To install a SATA disk drive (Dual Controller Mode) in a disk tray:
1. Use the Key for Disk Tray Lock to unlock a disk tray.
2. Prepare the dongle board with metal bracket.
3. Connect the dongle board to the SATA disk drive.
4. Place the SATA disk drive into the disk tray, then turn the disk tray upside down. To secure the disk drive into the disk tray, tighten 4 screws on the holes of the disk tray. Note in the picture below where the screws should be placed in the disk tray holes.
4 screws #6-32 UNC L=5.0mm
5. Tighten the 2 screws of the dongle board metal bracket.
To install a SAS disk drive (Single or Dual Controller Mode) or SATA disk drive (Single Controller Mode) in a disk tray:
1. Use the Key for Disk Tray Lock to unlock a disk tray.
2. Place the disk drive into the disk tray.
3. Turn the disk tray upside down. To secure the disk drive in the disk tray, tighten the 4 screws into the holes of the disk tray. Note in the picture below where the screws should be placed in the disk tray holes.
4 screws #6-32 UNC L=5.0mm
To install the disk trays into the disk slots:
a. Loosen two screws on both sides of the top cover on the front panel side.
b. Use the Top Cover Key to unlock the key lock on the front panel side.
c. Hold the front part of the top cover and slide the top cover about half an inch towards the front side, then pull upwards to remove it.
d. Insert the disk trays with disk drives one by one, one row of 14 trays first, locking each disk tray. Then do the same for the next row of 14 trays.
To install a disk tray into its disk slot, insert it first in the slot. Then push down the latch part of the disk tray, as indicated in the picture below, until it reaches a full stop.
Close the lever handle, then use the Key for Disk Tray Lock to turn the disk tray lock to the “locked” position.
e. When all disk trays have been installed and locked, put the top cover back, placing it about half an inch away from its closed position. Then push the top cover towards the rear.
f. Use the Top Cover Key to lock the key lock on the front panel side.
g. To secure the top cover, tighten the two screws on both sides of the top cover on the front panel side.
Chapter 3 Getting Started with the Subsystem
This chapter contains information about the steps needed to start using the subsystem. If the subsystem will be installed in a rackmount cabinet, follow the steps in Section 3.1, otherwise, proceed with Section 3.2.
3.1 Installing the Rails and Mounting into Rack
NOTE: At least two persons are needed to lift the subsystem. To reduce the weight of the subsystem, remove the 2 power supply modules from the rear of the subsystem. If disk drives are already installed in the disk trays, remove the disk trays as well. Refer to the appropriate sections on how to remove the power supply modules and the disk trays/disk drives.
NOTE: The subsystem must be installed near the Disk Array or host system where it will be connected. A Phillips screwdriver is needed for installation.
WARNING! Do not put other enclosures/subsystems on top of the 42-bay subsystem; the rails will not support the total weight.
Steps:
1. Open the rail box.
2. Remove the 2 rail assemblies and the screws/accessories from the box and check the contents.
3. Insert two (2) M5 nuts on the 2 holes of the front left side of the rack post.
Rack Post – Front Left Side
4. Insert two (2) M5 nuts on the 2 holes of the front right side of the rack post.
Rack Post – Front Right Side
Position of the M5 nuts on the 2 holes of the left and right rack posts (4U spacing)
5. Prepare the 2 rail assemblies.
Front and Rear Sides of the Rail Assembly
6. Hold one rail assembly and install it in the front left side of the rack. To install, align and insert the 2 latches of the rail into the 2 holes on the rack post. Use the Lock Lever to lock the rail assembly on the left rack post.
View from Front Side of Front Left Rack Post – Lock Lever is Not Locked
View from Front Side of Front Left Rack Post – Lock Lever is Locked
View from Rear Side of Front Left Rack Post – the 2 latches are inserted in the 4th and 6th holes from the bottom (counting from the lower M5 nut)
7. Install the other end of the rail assembly to the left rear side. Align and insert the 2 latches into the 2 holes on the rear rack post, then push the rail a little towards the rear side and lock the lock lever on the rack post.
View from Rear Side of Rear Left Rack Post
8. Repeat step 6 to install the other rail assembly into the right front side.
View from Front Side of Front Right Rack Post – Lock Lever is Not Locked
View from Front Side of Front Right Rack Post – Lock Lever is Locked
View from Rear Side of Front Right Rack Post – the 2 latches are inserted in the 4th and 6th holes from the bottom (counting from the lower M5 nut)
9. Repeat step 7 to install the other end of the rail assembly to the rear right rack post.
View from Rear Side of Rear Right Rack Post
10. Install the inner rail member on the side of the enclosure. Align the holes on the inner rail, then slide it a little towards the front side until it locks.
Inner Rail Member
Inner Rail Member Placed on the Side of Enclosure
Inner Rail Member Pushed Towards the Front Side and Locked
11. Repeat step 10 to insert the other inner rail member on the other side of the enclosure.
12. Pull the 2 middle rail members out from the rail assembly.
Middle Rail Member of Rail Assembly on Left Side of Rack (view from rear side)
13. With at least 2 persons carrying the enclosure, insert the 2 inner rails (attached to the sides of the enclosure) into the middle rails. Slide the enclosure in until it stops, about half way through.
NOTE: Be careful when inserting the 2 inner rails into the middle rails. The 2 inner rails must be parallel with the 2 middle rails so that the inner rails insert and slide easily.
Inner Rail Aligned with Middle Rail (view from rear side)
14. Press the blue locks on both sides of the inner rail members outwards at the same time, then push the enclosure inwards until it goes inside the rack.
View from Right Side of Enclosure – Blue Lock of Inner Rail Pushed a Little Outwards, Enclosure Pushed Inwards
View from Rear Side of Rack Cabinet – Enclosure Pushed Inwards
15. Insert the 2 power supply modules.
16. Open the top cover and re-insert the disk drives/disk trays, if they were previously removed. Then close the top cover.
17. Use four (4) M5 screws to lock the enclosure into the rack posts: two on the front left side and two on the front right side.
3.2 Preparing the RAID Subsystem
1. Install the disk drives, if not yet installed. Refer to Section 2.6.1 Disk Drive Installation for detailed information.
2. Attach a network cable to the R-Link port. Connect the other end to your network hub or switch. Alternatively, you may use the Monitor port: connect the serial cable from the Monitor port to any available serial COM port of a PC.
3. Connect one end of the Fibre optic cable to the Host Channel port of the subsystem and the other end to the Fibre HBA on the host system or to the FC switch.
NOTE: If a JBOD subsystem will be connected to the RAID subsystem, connect the SAS cable from the SAS Expansion Port of the RAID subsystem to the SAS In Port of the JBOD subsystem.
3.3 Powering On
1. Plug all the power cords into the AC Power Input Socket located at the PSFM.
NOTE: The subsystem is equipped with redundant, full range power supplies with PFC (power factor correction). The system will automatically select voltage.
NOTE: The PSFM has a 5V standby DC voltage. When the power cord(s) are connected to the AC Power Input Socket, all 42 Activity LEDs will flash once after 1 second. When the power cord(s) are disconnected from the AC Power Input Socket, all 42 Activity LEDs will flash twice after 3 seconds.
2. Turn on each Power On/Off Switch of the PSFM.
NOTE: When the power cord from the main power source is inserted into the AC Power Input Socket, the Power Status LED becomes RED. When the switch of the PSFM is turned on, the LED still shows RED. After the main switch on the front panel is turned on, the LED turns GREEN, which means it is functioning normally.
3. To power on the subsystem, turn on the main switch (open the switch cover first) at the right corner of the front panel.
4. Allow the machine a few moments to initialize before using it.
NOTE: The system will initialize after turning on the Main Switch. Each disk slot will be checked during subsystem initialization.
5. Configure RAID using the utility options described in the next chapter.
3.4 Powering Off
IMPORTANT: When powering off the RAID subsystem, turn off the Main Switch first and allow at least 3 minutes for the subsystem to shut down properly. During this time, each disk slot, starting from slot #1 until slot #42, will be powered down. When the subsystem has totally powered down, turn off the switches of the 2 Power Supply Fan Modules at the rear.
Sequence of disk slot power down (from slot 1 to slot 42)
Chapter 4 RAID Configuration Utility Options
Configuration Methods
There are three methods of configuring the RAID controller:
a. Front panel touch-control buttons
b. Web browser-based remote RAID management via the R-Link Ethernet port
c. Telnet connection via the R-Link Ethernet port
NOTE: The RAID subsystem allows you to access it using only one method at a time. You cannot use more than one method at the same time.
4.1 Configuration through Telnet
NOTE: This example uses the CRT terminal emulation program. You can also use Windows HyperTerminal as another option.
1. To connect to the RAID subsystem using Telnet, open the terminal emulation program (for example, CRT 6.1), start a new session, and select the Telnet protocol. Click “Next”.
2. Enter the RAID subsystem’s IP address. Make sure the PC running the terminal emulation program can connect to the RAID subsystem’s IP address (a quick reachability check is sketched after these steps). Click “Next”.
3. Rename the Session name if necessary. Click “Finish”.
4. Select the Session name and click “Connect”.
5. After a successful connection, the Main Menu will be displayed. Select a menu and the Password box will be shown. Enter the password (default is 00000000) to log in.
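Before opening the session, it can help to verify that the subsystem’s Telnet service is reachable from the PC. A minimal Python sketch, assuming the factory default IP address of 192.168.1.100 (adjust RAID_IP to your subsystem’s actual address):

    import socket

    RAID_IP = "192.168.1.100"   # assumed factory default R-Link address
    TELNET_PORT = 23            # standard Telnet port

    # Open a plain TCP connection; success means the Telnet service is listening.
    with socket.create_connection((RAID_IP, TELNET_PORT), timeout=5) as conn:
        print(f"Telnet service reachable at {RAID_IP}:{TELNET_PORT}")

If the connection times out, check the network path to the R-Link port before troubleshooting the terminal emulator.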
Keyboard Function Key Definitions
“A” key - Move to the line above
“Z” key - Move to the next line
“Enter” key - Submit selection function
“ESC” key - Return to previous screen
“L” key - Line draw
“X” key - Redraw
Main Menu
The main menu shows all functions that enable the customer to execute actions by selecting the appropriate menu option.
NOTE: The password option allows the user to set or clear the RAID subsystem’s password protection feature. Once the password has been set, the user can only monitor and configure the RAID subsystem by providing the correct password. The password is used to protect the RAID subsystem from unauthorized access. The controller will check the password only when entering the Main menu from the initial screen. The RAID subsystem will automatically go back to the initial screen when it does not receive any command in twenty seconds. The RAID subsystem’s factory default password is set to 00000000.
Configuration Utility Main Menu Options
Select an option and the related information or submenu items under it will be displayed. The submenus for each item are shown in Section 4.2.1. The configuration utility main menu options are:
Option Description
Quick Volume And Raid Set Setup
Create a RAID configuration which consists of all physical disks installed
Raid Set Functions
Create a customized Raid Set
Volume Set Functions
Create a customized Volume Set
Physical Drive Functions
View individual disk information
Raid System Functions
Setting the Raid system configurations
Hdd Power Management
Setting the HDD power management configurations
Fibre Channel Config
Setting the Fibre Channel configurations
Ethernet Configuration
Setting the Ethernet configurations
View System Events
Record all system events in the buffer
Clear Event Buffer
Clear all event buffer information
Hardware Monitor
Show all system environment status
System Information
View the controller information
4.2 Configuration through the LCD Panel
All configurations can be performed through the LCD Display front panel function keys, except for the “Firmware update”. The LCD provides a system of screens with areas for information, status indication, or menus. The LCD screen displays menu items or other information up to two lines at a time. The RAID controller’s factory default password is set to 00000000.
Function Key Definitions
If you are going to configure the subsystem using the LCD panel, please press the Select button first.
Parts Function
Up and Down Arrow buttons
Use the Up or Down arrow keys to go through the information on the LCD screen. This is also used to move between each menu when you configure the RAID.
NOTE: When the Down Arrow button is pressed 3 times, the LCD control will shift to the other RAID controller (in redundant controller mode) and the other RAID controller’s IP address will be shown in the LCD.
Select button This is used to enter the option you have selected.
Exit button (EXIT)
Press this button to return to the previous menu.
NOTE: This button can also be used to reset the alarm beeper. For example, if one disk drive fails, pressing this button will mute the beeper.
4.2.1 Menu Diagram
The following menu diagram is a summary of the various configurations and setting functions that can be accessed through telnet.
4.3 Configuration through web browser-based proRAID Manager
The RAID subsystem can be remotely configured via the R-Link port with proRAID Manager, a web browser-based application. The proRAID Manager can be used to manage all available functions of the RAID controller.
To configure the RAID subsystem from a remote machine, you need to know its IP address. Launch your web browser from the remote machine and enter in the address bar: http://[IP-Address].
IMPORTANT! The default IP address of the Controller R-Link Port is 192.168.1.100 and the subnet mask is 255.255.255.0. The DHCP client function is also enabled by default. You can reconfigure the IP address or disable the DHCP client function through the LCD front panel or the terminal “Ethernet Configuration” menu.
NOTE: If the DHCP client function is enabled but a DHCP server is unavailable and the IP address is changed, a Controller Restart is necessary. If the DHCP client function is disabled and the IP address is changed, a Controller Restart is not needed.
Note that you may need to be logged in as an administrator with local admin rights on the remote machine to remotely configure the RAID subsystem. The RAID subsystem controller default User Name is “admin” and the Password is “00000000”.
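As a quick check that the proRAID Manager web service is answering before you log in, a small Python sketch (assuming the default address above; an HTTP authentication error still proves the service is up):

    from urllib.request import urlopen
    from urllib.error import HTTPError

    RAID_IP = "192.168.1.100"  # factory default R-Link address

    try:
        with urlopen(f"http://{RAID_IP}/", timeout=5) as resp:
            print("proRAID Manager reachable, HTTP status", resp.status)
    except HTTPError as err:
        # An authentication challenge (e.g. 401) still means the service is up.
        print("proRAID Manager reachable, HTTP status", err.code)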
Main Menu
The main menu shows all available functions that the user can execute by clicking on the appropriate hyperlink.
Individual Category
Description
Quick Function
Create a RAID configuration, which consists of all physical disks installed. The Volume Set Capacity, Raid Level, and Stripe Size can be modified during setup.
Raid Set Functions
Create customized Raid Sets.
Volume Set Functions
Create customized Volume Sets and allow modification of parameters of existing Volume Sets.
Physical Drives
Create pass-through disks and allow modification of parameters of existing pass-through drives. This also provides a function to identify a disk drive.
System Controls
For setting the RAID system configurations.
Information
To view the controller and hardware monitor information. The Raid Set hierarchy can also be viewed through the Raid Set Hierarchy item.
Chapter 5 RAID Management
5.1 Quick Function
5.1.1 Quick Create
The number of physical drives in the RAID subsystem determines the RAID levels that can be implemented with the Raid Set. This feature allows the user to create a Raid Set associated with exactly one Volume Set. The user can change the Raid Level, Capacity, Volume Initialization Mode and Stripe Size. A hot spare can also be created, depending upon the existing configuration.
If the Volume Set size is over 2TB, an option “Greater Two TB Volume Support” will be automatically provided in the screen as shown in the example below. There are three options to select: “No”, “64bit LBA”, and “4K Block”.
Greater Two TB Volume Support:
No: Volume Set capacity is set to a maximum of 2TB.
64bit LBA: Use this option for UNIX, Linux Kernel 2.6 or later, Windows Server 2003 + SP1 or later versions, Windows x64, and other supported operating systems. The maximum Volume Set size is up to 512TB.
4K Block: Use this option for Windows OS such as Windows 2000, 2003, or XP. The maximum Volume Set size is 16TB. Use the Volume as a “Basic Disk” only; it cannot be used as a “Dynamic Disk” or with programs that access the disk in 512-byte blocks.
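The 2TB and 16TB ceilings follow from 32-bit LBA addressing: capacity equals the number of addressable blocks multiplied by the block size. A short Python illustration of the arithmetic only (not the controller’s firmware):

    # Addressable capacity = (number of LBAs) x (block size).
    # With 32-bit LBAs there are 2**32 addressable blocks: 512-byte blocks
    # cap a volume at 2TB, while 4K blocks raise the ceiling to 16TB.
    # 64bit LBA removes the 2**32 block limit entirely.
    for block_size in (512, 4096):
        max_tb = (2**32 * block_size) / 2**40   # TB as used in this manual
        print(f"{block_size}-byte blocks: {max_tb:.0f} TB maximum volume size")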
Tick on the Confirm The Operation option and click on the Submit button in the Quick Create screen. The Raid Set and Volume Set will start to initialize.
You can use the RaidSet Hierarchy feature to view the Volume Set information (refer to Section 5.6.1).
NOTE: In Quick Create, your Raid Set is automatically configured based on the number of disks in your system (maximum 32 disks per Raid Set). Use the Raid Set Function and Volume Set Function if you prefer to create customized Raid Set and Volume Set.
NOTE: In Quick Create, the Raid Level options 30, 50, and 60 are not available. If you need to create Volume Set with Raid Level 30, 50, or 60, use the Create Raid Set function and Create Volume Set function.
5.2 RAID Set Functions
Use the Raid Set Function and Volume Set Function if you prefer to create customized Raid Sets and Volume Sets. The user can manually configure and take full control of the Raid Set settings, but it will take a little longer to set up than the Quick Create configuration. Select the Raid Set Function to manually configure the Raid Set for the first time, or to delete an existing Raid Set and reconfigure a Raid Set.
5.2.1 Create RAID Set
To create a Raid Set, click on the Create RAID Set link. A “Select The Drives For RAID Set” screen is displayed showing the disk drives in the system. Tick the box of each disk drive that will be included in the Raid Set to be created. Enter the preferred Raid Set Name (1 to 16 alphanumeric characters) to define a unique identifier for the Raid Set. The default Raid Set name always appears as Raid Set # xxx.
Tick on the Confirm The Operation option and click on the Submit button in the screen.
5.2.2 Delete RAID Set
To delete a Raid Set, click on the Delete RAID Set link. A “Select The Raid Set To Delete” screen is displayed showing all Raid Sets existing in the system. Select the Raid Set you want to delete in the Select column.
Tick on the Confirm The Operation option and click on the Submit button to proceed with the deletion.
NOTE: You cannot delete a Raid Set containing a Raid 30/50/60 Volume Set. You must delete the Raid 30/50/60 Volume Set first.
5.2.3 Expand RAID Set
Use this option to expand a Raid Set when one or more disk drives are added to the system. This function is active when at least one drive is available.
To expand a Raid Set, click on the Expand RAID Set link. Select the Raid Set which you want to expand.
Tick on the available disk(s) and check Confirm The Operation. Click on the Submit button to add the selected disk(s) to the Raid Set.
NOTE: Once the Expand Raid Set process has started, the user cannot stop it. The process must be completed.
NOTE: If a disk drive fails during Raid Set expansion and a hot spare is available, an auto rebuild operation will occur after the Raid Set expansion is completed.
NOTE: A Raid Set cannot be expanded if it contains a Raid 30/50/60 Volume Set.
Migration occurs when a disk is added to a Raid Set. Migrating status is displayed in the Raid Set status area of the Raid Set Information. Migrating status is also displayed in the Volume Set status area of the Volume Set Information for all Volume Sets under the Raid Set which is migrating.
5.2.4 Offline RAID Set
If the user wants to offline (and move) a Raid Set while the system is powered on, use the Offline Raid Set function. After completing the function, the HDD state will change to “Offlined” Mode and the HDD Status LEDs will be blinking RED.
To offline a Raid Set, click on the Offline RAID Set link. A “Select The RAID SET To Offline” screen is displayed showing all existing Raid Sets in the subsystem. Select the Raid Set which you want to offline in the Select column.
Tick on the Confirm The Operation, and then click on the Submit button to offline the selected Raid Set.
5.2.5 Rename RAID Set
Use this function to rename a RAID Set. Select “Rename RAID Set” under the RAID Set Functions, select the RAID Set to rename, and click “Submit”.
Enter the new name for the RAID Set. Tick the “Confirm The Operation” and click “Submit”.
5.2.6 Activate Incomplete RAID Set
When the Raid Set State is “Normal”, this means there is no failed disk drive.
When does a Raid Set State become “Incomplete”?
If the RAID subsystem is powered off and one disk drive is removed or fails while powered off, the Raid Set State will change to “Incomplete” when the subsystem is powered on.
The Volume Set(s) associated with the Raid Set will not be visible and the failed or removed disk will be shown as “Missing”. At the same time, the subsystem will not detect the Volume Set(s); hence the volume(s) is/are not accessible.
When can the “Activate Incomplete Raid Set” function be used?
In order to access the Volume Set(s) and corresponding data, use the Activate Incomplete RAID Set function to activate the Raid Set. After selecting this function, the Raid State will change to “Degraded” state.
To activate an incomplete Raid Set, click on the Activate Incomplete RAID Set link. A “Select The Raid Set To Activate” screen is displayed showing all existing Raid Sets in the subsystem. In the Select column, select the Raid Set with “Incomplete” state which you want to activate.
Click on the Submit button to activate the Raid Set. The Volume Set(s) associated with the Raid Set will become accessible in “Degraded” mode.
NOTE: The “Activate Incomplete Raid Set” function is only used when the Raid Set State is “Incomplete”. It cannot be used when the Raid Set configuration is lost. If the RAID Set configuration is lost, please contact your vendor’s support engineer.
5.2.7 Create Hot Spare
The Create Hot Spare option gives you the ability to define a global hot spare. When you choose the Create Hot Spare option in the Raid Set Function, all unused (non-Raid Set member) disk drives in the subsystem appear. Select the target disk drive by clicking on the appropriate check box. Tick on the Confirm The Operation and click on the Submit button to create the hot spare drive(s).
Hot Spare Type Description
Global Hot Spare
The Hot Spare disk is a hot spare on all enclosures connected in a daisy chain. It can replace any failed disk in any enclosure.
Dedicated to RaidSet
The Hot Spare disk is a hot spare dedicated only to the RaidSet where it is assigned. It can replace any failed disk in that RaidSet.
Dedicated to Enclosure
The Hot Spare disk is a hot spare dedicated only to the enclosure where it is located. It can replace any failed disk in that enclosure.
NOTE: When the Raid Set status is in Degraded state, this option will not work.
NOTE: The capacity of the hot spare disk(s) must be equal to or greater than the smallest hard disk size in the subsystem so that it/they can replace any failed disk drive.
NOTE: The Hot Spare Type can also be viewed by clicking on Raid Set Hierarchy in the Information menu.
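The capacity rule in the note above reduces to a simple comparison. A hypothetical Python helper, shown for illustration only:

    # Hypothetical helper illustrating the capacity rule: a hot spare can
    # stand in for a failure only if it is at least as large as the
    # smallest disk drive it might replace.
    def spare_can_replace(spare_gb: float, member_sizes_gb: list[float]) -> bool:
        return spare_gb >= min(member_sizes_gb)

    print(spare_can_replace(2000, [2000, 4000, 4000]))  # True
    print(spare_can_replace(1000, [2000, 4000, 4000]))  # False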
5.2.8 Delete Hot Spare
Select the target Hot Spare disk(s) to delete by clicking on the appropriate check box. Tick on the Confirm The Operation, and click on the Submit button to delete the hot spare(s).
5.2.9 Rescue Raid Set
If you need to recover a missing Raid Set using the “Rescue Raid Set” function, please contact your vendor’s support engineer for assistance.
5.3 Volume Set Function
A Volume Set is seen by the host system as a single logical device. It is organized in a RAID level with one or more physical disks. RAID level refers to the level of data performance and protection of a Volume Set. A Volume Set capacity can consume all or a portion of the raw capacity available in a Raid Set.
Multiple Volume Sets can exist on a group of disks in a Raid Set. Additional Volume Sets created in a specified Raid Set will reside on all the physical disks in the Raid Set. Thus each Volume Set on the Raid Set will have its data spread evenly across all the disks in the Raid Set.
5.3.1 Create Volume Set
The following are the Volume Set features:
1. Volume sets of different RAID levels may coexist on the same Raid Set.
2. Up to 128 Volume Sets in a Raid Set can be created in the RAID subsystem.
To create a Volume Set from a Raid Set, expand the Volume Set Functions in the main menu and click on the Create Volume Set link. The Select The Raid Set To Create On It screen will show all existing Raid Sets. Tick on the Raid Set where you want to create the Volume Set and then click on the Submit button.
The Volume Set setup screen allows the user to configure the Volume Name, Capacity, RAID level, Initialization Mode, Stripe Size, Cache Mode, Tagged Command Queuing, Fibre Channel/LUN Base/LUN, and Volumes To Be Created.
Volume Name:
The default Volume Set name will appear as “Volume---VOL#XXX”. You can rename the Volume Set provided the name does not exceed the 16-character limit.
Volume Raid Level:
Set the RAID level for the Volume Set. Click the down-arrow in the drop-down list; the available RAID levels for the current Volume Set are displayed. Select the preferred RAID level.
Select Volume Capacity:
The maximum Volume Set size is displayed by default. If necessary, change the Volume Set size appropriate for your application.
Greater Two TB Volume Support:
If the Volume Set size is over 2TB, an option “Greater Two TB Volume Support” will be automatically provided in the screen as shown in the example above. There are three options to select: “No”, “64bit LBA”, and “4K Block”.
No: Volume Set size is set to the maximum 2TB limitation.
64bit LBA: Use this option for UNIX, Linux Kernel 2.6 or later, Windows Server 2003 + SP1 or later versions, Windows x64, and other supported operating systems. The maximum Volume Set size is up to 512TB.
4K Block: Use this option for Windows OS such as Windows 2000, 2003, or XP. The maximum Volume Set size is 16TB. Use the Volume as a “Basic Disk” only; it cannot be used as a “Dynamic Disk” or with programs that access the disk in 512-byte blocks.
Initialization Mode:
Set the Initialization Mode for the Volume Set. Initialization in Foreground mode is completed faster but must be completed before the Volume Set becomes accessible.
Fibre to SAS/SATA RAID Subsystem
User Manual
89
Background mode makes the Volume Set instantly available but the initialization process takes longer. No Init (To Rescue Volume) is used to create a Volume Set without initialization; this is normally used to recreate a Volume Set configuration to recover data.
Stripe Size:
This parameter sets the size of the stripe written to each disk in a RAID 0, 1, 10, 5 or 6 Volume Set. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB.
A larger stripe size produces better read performance, especially if your computer does mostly sequential reads. However, if you are sure that your computer does random reads more often, select a smaller stripe size.
NOTE: The Stripe Size in RAID level 3 can’t be modified.
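To see why stripe size favors sequential or random workloads, consider a generic striped layout. The Python sketch below illustrates the concept only, not the subsystem’s exact mapping:

    # For a generic striped layout: which member disk a host LBA lands on.
    def member_disk(lba: int, stripe_kb: int, n_disks: int,
                    block_bytes: int = 512) -> int:
        blocks_per_stripe = stripe_kb * 1024 // block_bytes
        return (lba // blocks_per_stripe) % n_disks

    # With a 64 KB stripe and 512-byte blocks, 128 consecutive blocks stay
    # on one disk before moving to the next -- good for sequential reads.
    print([member_disk(lba, 64, 4) for lba in (0, 127, 128, 256)])  # [0, 0, 1, 2]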
Cache Mode:
The RAID subsystem supports two types of write caching: Write-Through and Write-Back.
Write-Through: data are written both to the cache and to the disk(s) before the write I/O is acknowledged as complete.
Write-Back: when data is written to the cache, the I/O is acknowledged as complete, and some time later the cached data is written or flushed to the disk(s). This provides better performance but requires battery module support for the cache memory, or a UPS for the subsystem.
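The practical difference between the two modes is when the write I/O is acknowledged. An illustrative sketch of the two policies (not the controller’s firmware):

    # Illustrative only: the acknowledgment point is what separates the modes.
    def write_through(cache: dict, disk: dict, lba: int, data: bytes) -> None:
        cache[lba] = data
        disk[lba] = data        # data reaches the disk first...
        # ...and only now is the write I/O acknowledged as complete

    def write_back(cache: dict, dirty: set, lba: int, data: bytes) -> None:
        cache[lba] = data
        dirty.add(lba)          # flushed to the disk(s) later
        # the write I/O is acknowledged here -- faster, but the cached data
        # must be protected (battery module or UPS) until it is flushed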
Tagged Command Queuing:
When this option is enabled, it enhances the overall system performance under multi-tasking operating systems by reordering tasks or requests in the command queue of the RAID system. This function should normally remain enabled.
Controller #1 Fibre Port Mapping: Controller #1 has four 8Gbps Fibre Host Channels (Ports 0, 1, 2, and 3). Select the Fibre Port where the LUN (Volume Set) is to be mapped.
Controller #2 Fibre Port Mapping: Controller #2 has four 8Gbps Fibre Host Channels (Ports 4, 5, 6, and 7). Select the Fibre Port where the LUN (Volume Set) is to be mapped.
NOTE: The default Port mapping is Ports 0 and 4; this provides dual paths to the LUN on both controllers. MPIO must be set up in the host/server.
NOTE: If LUN is mapped to a Fibre Port on one controller only (example: Port 0), the cache mirror will be disabled.
NOTE: If LUN is not mapped to any Fibre Port, then LUN is disabled.
Fibre Channel: LUN Base/MNID: LUN
The controller supports Multiple Node ID (MNID) mode. A possible application is zoning within the arbitrated loop, where the different zones can be represented by the controller’s source IDs.
LUN Base: The base LUN number. Each LUN Base supports 8 LUNs.
LUN: Each Volume Set must be assigned a unique LUN ID number. A Fibre Port can connect up to 128 devices (LUN ID: 0 to 127). Select the LUN ID for the Volume Set.
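One plausible reading of this scheme, stated here as an assumption rather than documented firmware arithmetic, is that bases step by 8 and each base carries 8 LUN slots, giving 16 x 8 = 128 LUN IDs:

    # Assumed addressing for illustration: LUN ID = LUN Base + slot (0-7).
    def effective_lun(lun_base: int, slot: int) -> int:
        assert lun_base % 8 == 0 and 0 <= lun_base <= 120, "base in steps of 8"
        assert 0 <= slot < 8, "each base carries 8 LUN slots"
        return lun_base + slot

    print(effective_lun(8, 3))    # LUN ID 11
    print(effective_lun(120, 7))  # LUN ID 127, the last of 128 devices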
Volumes To Be Created:
Use this option to create several Volume Sets with the same Volume Set attributes. Up to 128 Volume Sets can be created.
5.3.2 Create Raid 30/50/60
To create a Raid 30/50/60 Volume Set, move the mouse cursor to the main menu and click on the Create Raid30/50/60 link. The Select Multiple RaidSet For Raid30/50/60 screen will show all Raid Sets. Tick on the Raid Sets that you want to include in the creation and then click on the Submit button.
NOTE: When creating a Raid 30/50/60 Volume Set, you need to create the Raid Sets first. Up to 8 Raid Sets maximum are supported in Raid 30/50/60. All Raid Sets must contain the same number of disk drives.
Configure the Volume Set attributes (refer to previous section for the Volume Set attributes). When done, tick Confirm The Operation and click on Submit button.
NOTE: Refer to Section 5.3.1 Create Volume Set for detailed information about the Volume Set settings.
5.3.3 Delete Volume Set
To delete a Volume Set, select the Volume Set Functions in the main menu and click on the Delete Volume Set link. The Select The Volume Set To Delete screen will show all available Raid Sets. Tick on a Raid Set and check the Confirm The Operation option, then click on the Submit button to show all Volume Sets in the selected Raid Set. Tick on a Volume Set and check the Confirm The Operation option. Click on the Submit button to delete the Volume Set.
5.3.4 Modify Volume Set
Use this function to modify Volume Set configuration. To modify the attributes of a Volume Set:
1. Click on the Modify Volume Set link.
2. Tick from the list the Volume Set you want to modify. Click on the Submit button. The following screen appears.
To modify Volume Set attribute values, select an attribute item and click on the attribute value. After completing the modification, tick on the Confirm The Operation option and click on the Submit button to save the changes.
5.3.4.1 Volume Set Expansion
Volume Capacity (Logical Volume Concatenation Plus Re-stripe)
Use the Expand Raid Set function to expand a Raid Set when a disk is added to your subsystem (refer to Section 5.2.3).
The expanded capacity can be used to enlarge the Volume Set size or to create another Volume Set. Use the Modify Volume Set function to expand the Volume Set capacity. Select the Volume Set, move the cursor to the Volume Set Capacity item, and enter the capacity size.
Tick on the Confirm The Operation and click on the Submit button to complete the action. The Volume Set starts to expand.
NOTE: The Volume Set capacity of Raid30/50/60 cannot be expanded.
5.3.4.2 Volume Set Migration
Migration occurs when a Volume Set migrates from one RAID level to another, when a Volume Set stripe size changes, or when a disk is added to a Raid Set. Migrating status is displayed in the Volume Set status area of the RaidSet Hierarchy screen during migration.
5.3.5 Check Volume Set
Use this function to perform a Volume Set consistency check, which verifies the correctness of redundant data (data blocks and parity blocks) in a Volume Set. This basically means computing the parity from the data blocks and comparing the results to the contents of the parity blocks, or computing the data from the parity blocks and comparing the results to the contents of the data blocks.
NOTE: The Volume Set state must be Normal in order to perform Check Volume Set. Only RAID levels with parity (redundant data) such as RAID Levels 3, 5, 6, 30, 50, and 60 support this function.
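Conceptually, for a parity-protected stripe the check compares the stored parity with the XOR of the data blocks. A toy Python sketch of that idea (not the firmware’s implementation):

    from functools import reduce

    # Toy consistency check for one RAID 5-style stripe: the parity block
    # should equal the bytewise XOR of all data blocks.
    def parity_ok(data_blocks: list[bytes], parity: bytes) -> bool:
        computed = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                          data_blocks)
        return computed == parity

    data = [b"\x01\x02", b"\x0f\x00", b"\xf0\xff"]
    print(parity_ok(data, b"\xfe\xfd"))  # True: stripe is consistent
    print(parity_ok(data, b"\x00\x00"))  # False: parity error found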
To perform Check Volume Set function:
1. Click on the Check Volume Set link.
2. Tick from the list the Volume Set you want to check. Select the Check Volume Set options.
Check Volume Set Options:
Scrub Bad Block If Bad Block Found, Assume Parity Data is Good
Re-compute Parity If Parity Error, Assume Data is Good
NOTE: When the 2 options are not selected, the function will only check for errors. It is recommended to perform Check Volume Set with the 2 options unselected at first. If the result shows errors, the data must be backed up to safe storage. Then the two options can be selected and Check Volume Set run again to correct the errors.
3. Tick on Confirm The Operation and click on the Submit button. The Checking process will be started.
The checking percentage can also be viewed by clicking on RaidSet Hierarchy in the Information menu.
NOTE: The result of Check Volume Set function is shown in System Events Information and Volume Set Information. In System Events Information, it is shown in the Errors column. In Volume Set Information, it is shown in Errors Found field.
5.3.6 Schedule Volume Check
To perform Check Volume Set on a schedule, follow these steps:
1. Click on the Schedule Volume Check link.
2. Select the desired schedule that you wish the Check Volume Set function to run. Tick on Confirm The Operation and click on the Submit button.
Scheduler: Disabled, 1Day (For Testing), 1Week, 2Weeks, 3Weeks, 4Weeks, 8Weeks, 12Weeks, 16Weeks, 20Weeks and 24Weeks.
Check After System Idle: No, 1 Minute, 3 Minutes, 5 Minutes, 10 Minutes, 15 Minutes, 20 Minutes, 30 Minutes, 45 Minutes and 60 Minutes.
NOTE: To verify the Volume Check schedule, go to Information -> RAID Set Hierarchy -> select the Volume Set -> the Volume Set Information will be displayed.
5.3.7 Stop Volume Check
Use this option to stop all Volume Set consistency checking processes.
5.4 Physical Drive
Choose this option from the Main Menu to select a disk drive and to perform the functions listed below.
5.4.1 Create Pass-Through Disk
A Pass-Through Disk is a disk drive not controlled by the internal RAID subsystem firmware and thus cannot be part of a Volume Set. A Pass-Through disk is a separate and individual Raid Set. The disk is available to the host as an individual disk. It is typically used on a system where the operating system is on a disk not controlled by the RAID firmware.
To create a pass-through disk, click on the Create Pass-Through link under the Physical Drives main menu. The setting function screen appears.
Select the disk drive to be made a Pass-Through Disk and configure the Pass-Through Disk attributes, such as the Cache Mode, Tagged Command Queuing, Fibre Port Mapping, and Fibre Channel: LUN Base/MNID: LUN for this volume.