OpenEye GraniteRack 3U User Manual

GraniteRack 3U Chassis
28080AB
Digital Storage System
User Manual
OE-GRANITE3U
Please carefully read these instructions before using this product.
Save this manual for future use.
Copyright
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written consent.
Trademarks
All products and trade names used in this document are trademarks or registered trademarks of their respective holders.
Changes
The material in this document is for information only and is subject to change without notice.
FCC Compliance Statement
This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference in residential installations. This equipment generates, uses, and can radiate radio frequency energy, and if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation. If this equipment does cause interference to radio or television reception, which can be determined by turning the equipment off and on, the user is encouraged to try to correct the interference by one or more of the following measures:
1. Reorient or relocate the receiving antenna
2. Move the equipment away from the receiver
3. Plug the equipment into an outlet on a circuit different from that to which the receiver is connected.
4. Consult the dealer or an experienced radio/television technician for help
All external connections should be made using shielded cables.
About This Manual
Welcome to your Redundant Array of Independent Disks System User's Guide. This manual covers everything you need to know to install and configure your RAID system. It assumes that you are familiar with the basic concepts of RAID technology.
Chapter 1 Introduction
Introduces you to Disk Array’s features and general technology concepts.
Chapter 2 Getting Started
Helps you identify parts of the Disk Array and prepare the hardware for configuration.
Chapter 3 Configuring
Quick Setup
Provides a simple way to setup your Disk Array.
Customizing Setup
Provides step-by-step instructions to help you set up or reconfigure your Disk Array.
Chapter 4 Array Maintenance
Adding Cache Memory
Provides a detailed procedure for increasing cache memory above the default 128MB.
Updating Firmware
Provides step-by-step instructions to help you update the firmware to the latest version.
Hot Swap Components
Describes all hot swap modules on the Disk Array and provides detailed procedures for replacing them.
It includes the following information:
Table of Contents
Chapter 1 Introduction
1.1 Key Features..........................................................................................................
1.2 RAID Concepts......................................................................................................
1.3 SCSI Concepts......................................................................................................
1.3.1 Multiple SCSI Format Support..................................................................
1.3.2 Host SCSI ID Selection..............................................................................
1.3.3 Terminators..................................................................................................
1.4 Array Definition.......................................................................................................
1.4.1 RAID set........................................................................................................
1.4.2 Volume Set...................................................................................................
1.4.3 Ease of Use Features..................................................................................
1.4.4 High Availability............................................................................................
Chapter 2 Getting Started
2.1 Unpacking the subsystem.........................................................................................
2.2 Identifying Parts of the subsystem.....................................................................
2.2.1 Front View......................................................................................................
2.2.2 Rear View.....................................................................................................
2.3 Connecting to Host...............................................................................................
2.4 SCSI Termination..................................................................................................
2.5 Powering-on the subsystem..............................................................................
2.6 Install Hard Drives................................................................................................
2.7 Connecting UPS...................................................................................................
2.8 Connecting to PC or Terminal............................................................................
Chapter 3 Configuring
3.1 Configuring through a Terminal..............................................................................
3.2 Configuring the Subsystem Using the LCD Panel.........................................
3.3 Menu Diagram.......................................................................................................
3.4 Web browser-based Remote RAID management via R-Link ethernet.......
3.5 Quick Create..........................................................................................................
3.6 Raid Set Functions...............................................................................................
3.6.1 Create Raid Set..........................................................................................
3.6.2 Delete Raid Set............................................................................................
3.6.3 Expand Raid Set...........................................................................................
3.6.4 Activate Incomplete Raid Set...................................................................
3.6.5 Create Hot Spare........................................................................................
3.6.6 Delete Hot Spare.........................................................................................
3.6.7 Rescue Raid Set..........................................................................................
3.7 Volume Set Functions.................................................................................................
3.7.1 Create Volume Set......................................................................................
3.7.2 Delete Volume Set......................................................................................
3.7.3 Modify Volume Set........................................................................................
3.7.3.1 Volume Expansion.......................................................................
3.7.4 Volume Set Migration..................................................................................
3.7.5 Check Volume Set........................................................................................
3.7.6 Stop Volume Set Check..............................................................................
3.8 Physical Drive..........................................................................................................
3.8.1 Create Pass-Through Disk........................................................................
3.8.2 Modify Pass-Through Disk.........................................................................
3.8.3 Delete Pass-Through Disk........................................................................
3.8.4 Identify Selected Drive.................................................................................
3.9 System Configuration...........................................................................................
3.9.1 System Configuration.................................................................................
3.9.2 U320 SCSI Target Configuration...............................................................
3.9.3 Ethernet Config................................................................................................
3.9.4 Alert By Mail Config......................................................................................
3.9.5 SNMP Configuration.........................................................................................
3.9.6 View Events.....................................................................................................
3.9.7 Generate Test Events.................................................................................
3.9.8 Clear Events Buffer......................................................................................
3.9.9 Modify Password..........................................................................................
3.9.10 Upgrade Firmware.........................................................................................
3.10 Information Menu....................................................................................................
3.10.1 RaidSet Hierarchy.....................................................................................
3.10.2 System Information..................................................................................
3.10.3 Hardware Monitor......................................................................................
3.11 Creating a new RAID or Reconfiguring an Existing RAID..............................
Chapter 4 Array Maintenance
4.1 Memory Upgrades................................................................................................
4.1.1 Installing Memory Module.........................................................................
4.2 Upgrading the Firmware.....................................................................................
4.3 Hot Swap components........................................................................................
4.3.1 Replacing a disk.........................................................................................
4.3.2 Replacing a Power Supply........................................................................
4.3.3 Replacing a Fan..........................................................................................
Appendix A Technical Specification...................................................
Chapter 1
Introduction
The RAID subsystem is an Ultra 320 LVD SCSI-to-Serial ATA II RAID (Redundant Array of Independent Disks) disk array subsystem. It consists of a RAID disk array controller and sixteen (16) disk trays.
The subsystem is a "Host Independent" RAID subsystem supporting RAID levels 0, 1, 3, 5, 6, 0+1 and JBOD. Regardless of the RAID level the subsystem is configured for, each RAID array consists of a set of disks that appears to the user as a single large disk.
One unique feature of these RAID levels is that data is spread across separate disks as a result of the redundant manner in which data is stored in a RAID array. If a disk in the RAID array fails, the subsystem continues to function without any risk of data loss, because redundant information is stored separately from the data. This redundant information is then used to reconstruct any data that was stored on the failed disk.
The subsystem is also equipped with an environment controller capable of accurately monitoring the internal environment of the subsystem, including its power supplies, fans, temperatures and voltages. The disk trays allow you to install any type of 3.5-inch hard drive. Its modular design allows hot-swapping of hard drives without interrupting the subsystem's operation.
1.1 Key Features
Subsystem Features:
Features an Intel 80321 64-bit RISC I/O processor
Built-in 128MB cache memory, expandable up to 1024MB
Ultra 320 LVD host port
Smart-function LCD panel
Supports up to sixteen (16) 1" hot-swappable Serial ATA II hard drives
Redundant load-sharing hot-swappable power supplies
High-quality advanced cooling fans
Local audible event notification alarm
Supports password protection and UPS connection
Built-in R-Link LAN port interface for remote management and event notification
Dual host channels support clustering technology
Aluminum chassis: aluminum is an excellent thermal conductor and offers a unique combination of light weight and high strength
Real-time drive activity and status indicators

RAID Function Features:

Supports RAID levels 0, 1, 0+1, 3, 5, 6 and JBOD
Supports hot spare and automatic hot rebuild
Allows online capacity expansion within the enclosure
Tagged command queuing for 256 commands, allowing overlapped data streams
Transparent data protection for all popular operating systems
Bad block auto-remapping
Supports multiple array enclosures per host connection
Multiple RAID selection
Array roaming
Online RAID level migration
1.2 RAID Concepts
RAID Fundamentals
The basic idea of RAID (Redundant Array of Independent Disks) is to combine multiple inexpensive disk drives into an array of disk drives to obtain performance, capacity and reliability that exceeds that of a single large drive. The array of drives appears to the host computer as a single logical drive.
Six types of array architectures, RAID 1 through RAID 6, were originally defined; each provides disk fault tolerance with different compromises in features and performance. In addition to these redundant array architectures, it has become popular to refer to a non-redundant array of disk drives as a RAID 0 array.
Disk Striping
Fundamental to RAID technology is striping. This is a method of combining multiple drives into one logical storage unit. Striping partitions the storage space of each drive into stripes, which can be as small as one sector (512 bytes) or as large as several megabytes. These stripes are then interleaved in a rotating sequence, so that the combined space is composed alternately of stripes from each drive. The specific type of operating environment determines whether large or small stripes should be used.
Most operating systems today support concurrent disk I/O operations across multiple drives. However, in order to maximize throughput for the disk subsystem, the I/O load must be balanced across all the drives so that each drive can be kept busy as much as possible. In a multiple drive system without striping, the disk I/O load is never perfectly balanced. Some drives will contain data files that are frequently accessed and some drives will rarely be accessed.
By striping the drives in the array with stripes large enough so that each record falls entirely within one stripe, most records can be evenly distributed across all drives. This keeps all drives in the array busy during heavy load situations. This situation allows all drives to work concurrently on different I/O operations, and thus maximize the number of simultaneous I/O operations that can be performed by the array.
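The stripe mapping described above can be sketched in a few lines of code. This is only an illustrative sketch, not the subsystem's firmware; the function name and parameters are hypothetical, chosen to mirror the description of rotating stripes across drives.

```python
# Hypothetical illustration of disk striping: map a logical block address
# onto (drive, block-on-drive) for an array with rotating stripes.

def locate_block(logical_block: int, num_drives: int, stripe_size_blocks: int):
    """Return (drive_index, block_offset_on_drive) for a striped array."""
    stripe_number = logical_block // stripe_size_blocks    # which stripe row
    offset_in_stripe = logical_block % stripe_size_blocks  # position inside the stripe
    drive_index = stripe_number % num_drives               # rotate across drives
    # Full stripe rows already completed on this drive, plus the offset:
    block_on_drive = (stripe_number // num_drives) * stripe_size_blocks + offset_in_stripe
    return drive_index, block_on_drive

# With 4 drives and 128-block stripes, consecutive logical blocks fill one
# stripe on drive 0, then move to drive 1, and so on in rotation.
assert locate_block(0, 4, 128) == (0, 0)
assert locate_block(128, 4, 128) == (1, 0)
assert locate_block(512, 4, 128) == (0, 128)
```

A record that falls entirely within one stripe touches only one drive, which is why large stripes let independent I/O requests proceed concurrently on different drives.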
Definition of RAID Levels
RAID 0 is typically defined as a group of striped disk drives without parity or data redundancy. RAID 0 arrays can be configured with large stripes for multi-user environments or small stripes for single-user systems that access long sequential records. RAID 0 arrays deliver the best data storage efficiency and performance of any array type. The disadvantage is that if one drive in a RAID 0 array fails, the entire array fails.
RAID 1, also known as disk mirroring, is simply a pair of disk drives that store duplicate data but appear to the computer as a single drive. Although striping is not used within a single mirrored drive pair, multiple RAID 1 arrays can be striped together to create a single large array consisting of pairs of mirrored drives. All writes must go to both drives of a mirrored pair so that the information on the drives is kept identical. However, each individual drive can perform simultaneous, independent read operations. Mirroring thus doubles the read performance of a single non-mirrored drive while the write performance is unchanged. RAID 1 delivers the best performance of any redundant array type. In addition, there is less performance degradation during drive failure than in RAID 5 arrays.
RAID 3 sector-stripes data across groups of drives, but one drive in the group is dedicated to storing parity information. RAID 3 relies on the embedded ECC in each sector for error detection. In the case of drive failure, data recovery is accomplished by calculating the exclusive OR (XOR) of the information recorded on the remaining drives. Records typically span all drives, which optimizes the disk transfer rate. Because each I/O request accesses every drive in the array, RAID 3 arrays can satisfy only one I/O request at a time. RAID 3 delivers the best performance for single-user, single-tasking environments with long records. Synchronized-spindle drives are required for RAID 3 arrays in order to avoid performance degradation with short records. RAID 5 arrays with small stripes can yield similar performance to RAID 3 arrays.
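The XOR recovery just described can be demonstrated in a few lines. This is a minimal sketch under stated assumptions (equal-sized byte strings stand in for disk stripes); it is illustrative code, not the controller's implementation.

```python
# Hypothetical demonstration of XOR parity recovery as used by RAID 3/5:
# the parity block is the XOR of all data blocks, so any single lost block
# can be rebuilt by XOR-ing the survivors with the parity.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]  # three data "drives"
parity = xor_blocks(data)                        # stored on the parity drive

# Simulate losing drive 1: XOR the surviving data blocks with the parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

Because XOR is its own inverse, the same operation both generates the parity and reconstructs a missing block, which is why a single drive failure costs no data.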
RAID 5 distributes parity information across all the drives. Since there is no dedicated parity drive, all drives contain data and read operations can be overlapped on every drive in the array. Write operations will typically access one data drive and one parity drive. However, because different records store their parity on different drives, write operations can usually be overlapped.
RAID 6 is similar to RAID 5 in that data protection is achieved by writing parity information to the physical drives in the array. With RAID 6, however, two sets of parity data are used. These two sets are different, and each set occupies a capacity equivalent to that of one of the constituent drives. The main advantage of RAID 6 is high data availability: any two drives can fail without loss of critical data.
Dual-level RAID achieves a balance between the increased data availability inherent in RAID 1 and RAID 5 and the increased read performance inherent in disk striping (RAID 0). These arrays are sometimes referred to as RAID 0+1 or RAID 10, and RAID 0+5 or RAID 50.

In summary:
RAID 0 is the fastest and most efficient array type but offers no fault tolerance. RAID 0 requires a minimum of two drives.

RAID 1 is the best choice for performance-critical, fault-tolerant environments. RAID 1 is the only choice for fault tolerance if no more than two drives are used.

RAID 3 can be used to speed up data transfer and provide fault tolerance in single-user environments that access long sequential records. However, RAID 3 does not allow overlapping of multiple I/O operations and requires synchronized-spindle drives to avoid performance degradation with short records. RAID 5 with a small stripe size offers similar performance.

RAID 5 combines efficient, fault-tolerant data storage with good performance characteristics. However, write performance and performance during drive failure are slower than with RAID 1. Rebuild operations also require more time than with RAID 1 because parity information must also be reconstructed. At least three drives are required for RAID 5 arrays.

RAID 6 is essentially an extension of RAID level 5 that allows for additional fault tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped on a block level across a set of drives, just as in RAID 5, and a second set of parity is calculated and written across all the drives. RAID 6 provides extremely high data fault tolerance and can sustain multiple simultaneous drive failures, making it well suited to mission-critical applications.
RAID Management
The subsystem can implement several different levels of RAID technology. RAID levels supported by the subsystem are shown below.
RAID Level   Description                                                       Min Drives
0            Block striping is provided, which yields higher performance           1
             than with individual drives. There is no redundancy.
1            Drives are paired and mirrored. All data is 100% duplicated           2
             on an equivalent drive. Fully redundant.
3            Data is striped across several physical drives. Parity                3
             protection is used for data redundancy.
5            Data is striped across several physical drives. Parity                3
             protection is used for data redundancy.
6            Data is striped across several physical drives. Parity                4
             protection is used for data redundancy. Requires N+2 drives
             because of the two-dimensional parity scheme.
0+1          Combination of RAID levels 0 and 1. This level provides               4
             striping and redundancy through mirroring.
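Assuming drives of equal capacity, the redundancy rules in the table above determine how much raw capacity remains usable. The helper below is a hypothetical illustration of that arithmetic, not part of the subsystem's software.

```python
# Hypothetical sketch: usable capacity per RAID level for n equal drives.

def usable_capacity(level: str, n: int, drive_gb: float) -> float:
    if level == "0":
        return n * drive_gb            # striping, no redundancy
    if level == "1":
        return drive_gb                # mirrored pair
    if level in ("3", "5"):
        return (n - 1) * drive_gb      # one drive's worth of parity
    if level == "6":
        return (n - 2) * drive_gb      # two independent parity sets
    if level == "0+1":
        return (n // 2) * drive_gb     # half the drives mirror the other half
    raise ValueError(f"unsupported RAID level: {level}")

# Sixteen 500 GB drives:
assert usable_capacity("5", 16, 500) == 7500.0   # one drive lost to parity
assert usable_capacity("6", 16, 500) == 7000.0   # two drives lost to parity
```

The same arithmetic explains the minimum-drive column: a parity level needs at least one data drive beyond its parity overhead before any usable capacity remains.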
1.3 SCSI Concepts
Before configuring the subsystem, you must first understand some basic SCSI concepts so that the subsystem and SCSI devices will function properly.
1.3.1 Multiple SCSI Format Support
The subsystem supports the SCSI interface standards listed below. Note that the data bit and cable length restrictions must be followed.
SCSI Type             Data Bit   Data Rate    Cable Length
SCSI-1                8 Bits     5 MB/Sec     6 m
Fast SCSI             8 Bits     10 MB/Sec    3 m
Fast Wide SCSI        16 Bits    20 MB/Sec    3 m
Ultra SCSI            8 Bits     20 MB/Sec    1.5 m
Ultra Wide SCSI       16 Bits    40 MB/Sec    1.5 m
Ultra 2 SCSI          8 Bits     40 MB/Sec    12 m
Ultra 2 Wide SCSI     16 Bits    80 MB/Sec    12 m
Ultra 160 Wide LVD    16 Bits    160 MB/Sec   12 m
Ultra 320 LVD         16 Bits    320 MB/Sec   12 m
1.3.2 Host SCSI ID Selection
A SCSI ID is an identifier assigned to SCSI devices which enables them to communicate with a computer when they are attached to a host adapter via the SCSI bus. Each SCSI device, and the host adapter itself, must have a SCSI ID number (Ultra 320 Wide SCSI = 0 to 15). The ID defines each SCSI device on the SCSI bus. If there is more than one SCSI adapter in the host system, each adapter forms a separate SCSI bus. SCSI IDs can be reused as long as the ID is assigned to a device on a separate SCSI bus. Refer to the documentation that came with your peripheral device to determine the ID and how to change it. The subsystem must be assigned a unique SCSI ID ranging from 0 to 15 for the Ultra 320 LVD SCSI host system. The default value is ID 0.
1.3.3 Terminators
Based on SCSI specifications, the SCSI bus must be terminated at both ends, meaning the devices that are connected to the ends of the SCSI bus must have their bus terminators enabled. Devices connected in the middle of the SCSI bus must have their terminators disabled. Proper termination allows data and SCSI commands to be transmitted reliably on the SCSI bus. The host adapter and the SCSI devices attached to it must be properly terminated, or they will not work reliably.
Termination means that terminators are installed in the devices at each end of the bus. Some SCSI devices require you to manually insert or remove the terminators. Other devices have built-in terminators that are enabled or disabled via switches or software commands. Refer to the device's documentation on how to enable or disable termination.
If your RAID subsystem is the last device on the SCSI bus, attach the terminator included in the package to the Host Channel A & B Out port before using the subsystem.
1.4 Array Definition
1.4.1 RAID Set
A RAID Set is a group of disks containing one or more volume sets. It has the following features in the RAID subsystem controller:
1. Up to sixteen RAID Sets are supported per RAID subsystem controller.
2. From one to sixteen drives can be included in an individual RAID Set.
3. Multiple RAID Sets cannot occupy the same disks. A Volume Set must be created either on an existing RAID Set or on a group of available individual disks (disks that are not yet part of a RAID Set). If there are pre-existing RAID Sets with available capacity and enough disks for the specified RAID level desired, then the Volume Set will be created in the existing RAID Set of the user's choice. If physical disks of different capacities are grouped together in a RAID Set, then the capacity of the smallest disk will become the effective capacity of every disk in the RAID Set.
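The smallest-disk rule above is simple arithmetic; the snippet below is a purely hypothetical illustration of it, not the controller's logic.

```python
# Hypothetical sketch of the effective-capacity rule: mixed-size drives in a
# RAID Set all contribute only as much as the smallest member.

def raid_set_capacity_gb(drive_sizes_gb):
    """Effective raw capacity of a RAID Set with mixed drive sizes."""
    return min(drive_sizes_gb) * len(drive_sizes_gb)

# Two 500 GB drives plus one 750 GB drive: the larger drive is truncated
# to 500 GB, so 250 GB of its capacity goes unused.
assert raid_set_capacity_gb([500, 500, 750]) == 1500
```

For this reason it is most efficient to build each RAID Set from drives of identical capacity.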
1.4.2 Volume Set
A Volume Set is seen by the host system as a single logical device. It is organized in a RAID level with one or more physical disks. RAID level refers to the level of data performance and protection of a Volume Set. A Volume Set capacity can consume all or a portion of the disk capacity available in a RAID Set. Multiple Volume Sets can exist on a group of disks in a RAID Set. Additional Volume Sets created in a specified RAID Set will reside on all the physical disks in the RAID Set. Thus each Volume Set on the RAID Set will have its data spread evenly across all the disks in the RAID Set.
Volume Sets of different RAID levels may coexist on the same RAID Set. For example, Volume 1 can be assigned a RAID 5 level of operation while Volume 0 might be assigned a RAID 0+1 level of operation.
1.4.3 Ease of Use Features
1.4.3.1 Instant Availability/Background Initialization
RAID 0 and RAID 1 Volume Sets can be used immediately after creation, but RAID 3, 5 and 6 Volume Sets must be initialized to generate parity. In Normal Initialization, the initialization proceeds as a background task and the Volume Set is fully accessible for system reads and writes. The operating system can access the newly created arrays instantly, without requiring a reboot or waiting for initialization to complete. Furthermore, the RAID Volume Set is also protected against a single disk failure while initializing. In Fast Initialization, the initialization must be completed before the Volume Set is ready for system access.
1.4.3.2 Array Roaming
The RAID subsystem stores configuration information both in NVRAM and on the disk drives. This protects the configuration settings in the case of a disk drive or controller failure. Array roaming gives administrators the ability to move a complete RAID Set to another system without losing the RAID configuration and data on that RAID Set. If a server fails, the RAID Set disk drives can be moved to another server and inserted in any order.
1.4.3.3 Online Capacity Expansion
Online Capacity Expansion makes it possible to add one or more physical drives to a Volume Set while the server is in operation, eliminating the need to back up and restore after reconfiguring the RAID Set. When disks are added to a RAID Set, unused capacity is added to the end of the RAID Set. Data on the existing Volume Sets residing on that RAID Set is redistributed evenly across all the disks, and a contiguous block of unused capacity is made available on the RAID Set. This unused capacity can be used to create additional Volume Sets. The expansion process is illustrated in the following figure.
The RAID subsystem controller redistributes the original Volume Set over the original and newly added disks, using the same fault-tolerance configuration. The unused capacity on the expanded RAID Set can then be used to create additional Volume Sets, with a different fault-tolerance setting if the user needs to change it.
1.4.3.4 Online RAID Level and Stripe Size Migration
Users can migrate both the RAID level and stripe size of an existing Volume Set while the server is online and the Volume Set is in use. Online RAID level/stripe size migration can prove helpful during performance tuning as well as when additional physical disks are added to the RAID subsystem. For example, in a system using two drives in RAID level 1, you could add capacity and retain fault tolerance by adding one drive. With the addition of a third disk, you have the option of adding it to your existing RAID logical drive and migrating from RAID level 1 to 5. The result would be parity fault tolerance and double the available capacity without taking the system offline.
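The two-drive RAID 1 to three-drive RAID 5 example above can be checked with quick back-of-envelope arithmetic; the snippet below is purely illustrative and assumes equal-size drives.

```python
# Hypothetical check of the migration example: RAID 1 on two drives yields
# one drive of usable capacity; RAID 5 on three drives yields two.

def usable_gb(level: str, n: int, drive_gb: int) -> int:
    if level == "1":
        return drive_gb          # mirrored pair: one drive's capacity
    if level == "5":
        return (n - 1) * drive_gb  # one drive's worth of parity
    raise ValueError("only RAID 1 and 5 shown here")

before = usable_gb("1", 2, 250)  # two 250 GB drives mirrored
after = usable_gb("5", 3, 250)   # third drive added, migrated to RAID 5
assert after == 2 * before       # capacity doubles, parity protection retained
```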
1.4.4 High Availability
1.4.4.1 Creating Hot Spares
A hot spare drive is an unused, online, available drive that is ready to replace a failed disk drive. In a RAID level 1, 0+1, 3, 5 or 6 RAID Set, any unused online drive installed but not belonging to a RAID Set can be defined as a hot spare drive. Hot spares permit you to replace failed drives without powering down the system. When the RAID subsystem detects a UDMA drive failure, the system automatically and transparently rebuilds using a hot spare drive. The RAID Set will be reconfigured and rebuilt in the background while the RAID subsystem continues to handle system requests. During the automatic rebuild process, system activity will continue as normal; however, system performance and fault tolerance will be affected.
Important: The hot spare must have at least the same or greater capacity as the drive it replaces.
1.4.4.2 Hot-Swap Disk Drive Support
The RAID subsystem has a built-in protection circuit that supports the replacement of UDMA hard disk drives without having to shut down or reboot the system. The removable hard drive trays deliver "hot swappable," fault-tolerant RAID solutions at prices much lower than the cost of conventional SCSI hard disk RAID subsystems. This feature gives the subsystem advanced fault-tolerant RAID protection and "online" drive replacement.
1.4.4.3 Hot-Swap Disk Rebuild
A hot-swap function can be used to rebuild disk drives in arrays with data redundancy, such as RAID levels 1, 0+1, 3, 5 and 6. If a hot spare is not available, the failed disk drive must be replaced with a new disk drive so that the data on the failed drive can be rebuilt. If a hot spare is available, the rebuild starts automatically when a drive fails. The RAID subsystem automatically and transparently rebuilds failed drives in the background at user-definable rebuild rates. The RAID subsystem will automatically restart the system and the rebuild if the system is shut down or powered off abnormally during a reconstruction procedure. When a disk is hot-swapped, although the system is functionally operational, it may no longer be fault tolerant. Fault tolerance will be lost until the removed drive is replaced and the rebuild operation is completed.
Chapter 2
Getting Started
Getting started with the subsystem consists of the following steps:
Unpack the storage subsystem.
Identify the parts of the subsystem.
Connect the SCSI cables.
Terminate the SCSI bus.
Power on the subsystem.
Install the hard drives.
2.1 Unpacking the Subsystem
Before continuing, unpack the subsystem and verify that the contents of the shipping carton are all present and in good condition. Before removing the subsystem from the shipping carton, visually inspect the physical condition of the carton. Exterior damage to the shipping carton may indicate that the contents of the carton are damaged. If any damage is found, do not remove the components; contact the dealer where the subsystem was purchased for further instructions.
The package contains the following items:
RAID subsystem unit
Three power cords
Two external SCSI cables
One external null modem cable
One external UPS cable
One RJ-45 ethernet cable
Two Active LVD/SE terminators
Installation Reference Guide
Spare screws, etc.
If any of these items are missing or damaged, please contact your dealer or sales representative for assistance.
2.2 Identifying Parts of the Subsystem
The illustrations below identify the various features of the subsystem. Familiarize yourself with these terms; it will help as you read the following sections.
2.2.1 Front View
1. HDD Status Indicator
HDD status LED: A green LED indicates power is on and the hard drive in this slot is healthy. If there is no hard drive, the LED is red. If the hard drive in this slot is defective or has failed, the LED is orange.
HDD access LED: This LED blinks blue when the hard drive is being accessed.
2. HDD trays 1 ~ 16 (From right to left)
3. Smart Function Panel - Function Keys
Activity LED: A blinking blue LED indicates the controller is active.
4. LCD display panel
5. Smart Function Panel - Function Keys for RAID configuration
The smart LCD panel is where you configure the RAID subsystem. If you are configuring the subsystem using the LCD panel, press the controller buttons to configure your RAID subsystem.
Up and Down arrow buttons: Use these to scroll through the information on the LCD screen and to move between menus when configuring the subsystem.
Select button: Press this to enter the option you have selected.
Exit button: Press this to return to the previous menu.
6. Environment status
Voltage warning LED: This LED turns red and an alarm sounds when a voltage abnormality is detected.
Over temp LED: If a temperature irregularity occurs (HDD slot temperature over 55°C), this LED turns red and an alarm sounds.
Fan fail LED: When a fan's rotation speed drops below 2600 rpm, this LED turns red and an alarm sounds.
Power fail LED: If a redundant power supply fails, this LED turns red and an alarm sounds.
Power LED: A green LED indicates power is on.
7. Tray Lever
8. Tray Latch
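The alarm thresholds described above (HDD slot temperature over 55°C, fan speed below 2600 rpm, redundant power supply failure) can be modeled in a short sketch. This is purely illustrative; the function name and return values are assumptions for this example, not the subsystem's firmware interface.

```python
# Illustrative model of the front-panel environment warnings described
# above. Thresholds come from the manual; the function itself is a
# hypothetical sketch, not a firmware API.

TEMP_LIMIT_C = 55    # HDD slot temperature limit
FAN_MIN_RPM = 2600   # minimum acceptable fan rotation speed

def check_environment(hdd_temp_c, fan_rpm, psu_ok):
    """Return the list of warning LEDs that would turn red (with alarm)."""
    warnings = []
    if hdd_temp_c > TEMP_LIMIT_C:
        warnings.append("Over temp LED")
    if fan_rpm < FAN_MIN_RPM:
        warnings.append("Fan fail LED")
    if not psu_ok:
        warnings.append("Power fail LED")
    return warnings

print(check_environment(60, 2500, True))  # ['Over temp LED', 'Fan fail LED']
```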
2.2.2 Rear View
1. Host Channel A
The subsystem is equipped with two host channels (Host channel A and Host channel B). Each host channel has two 68-pin SCSI connectors at the rear of the subsystem for SCSI in and out.
2. Host Channel B
Connects to the host's SCSI adapter or other devices.
3. R-Link Port: Remote link through RJ-45 Ethernet for remote management
The subsystem is equipped with one 10/100 Ethernet RJ-45 LAN port. Use a web browser to manage the RAID subsystem through the Ethernet port for remote configuration and monitoring.
Link LED: A green LED indicates the Ethernet link is up.
Access LED: This LED blinks orange when the 100 Mbps Ethernet port is being accessed.
4. Uninterrupted Power Supply (UPS) Port
The subsystem may come with an optional UPS port allowing you to connect a UPS device. Connect the cable from the UPS device to the UPS port located at the rear of the subsystem. This automatically allows the subsystem to use the functions and features of the UPS.
5. Monitor Port
The subsystem is equipped with a serial monitor port allowing you to connect a PC or terminal.
6. AC power input socket 1 ~ 3 (From left to right)
7. Power Supply Unit 1 ~ 3 (From left to right)
Three power supplies (power supply 1, power supply 2 and power supply 3) are located at the rear of the subsystem. Turn on these power supplies to power on the subsystem. The "power" LED on the front panel will turn green.
If a power supply fails to function or a power supply was not turned on, the Power fail LED will turn red and an alarm will sound. An error message will also appear on the LCD screen warning of the power failure.
8. Power Supply Unit on / off switch
9. System power on / off switch
10. Power Supply Fail indicator
If a power supply fails, this LED will turn red.
11. Power Supply Power On Indicator
Green LED indicates power is on.
12. Cooling Fan module 1 ~ 2 (From left to right)
Two blower fans are located at the rear of the subsystem. They provide sufficient airflow and heat dispersion inside the chassis. If a fan fails to function, the Fan fail LED will turn red and an alarm will sound. An error message will also appear on the LCD screen warning you of the fan failure.
14. Fan Fail indicator
If a fan fails, this LED will turn red.