Product features and specifications described in this manual are subject to
change without notice.
The manufacturer shall not be liable for any damage or loss of information resulting
from the performance or use of the information contained herein.
Trademarks
Accusys and the names of Accusys products and logos referenced herein are
either trademarks and/or service marks or registered trademarks and/or service
marks of Accusys, Inc.
Microsoft, Windows, Windows NT, Windows 2000, Windows 2003, and MS-DOS are either
trademarks or registered trademarks of Microsoft Corporation. Intel and Pentium
are registered trademarks of Intel Corporation. Other product and company
names mentioned herein may be trademarks and/or service marks of their
respective owners.
All contents of this manual are copyrighted by Accusys, Inc.
The information contained herein is the exclusive property of Accusys, Inc. and
shall not be copied, transferred, photocopied, translated on paper, film, electronic
media or computer-readable form, or otherwise reproduced in any way, without
the explicit written permission of Accusys, Inc.
Congratulations on your purchase of the product. This controller allows
you to control your RAID system through a user-friendly GUI, which is
accessed through your web browser.
This manual is designed and written for users installing and using the RAID
controller. The user should have a good working knowledge of RAID
planning and data storage.
Symbols used in this manual
This manual highlights important information with the following icons:
Caution
This icon indicates the existence of a potential hazard that could
result in personal injury, damage to your equipment or loss of data if
the safety instruction is not observed.
Note
This icon indicates useful tips on getting the most from your RAID
controller.
Company Contact
Accusys, Inc.
•5F., No.38, Taiyuan St., Jhubei City, Hsinchu County 30265, Taiwan (R.O.C.)
•Tel: +886-3-560-0288
•Fax: +886-3-560-0299
•http://www.accusys.com.tw/
•E-mail: sales@accusys.com.tw
Accusys U.S.A., Inc.
•1321 W. Foothill Blvd., Azusa, CA 91702
•Tel: +1-510-661-0800
•Fax: +1-510-661-9800
•http://www.accusys.com.tw
•E-mail: Maggie@accusys.com.tw
Accusys Korea, Inc.
•Baegang B/D 5F Shinsa-Dong 666-14 Kangnam-Gu, Seoul, Korea
Revision History

Version 1.01 (released 2007/01/24)
•2.2.2 Added detailed information of information icons shown in Monitor Mode.
•2.2.3 Added detailed information of components shown in Monitor Mode.
•2.4.2 Removed the restrictions on the number of spare disks for quick setup.
•2.5.1 Added a note for the Disk Cache field shown in [RAID Management] > [Hard Disks].
•2.6.1/2.6.2/2.6.3/2.6.4/2.6.6 Modified the contents for the Schedule option.
•2.6.8 Added the contents for the Schedule option.
•2.7.1 Added a caution for the boot-up delay time.
•2.9.2 Added a note for the NVRAM configuration.
•2.9.3 Modified the descriptions for the DHCP method. Added the Authentication option for the SMTP server configuration.
•Appendix C: Updated event log messages.

Version 1.1 (released 2007/02/26)
•Changed all name lengths from characters to bytes.
•Modified the descriptions for the ‘Force to delete LUN mapping(s)’ option.
•Changed ‘RAID array’ to ‘array’.
•1.1 Updated key features.
•1.3 Modified the volume definition; added the SSL definition.
•2.1.1 Added the browser language setting.
•2.1.2 Added the multiple system viewer.
•2.2.1 Updated Figure 2-5.
•2.3 Added the SAS enclosure display.
•2.4 Added Figure 2-10 (Overview screen) and modified the related descriptions.
•2.5.2 Modified the hard disk state for quick setup.
•2.6.1 Added one category, mode, and its definition. Added a note for the Modify button.
•2.6.4 Added options to the LD read algorithm.
•2.6.6 Updated Figure 2-11.
•2.7.5 Modified the note for LD shrink.
•2.7.6 Added expanding volumes.
•2.7.7 Added shrinking volumes.
•2.7.11 Added the contents for the Schedule option.
•2.7.13 Added a note for the Force to recover disk option.
•2.7.14 Added DST to the scheduled task.
•2.7.15 Added spare restore control and task notify.
•2.8.1 Added disk standby mode; added the range for the Delay Time When Boot-Up option.
•2.8.2 Added the connection mode displayed on the FC ports page; added the configuration steps.
•2.9.1 Added modifying event receivers.
•2.9.2 Added modifying SNMP servers.
•2.9.3 Added descriptions for the event log file.
•2.10.1 Modified the hard disk states for the ‘Erase configurations on HDD(s)’ option.
•2.10.2 Updated Figures 2-18, 2-19, 2-20, and 2-21 and modified the related descriptions.
•2.10.5 Added the SSL setting.
•2.10.7 Modified battery information.
•2.10.9 Added descriptions for the regular system shutdown procedure.
•2.10.10 Added Miscellaneous; moved the ‘GUI refresh rate’ option to this section.
•2.11 Modified the descriptions related to the Reset button.
•2.11.4 Added a note to explain the displayed information in the list.
•3.2.3 Added UPS off emergency information.
•3.2.5 Added hotkeys.
•Chapter 4: Updated CLI commands.
•Appendix C: Updated event log messages.

Version 1.2 (released 2007/07/15)
•1.3 Modified descriptions related to logical disk expansion and logical disk shrink.
•2.2.2 Modified descriptions related to the information icons.
•2.2.3 Modified the descriptions and picture related to the rear side of the RAID system, including an added SAS controller picture; added a component to Table 2-5.
•2.6.1 Added descriptions related to the disk identify option of Modify.
•2.6.6 Modified descriptions related to the WWN setting; added a SAS Address setting for the symmetric and selective methods.
•2.9.4 Added smart-UPS support information.
•2.10.8 Modified descriptions related to the external enclosure F/W.
•2.10.10 Added the memory testing when boot-up option in Miscellaneous.
•2.11 Removed screens 2-22, 2-23, and 2-24.
•3.2.1 Modified disk status.
•3.2.5 Added an ESC button function to Hotkeys.
•4.1 Added descriptions for SSH information.
•Appendix D: Added the PathGuard MPIO Utility.
•Appendix E: Added the DiskPart Utility.

Version 1.3 (released 2007/10/29)
•Modified the company address.
•2.2.3 Modified the descriptions and picture related to the rear side of the RAID system, including an added SCSI controller picture; added a component to Table 2-5.
•2.6.6 Added a SCSI ID setting for the simple method.
•2.8.2 Added a Default SCSI ID setting for SCSI ports and a data rate setting for SCSI ports.
•Appendix F: Added RAIDGuard Central.
•Appendix G: Added the VDS Provider.

Version 1.4 (released 2008/02/25)
•Chapter 1: Updated contents.
•2.1.1 Added the language setting in Firefox.
•2.2.1 Updated the hard disk tray color.
•2.2.3 Added the rear side of the redundant-controller system.
•2.3 Added the rear view and descriptions of the SAS JBOD chassis and its identifiers.
•2.5 Added notes for the redundant-controller system and the different parameters in the degraded mode.
•2.6.2 Added the preferred controller option and VVOL button for JBOD disks.
•2.6.4 Added the preferred controller option and VVOL button for logical disks.
•2.6.5 Added the preferred controller option and VVOL button for volumes.
•2.6.6 Added Virtual Volumes.
•2.7.15 Added the new option ‘Mirrored Write Cache Control’.
•2.8.2 Added a note for the FC port identifiers in the redundant-controller system, and descriptions for the WWNN button.
•2.9.2 Added the new option ‘Port’ for the SNMP setting, and a note for the OIDs used for each SNMP version.
•2.9.5 Added the new option ‘Path Failover Alert Delay’ and a new check item ‘Controller Failure’ for the Auto Write-Through Cache option.
•2.10.6/2.10.7/2.10.9/2.11.1/2.11.2/2.11.3 Added a note for the screen difference in the redundant-controller system.
•Chapter 4: Updated CLI commands.
•Appendix B: Added Features and Benefits.
•Appendix C: Added screens and descriptions for the redundant-controller system.
•Appendix D: Updated event log messages.
•Chapter 5: Modified the Multi-path IO solution in 5.1; added 5.2 Redundant Controller; added 5.3 Snapshot; moved all related advanced functions/utilities from the appendices to Chapter 5.

Version 1.4.1 (released 2008/03/28)
•Chapter 4: Updated CLI commands.
•Chapter 5: Modified the Multi-path IO solution in 5.1.

Version 1.4.2 (released 2008/05/05)
•Chapter 1: Added CLI In-band API features; added a Snapshot function note.
•Chapter 4: Added support for the CLI In-band API; modified 4.12 snapshot commands.
•Chapter 5: Added 5.1.8 Multi-path IO solution on SUN Solaris; modified 5.3 Snapshot contents.
•Appendix D: Modified D.10 snapshot events.

Version 1.5 (released 2008/07/03)
•Chapter 6: Modified all content.
•Chapter 5: Modified section 5.2; moved 5.2.2 Monitor Mode to 2.2; moved 5.2.2 SAS JBOD to 2.3.
•2.9.2 Added SNMP agent functions; added descriptions related to the SNMP MIB.
•5.3.5/5.3.6/5.3.7 Removed.
•2.6.6 Added descriptions related to spare COW volume functions; added descriptions related to Restore functions.
•Appendix C: Revised the subtitles.
•2.10.4 Added a note.
•Chapter 1: Modified the snapshot functions content, the system monitoring functions content, and the management interfaces content.
•1.4 Added virtual disk descriptions.
•1.5 Added SNMP Manager descriptions.
•Appendix D: Updated snapshot events.
•Chapter 4: Updated CLI commands.
•2.7.15 Added a note on mirrored write cache.
•2.1 Added browser support.

Version 1.6 (released 2008/11/14)
•Chapter 5: Modified section 5.2; inserted Multiple ID solutions.

Version 1.7 (released 2009/11/30)
•Added the iSCSI model.
•Added the upgradable controller model.
Chapter 1: Introduction
Congratulations on your purchase of our RAID controller. Aimed at
serving versatile applications, the RAID controller not only ensures data
reliability but also improves system availability. Supported by cutting-edge
IO processing technologies, the RAID controller delivers outstanding
performance and helps to build dependable systems for heavy-duty
computing, workgroup file sharing, service-oriented enterprise
applications, online transaction processing, uncompressed video editing,
or digital content provisioning. With its advanced storage management
capabilities, the RAID controller is an excellent choice for both on-line
and near-line storage applications. The following sections in this chapter
present an overview of the features of the RAID controller; for more
information about its features and benefits, please see Appendix B.
1.1 Overview
• Seasoned Reliability
The RAID controller supports various RAID levels (0, 1, 3, 5, and 6),
including multi-level RAID such as RAID 10, 30, 50, and 60, to balance
performance and reliability. To further ensure long-term data integrity,
the controller provides extensive maintenance utilities, such as periodic
SMART monitoring, disk cloning, and disk scrubbing, to proactively prevent
performance degradation or data loss due to disk failure or latent bad
sectors.
The controller also supports multi-path I/O (MPIO) solutions, tolerating path
failure and providing load balancing among multiple host connections for
higher availability and performance. Together with the active-active
redundant-controller configuration, the RAID system offers high
availability without a single point of failure.
• Great Flexibility and Scalability
Nowadays, IT staff are required to make the most of the equipment
purchased, so easier sharing and better flexibility are a must for
business-class storage systems. The RAID controller allows different RAID
configurations, such as RAID levels, stripe sizes, and caching policies, to be
deployed independently for different logical units on a single disk group,
so that the storage resources can be utilized efficiently to fulfill
different requirements.
As a business grows or changes during the lifetime of a storage system, the
requirements are very likely to change, and users need to
reconfigure the system to support the business dynamics while
maintaining normal operations. The RAID controller allows capacity
expansion by adding more disk drives or expansion chassis.
Comprehensive online reconfiguration utilities are available for migration
of RAID level and stripe size, volume management, capacity resizing, and
free space management.
• Outstanding Performance
The RAID controller delivers outstanding performance for both
transaction-oriented and bandwidth-hungry applications. Its superscalar
CPU architecture with L2 cache enables efficient IO command
processing, while its low-latency system bus streamlines large-block data
transfer.
In addition to the elaborate RAID algorithms, the controller also implements
sophisticated buffer caching and IO scheduling intelligence.
Extensive IO statistics are provided for monitoring the performance and
utilization of storage devices. Users can adjust the optimization
policy of each LUN online, based on the statistics, to unleash the full power
of the controller.
• Comprehensive and Effortless Management
Users can choose to manage the RAID systems from a variety of user
interfaces, including a command line interface over local console and
secure shell (SSH), an LCD panel, and a web-based graphical user interface
(GUI). Events are recorded in NVRAM, and notification mail is sent to
users without installing any software or agents. Maintenance tasks like
capacity resizing and disk scrubbing can be executed online, and can be
scheduled or run periodically. With the comprehensive
management utilities, users can quickly complete configurations and
perform reconfiguration effortlessly.
• Support dual active-active controller configuration
• Online seamless controller failover and failback
• Cache data mirroring with on/off control option
• Auto background task transfer during controller failover and failback
• Support simultaneous access to single disk drive by two controllers
• Online manual transfer of the preferred controller of a virtual disk
• Uninterrupted system firmware upgrade
• Snapshot Functions (model-dependent)
• Support copy-on-write compact snapshot
• Instant online copy image creation and export
• Instant online data restore/rollback from snapshot
• Support multiple active snapshots for single LUN
• Support read/writable snapshot
• Support spare volume for overflow
• Support online snapshot volume expansion
• Support snapshot configuration roaming
• Miscellaneous Supporting Functions
• Support configuration download and restore
• Support saving configurations to disks and restoring them
• Support password-based multi-level administration access control
• Support password reminding email
• Time management by RTC and Network Time Protocol (NTP) with DST
• Support controller firmware upgrade (boot code and system code)
• Support dual flash chips for protecting and recovering system code
• Support object naming and creation-time logging
Note
The features may differ for different RAID system models and
firmware versions. You may need to contact your RAID system
supplier to get the updates.
1.3 How to Use This Manual
This manual is organized into the following chapters:
• Chapter 1 (Introduction) provides a feature overview of the RAID
system, and some basic guidelines for managing the RAID system.
• Chapter 2 (Using the RAID GUI) describes how to use the embedded
GUI for monitoring and configurations with information helping you to
understand and utilize the features.
• Chapter 3 (Using the LCD Console) presents the operations of LCD
console, which helps you to quickly get summarized status of the RAID
system and complete RAID setup using pre-defined configurations.
• Chapter 4 (Using the CLI Commands) tabulates all the CLI commands
without much explanation. Because there is no difference in functions
or definitions of parameters between the GUI and the CLI, you can study the
GUI chapter to learn how a CLI command works.
• Chapter 5 (Advanced Functions) provides in-depth information about
the advanced functions of the RAID system to enrich your knowledge
and elaborate your management tasks.
• Chapter 6 (TroubleShooting) provides extensive information about how
you can help yourself when encountering any trouble.
• Appendices describe supporting information for your references.
If you are an experienced user, you may quickly go through the key
features to learn the capabilities of the RAID system, and then read only
the chapters for the user interfaces you need. Because this RAID system is
designed to follow common industry conventions, you will feel
comfortable with the setup and maintenance tasks. However, there are
unique features offered only by this RAID system, and the RAID systems
may be shipped with new features. Fully understanding these features will
help you do a better job.
If you are not familiar with RAID systems, you are advised to read all the
chapters to learn not only how to use this RAID system but also useful
information about the technologies and best practices. A good starting
point for your management tasks is to get familiar with the GUI because
of its online help and structured menus and web pages. You also need to
know the LCD console because it is the best way to get a quick view
of the system’s health conditions. If you live in a UNIX world, you will
probably prefer the CLI to get things done more quickly.
To avoid an ill-configured RAID system, please pay attention to
the warning messages and tips in the manual and the GUI. If you find a
mismatch between the manual and your RAID system, or if you are unsure
of anything, please contact your supplier.
1.4 RAID Structure Overview
The storage resources are managed as storage objects in a hierarchical
structure. The hard disks, the only physical storage objects in the
structure, are the essence of all other storage objects. A hard disk can
be a JBOD disk, a data disk of a disk group, or a local spare disk of a
disk group. It can also be an unused disk or a global spare disk. The
capacity of a disk group is partitioned to form logical disks with
different RAID configurations, and multiple logical disks can be put
together to create volumes using striping, concatenation, or both. The
JBOD disks, logical disks, and volumes are virtual disks, which can be
exported to host interfaces as SCSI logical units (LUNs) and serve I/O
access from the host systems (see Figure 1-1).

Figure 1-1 Layered storage objects: unused disks, global spares, JBOD
disks, and disk groups (containing data disks and local spares) are built
from hard disks; logical disks are carved from disk groups; volumes are
built from logical disks; and JBOD disks, logical disks, and volumes can
be exported as logical units.

Below are more descriptions of each storage object.
• JBOD disk
A JBOD (Just a Bunch Of Disks) disk is formed by a single hard disk that can
be accessed by hosts as a LUN exported by the controller. Access to
the LUN is forwarded directly to the hard disk without any address
translation. It is often also called a pass-through disk.
• Member disk
The hard disks in a disk group are member disks (MD). A member disk of a
disk group can be a data disk or a local spare disk. A data member disk
provides storage space to form logical disks in a disk group.
• Disk group
A disk group (DG) is a group of hard disks, on which logical disks can be
created. Operations to a disk group are applied to all hard disks in the
disk group.
• Logical disk
A logical disk (LD) is formed by partitioning the space of a disk group.
Logical disks always use contiguous space, and the space of a logical
disk is evenly distributed across all member disks of the disk group. A
logical disk can be exported to hosts as a LUN or to form volumes.
• Local spare and global spare disk
A spare disk is a hard disk that will automatically replace a failed disk and
rebuild data of the failed disk. A local spare disk is dedicated to single
disk group, and a global spare disk is used for all disk groups. When a disk
in a disk group fails, the controller will try to use local spare disks first, and
then global spare disks if no local spare is available.
• Volume
A volume is formed by combining multiple logical disks using striping
(RAID0) and concatenation (NRAID) algorithms. Multiple logical disks form
a single volume unit using striping, and multiple volume units are
aggregated to form a volume using concatenation (see the mapping
sketch at the end of this section). A volume can be exported to hosts
as a LUN.
• Logical unit
A logical unit (LUN) is a logical entity within a SCSI target that receives
and executes I/O commands from SCSI initiators (hosts). SCSI I/O
commands are sent to a target device and executed by a LUN within the
target.
• Virtual disk
A virtual disk is a storage entity that can service I/O access from LUNs or
from other virtual disks. It can be a JBOD disk, a logical disk, or a volume.
If a virtual disk is part of another virtual disk, it cannot be exported to LUNs.
• LUN mapping
A LUN mapping is a set of mapping relationships between LUNs and
virtual disks in the controller. Computer systems can access the LUNs
presented by the controller after inquiring the controller's host ports.
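To make the two-level volume layout above concrete, here is a minimal sketch, in Python, of how a volume block address could be resolved: logical disks inside one volume unit are striped (RAID 0), and volume units are concatenated (NRAID). The stripe size, object names, and capacities below are illustrative assumptions, not the controller's internal values.

STRIPE_BLOCKS = 128  # assumed stripe size, in blocks

def map_volume_block(units, lba):
    """units: list of volume units; each unit is a list of
    (logical_disk_name, capacity_in_blocks) tuples of equal capacity.
    Returns (logical_disk_name, block_offset_on_that_disk)."""
    for unit in units:                        # concatenation: walk units in order
        unit_blocks = sum(cap for _, cap in unit)
        if lba < unit_blocks:
            stripe = lba // STRIPE_BLOCKS     # striping inside the unit
            ld_index = stripe % len(unit)
            row = stripe // len(unit)
            return unit[ld_index][0], row * STRIPE_BLOCKS + lba % STRIPE_BLOCKS
        lba -= unit_blocks                    # address falls in a later unit
    raise ValueError("LBA beyond volume capacity")

# Example: a 2-LD striped unit concatenated with a 1-LD unit.
volume = [[("ld0", 1024), ("ld1", 1024)], [("ld2", 2048)]]
print(map_volume_block(volume, 300))          # -> ('ld0', 172)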
1.5 User Interfaces to Manage the RAID System
A variety of user interfaces and utilities are offered for managing the RAID
systems, and you may choose one or several of them to suit your
management purposes. These interfaces and utilities are introduced
below:
• Web-based GUI (chapter 2)
The web-based GUI is accessed by web browsers after proper setup of the
network interfaces. It offers an at-a-glance monitoring web page and
full-function system management capability in structured web pages. If
you are a first-time user, you are advised to use the web-based GUI to
fully unleash the power of the RAID system.
• SNMP Manager (section 2.9.2 Setting up the SNMP)
SNMP (Simple Network Management Protocol) is a widely used protocol
based on TCP/IP for monitoring the health of network-attached
equipment. The RAID controller is equipped with an embedded SNMP
agent to support SNMP-based monitoring. You can use SNMP
applications (SNMP v1 or v2c compliant) on remote computers to get
event notification by SNMP traps and watch the status of a RAID system,
as in the sketch below.
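For instance, a remote computer could poll the controller's embedded agent with any SNMP library. The short Python sketch below uses pysnmp over SNMP v2c to read two generic MIB-II objects; the IP address and community string are assumptions, and the product's own MIB objects are covered with the SNMP setup in section 2.9.2.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def poll(host="192.168.0.1", community="public"):
    # One SNMP v2c GET for the standard sysDescr and sysUpTime objects.
    error, status, index, varbinds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),      # mpModel=1 selects v2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0))))
    if error or status:
        raise RuntimeError(str(error or status.prettyPrint()))
    for name, value in varbinds:
        print(name.prettyPrint(), "=", value.prettyPrint())

poll()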
• LCD Console (chapter 3)
The LCD console is offered for quick configuration and for displaying
simplified information and alert messages. It is mostly used for initializing
the network settings to bring up the web-based GUI, or for checking the
chassis status. Using the LCD console for configuration is advised only
when you clearly know the preset configurations.
• CLI Commands (chapter 4)
The command line interface can be accessed via the RS-232 port, TELNET,
or SSH. You can also use host-based CLI software to manage RAID systems
through in-band (FC/SAS/SCSI) or out-of-band (Ethernet) interfaces. It helps
you complete configurations quickly, since you can type text commands
with parameters without the need to browse and click. You may also use
CLI scripts for repeating configurations when deploying many systems, as
sketched below.
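As an illustration only, the Python sketch below pushes one command sequence to several controllers over SSH using the paramiko library; the management IP addresses are assumptions, the admin password is the factory default from section 2.2.4, and the command strings are placeholders for real CLI commands from chapter 4.

import paramiko

HOSTS = ["192.168.0.1", "192.168.0.2"]             # assumed management IPs
COMMANDS = ["<cli command 1>", "<cli command 2>"]  # placeholders; see chapter 4

def deploy(host, username="admin", password="00000000"):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    try:
        for cmd in COMMANDS:                 # run the same sequence on each host
            _, stdout, _ = client.exec_command(cmd)
            print(host, cmd, "->", stdout.read().decode().strip())
    finally:
        client.close()

for h in HOSTS:
    deploy(h)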
• RAIDGuard Central (chapter 5)
RAIDGuard Central is a software suite that helps you manage multiple
RAID systems installed on multiple networks. It locates these systems by
broadcasting and constantly monitors them. It receives events
from the systems and stores all the events in a single database. It also
provides event notification by MSN messages.
• Microsoft VDS (chapter 5)
VDS is a standard RAID management interface for Windows systems.
The RAID system can be accessed by VDS-compliant software after you
install the corresponding VDS provider on your systems. This helps you
manage RAID systems from different vendors using a single piece of
software. Note, however, that because VDS is limited to general functions,
you need to use the Web GUI or CLI for some advanced functions of this
RAID system.
1.6 Initially Configuring the RAID System
Properly configuring your RAID systems helps you get the most out of
your investment in the storage hardware and guarantee planned
service level agreements. It also reduces your maintenance efforts and
avoids potential problems that might cause data loss or discontinued
operations. This is especially true for a powerful and flexible RAID system
like the one you have now. This section provides some basic steps and
guidelines for your reference. The initial configuration includes the
following tasks:
1. Understanding your users’ needs and environments
2. Configuring the hardware settings and doing health check
3. Organizing and presenting the storage resources
4. Installing and launching bundled software (optional)
5. Getting ready for future maintenance tasks
• Understanding your users’ needs and environments
The first step in procuring or deploying any equipment is to know the
users’ needs and environments, assuming you already know your RAID
systems well. Users’ needs include capacity, performance, reliability, and
sharing. The environment information includes the applications, operating
systems (standalone or clustered), host systems, host adapters, switches,
topologies (direct-attached or networked storage), disk drives
(enterprise-class, near-line, or desktop), and management networks.
Extra care is needed if you are installing the RAID systems into an existing
infrastructure that is already in operation. Check with your RAID system
supplier to ensure good interoperability between the RAID system and the
components in your environments. You will also need to know about
potential future changes, like capacity growth rate or additional host
systems, so that you can plan for data migration and reconfiguration.
The quality of your configurations will
largely depend on the information you collect. It is advised to write down
the information about users’ needs and environments, as well as the
configurations you have in mind, which can serve as helpful guidance
throughout the lifetime of the RAID systems.
• Configuring the hardware settings and doing a health check
After installing your RAID systems, with necessary components like hard
disks and transceivers, into your environment, enabling the user interfaces
is a prerequisite for doing anything useful with your RAID systems. The
only user interface that you can use without any tools is the LCD console,
through which the RS-232 port and the management network interface
can be set up to allow you to use the GUI and CLI (see 3.3 Menu on page 3-6).
Now, do a quick health check by examining the GUI monitoring page to
locate any malfunctioning components in the chassis or suspicious
events (section 2.2). Follow the hardware manual to troubleshoot, if
needed, and contact your supplier if the problems persist. Make sure
the links of the host interfaces are up and all installed hard disks are
detected. Since your hard disks will be the final data repository, largely
influencing the overall performance and reliability, it is advised to use the
embedded self-test utility and SMART functions to check the hard disks
(see 2.8 Hardware Configurations on page 2-57). An even better approach
is to use benchmark or stress testing tools.
You also need to make sure that all the attached JBOD systems are detected
and that no abnormal events are reported for the expansion port hardware
(see 2.3 SAS JBOD Enclosure Display (for SAS expansion controller only) on
page 2-13). Sometimes you will need to adjust the hardware parameters,
under your supplier’s advice, to avoid potential interoperability issues.
• Organizing and presenting the storage resources
The most essential configuration tasks of a RAID system are to organize
the hard disks using a variety of RAID settings and volume management
functions, and eventually to present them to host systems as LUNs (LUN
mapping). This process combines top-down and bottom-up
methodologies. You look from the high-level, logical perspective of each
host system to define the LUNs and their requirements. On the other hand,
you do the configuration starting from the low-level, physical objects,
like grouping the disk drives into disk groups.
Tradeoff analysis is required when choosing RAID levels, like using RAID 0
for good performance but losing reliability, or using RAID 6 for high
reliability but incurring a performance penalty and capacity overhead. The
appendix provides information about the algorithms of each RAID level
and the corresponding applications. You can also use the embedded
volume management functions to build LUNs of higher performance and
larger capacity. The RAID system offers much flexibility in configurations,
like independently-configurable RAID attributes for each logical disk,
such that capacity overhead can be minimized while performance and
reliability can still be guaranteed.
You might need to pay attention to a few options when doing the tasks
above, like initialization modes, cache settings, alignment offset,
rebuilding mode, and so on. Please read the GUI chapter to learn their
meanings and choose the most appropriate settings, because they are
directly or indirectly related to how well the RAID system can perform (see
2.6 RAID Management on page 2-22 and 2.7.16 Miscellaneous on
page 2-56).
Note
When planning your storage resources, reserving space for snapshot
operations is needed. Please check chapter 5 for information about
the snapshot functions.
• Installing and launching bundled software (optional)
The RAID system is equipped with host-side software providing solutions for
multi-path I/O, VDS-compliant management, and a centralized
management console on multiple platforms. You can find their
sections in chapter 5 to learn their features and benefits, as well as
how to do the installation and configuration. Contact your RAID system
supplier to learn about the interoperability between the software and the
system.
Note
Installing multi-path I/O driver is a must for redundant-controller
systems to support controller failover/failback. Please check
Chapter 5: Advanced Functions for more information about MPIO
and redundant-controller solution.
• Getting ready for future maintenance tasks
The better prepared you are, the less your maintenance efforts will be.
Below are the major settings you’ll need for maintenance.
Event logging and notification
You can have peace of mind only if you can always get timely notifications
of incidents happening to your RAID systems, so completing the event
notification settings is also a must-do. You might also need to set the
policies for event logging and notifications (see 2.9 Event Management on
page 2-66).
Data integrity assurance
For better system reliability, you are advised to set policies for handling
exceptions, such as starting disk cloning when a SMART warning is detected
or too many bad sectors of a hard disk are discovered (see 2.8.1 Hard disks
on page 2-57), or turning off the write cache when something goes wrong
(see 2.9.5 Miscellaneous on page 2-71). You may also schedule periodic
maintenance tasks to do disk scrubbing (see 2.7.9 Scrubbing on page 2-50)
for defective sector recovery or to run disk self-tests (see 2.7.11
Performing disk self test on page 2-51).
Miscellaneous settings
There are also minor settings that you might need to configure, like checking
the UPS (see 2.9.4 UPS on page 2-70), setting the time (see 2.10.4 System
Time on page 2-76), changing the password (strongly suggested), and so on.
Saving the configurations
Once you’ve done all the configurations, please save the configurations to
files (a human-readable text file for your own reference and a binary file for
restoring the configurations if any disaster happens).
1.7 Maintaining the RAID System
Properly configuring RAID systems is a good starting point, but you need
to do regular checking and reconfiguration to make sure your RAID
systems stay healthy and deliver their best throughout their lifetime.
• Constantly monitoring RAID system health
You can quickly get an overview of the RAID system health by accessing
the monitoring page of the Web GUI (see 2.2 Monitor Mode on page 2-5).
You probably only need to do so when receiving event notification
email or traps. All the events are described in Appendix D, each with
suggested actions for your reference. You need to watch the
status of chassis components, like fans, power supply units, the battery
module, and the controller module. You also need to check the status of
hard disks and the I/O statistics (see 2.11 Performance Management on
page 2-81) to know the system loading level and distribution. A hard disk
with long response times or lots of reported media errors could be in
trouble.
• Performing online maintenance utilities
Comprehensive maintenance utilities are offered to keep your RAID systems
in the best condition and utilization throughout their lifetime. They
include data integrity assurance, capacity resource reallocation, and
RAID attribute migration.
Data integrity assurance
For long-term data integrity assurance and recovery, you may use disk
scrubbing (see 2.7.9 Scrubbing on page 2-50), disk cloning (see 2.7.8
Cloning hard disks on page 2-48), DST (see 2.7.11 Performing disk self test
on page 2-51), and SMART (see 2.8.1 Hard disks on page 2-57). For how
these can help you, please go to Appendix B: Features and Benefits.
Capacity resource reallocation
If you’d like to add more disks for capacity expansion, you can use disk
group expansion (see 2.7.1 Expanding disk groups on page 2-44).
Resizing logical disks and volumes (2.7.4 Expanding the capacity of
logical disks in a disk group on page 2-46 to 2.7.6 Expanding volumes on
page 2-47) can also help you transfer the unused capacity of a LUN to
others that are desperate for more space, without any impact on other
LUNs. If unused space is scattered, you can use disk group
defragmentation (see 2.7.2 Defragmenting disk groups on page 2-44) to
put it together.
RAID level and stripe size migration
Changing the RAID level of a logical disk (see 2.7.3 Changing RAID level /
stripe size for logical disks on page 2-45) significantly affects
performance, reliability, and space utilization. For example, you may add
one disk to a two-disk RAID 1 disk group and change its RAID level to RAID
5, giving you a three-disk RAID 5 disk group that offers the usable
space of two disks. On the other hand, changing the stripe size affects only
performance, and you may run as many online experiments as
needed to get the performance you want.
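The capacity side of that example follows from the standard formulas for each RAID level; the small Python sketch below computes usable space (in units of one member disk) from the number of member disks:

def usable_disks(level, n):
    # Usable capacity, in disks, for n equal-size members (standard formulas).
    if level == 0: return n           # striping, no redundancy
    if level == 1: return n / 2       # mirroring
    if level in (3, 5): return n - 1  # one disk's worth of parity
    if level == 6: return n - 2       # two disks' worth of parity
    raise ValueError("level not covered by this sketch")

print(usable_disks(1, 2))  # 1.0: two-disk RAID 1 offers one disk of space
print(usable_disks(5, 3))  # 2: three-disk RAID 5 offers two disks of space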
Schedule a task
You won’t want performance degradation during the execution of
the online maintenance utilities, which very likely need a non-trivial amount
of time. To avoid such impact, you can schedule a task to execute
at any time you want (see 2.7.14 Schedule task on page 2-55),
such as during off-duty hours. You can get event notifications when the
task is done (or, unfortunately, fails), or at a user-configurable percentage
of the task progress (see 2.7.16 Miscellaneous on page 2-56).
Figure 2-1 GUI login screen
Chapter 2: Using the RAID GUI
2.1 Accessing the RAID GUI
1. Open a browser and enter the IP address in the address field. (The
default IP address is 192.168.0.1. You can use the FW customization tool
to set another IP address as the default.)
The supported browsers are listed as below:
• IE 6.x (Windows)
• IE 7.x (Windows)
• FireFox 1.x (Windows, Linux, and Mac)
• Safari 1.x and 2.x (Mac)
2. The following webpage appears when the connection is made (see
Figure 2-1). To log in, enter the username and password (see 2.2.4 Login
on page 2-12). You can then access the Config Mode.
2.1.1 Browser Language Setting
The GUI is currently available in English, Traditional Chinese, and Simplified
Chinese. For other languages, you can use the FW customization tool to
add multi-language support. (The following example shows how to set up
language in Internet Explorer 6. Other browsers support the same
functionality. Please refer to the instructions included with your browser
and configure the language accordingly.)
Open your web browser and follow the steps below to change the GUI
language.
1. Click Tools > Internet Options > Language > Add.
Figure 2-2 Setting the language in Firefox
2. In the Add Language window, find the language you want to use, and
click OK.
3. In the Language Preference window, select the language you want to
use, and use the Move Up and Move Down buttons to move it up to the
top of the list. Click OK.
4. Click OK again to confirm the settings.
Note
If the GUI does not support the selected language, the webpage will
still appear in English.
• Firefox language settings
Here is an example of how to change the GUI language settings in
Firefox.
1. Open the Firefox browser and select Tools > Options > Advanced >
General tab.
2. Click the Choose... button to specify your preferred language for the
GUI to display.
2-2
Using the RAID GUI
Figure 2-3 Languages dialog (Firefox)
Figure 2-4 Multiple system viewer (side button)
3. The following Languages dialog displays. To add a language, click
Select a language to add..., choose the language, and click the Add
button. Use the Move Up and Move Down buttons to arrange the
languages in order of priority, and the Remove button if you need to
remove a language. Click OK.
4. Click OK again to confirm the settings.
2.1.2 Multiple System Viewer
The RAID GUI features a side button for a quick on-line system view. The
side button is always on the left side of the screen so that you can click it
to view all the other on-line systems at any time. Move the cursor over the
side button and the multiple system viewer appears (see Figure 2-5).
Figure 2-5 Opening the multiple system viewer
Move the cursor to a system, and the following system information will
appear: IP address, System name, Model name, Firmware version, and
Status. Click on a system to open its GUI, and you can login to view the
complete system information.
If there are too many on-line systems displayed in the viewer at one time,
you can use the arrow buttons to scroll up and down. Click the refresh
button to refresh the viewer.
Move your cursor away from the viewer, and it disappears.
Note
1. The multiple system viewer supports up to 256 on-line systems.
2. Only systems within the same subnet will appear in the multiple
system viewer.
2.2 Monitor Mode
Figure 2-6 Single controller GUI monitor mode
Figure 2-7 Redundant-controller system GUI monitor
The RAID GUI monitors the status of your RAID controller(s) through your
Ethernet connection. The RAID GUI window first displays the Monitor
Mode, which is also where you log in to enter the Config Mode. The GUI
components shown are introduced in the following sections.
At the front view panel, 16 or 24 HDD trays are displayed for the
redundant-controller system; the number of HDDs may differ depending
on the redundant-controller system model. A maximum of eight
enclosures can be connected to the subsystem serially, while the
single subsystem supports up to seven enclosures. For more information
about the indications of HDD status code and color, see 2.2.1 HDD state
on page 2-6.
Figure 2-8 HDD Tray (GUI)
There are four buttons at the top right of the page. See the following
table for each button’s function.

Button | Description
Switch Mode | Switches between Monitor Mode and Config Mode.
Logout | Logs out the user.
Help | Opens the Help file.
About | Displays the GUI version, firmware version, and boot code version.

Table 2-1 Buttons in monitor and config mode
System name, controller name, firmware version, and boot code version
information are also displayed at the bottom left of the page.
2.2.1 HDD state
Through the front panel of the RAID console displayed in the GUI, you can
easily identify the status of each hard disk by its color and status code.
Click on each hard disk to display detailed information.
Note
The RAID system can support up to 24 HDD trays. The number of
HDD trays displayed in the GUI monitor mode may differ depending
on the RAID system model.
The status code and color of hard disks are explained in the following
tables.
Code | Hard Disk Status
U | Unused disk
J0-J15 | JBOD
D0-D7 / D0-Dv | Disk group (the redundant-controller system supports up to 32 DGs, encoded from D0 to Dv)
L0-L7 | Local spare
G | Global spare
T | Clone

Table 2-2 Hard disk code
Color | Hard Disk Status
Green | Online
Green (flashing) | Adding
Red | Faulty
Orange | Conflict
Blue | Foreign
(blank) | Empty
Purple | Unknown
Silver | Permanently removed
Gray | Removed

Table 2-3 Hard disk tray color
2.2.2 Information icons
When components are working normally, their icons are shown in green.
When components are uninstalled, abnormal, or failed, the icons are shown
in red. Click on each icon for detailed information.
Icon | Detailed Information
Event log view | Seq. No., Severity, Type, Time, Description
Beeper | See 6.2 Beeper on page 6-1 for the possible beeper reasons.
Temperature | Sensor, Current, Non-critical*, Critical*
Voltage | Sensor, Current, High Limit*, Low Limit*
Fan module (shown when a fan is installed on the controller) | Controller Fan, State
BBM (shown when the BBM control is on) | Remaining Capacity, Voltage (V), Temperature (ºC/ºF), Non-critical Temperature (ºC/ºF)*, Critical Temperature (ºC/ºF)*
UPS (shown when the UPS control is on) | UPS Status: State, Load Percentage, Temperature (ºC/ºF), AC Input Quality / High Voltage (V) / Low Voltage (V); Battery Status: State, Voltage (V), Remaining Power in percentage/seconds

Table 2-4 Information icons
2.2.3 Rear side view
On the rear side of the RAID system, you can see the fan modules, power
supplies, host ports (fibre, SAS, SCSI, or iSCSI), one Ethernet port, and a SAS
expansion port (for the SAS expansion controller solution). Click on the
components for detailed information.
• For single-controller RAID system
Figure 2-9 Rear side of the RAID system (GUI)
Figure 2-10 Rear side of the redundant RAID system

Table 2-5 Components at the rear side of the system
C. Ethernet port: IP Address, Network Mask, Gateway, DNS Server, MAC Address
D. Fiber ports: FCP ID, WWN, Connection Mode, Data Rate, Hard Loop ID
E. SAS ports: SAS ID, SAS Address
F. SCSI ports: SCSI ID, Data Rate, Default SCSI ID
G. iSCSI ports: iSCSI ID, IP Address, Network Mask, Gateway, MAC Address, Jumbo Frame, Link Status

Figure 2-12 Login
2.2.4 Login
The RAID GUI provides two sets of default login accounts.

Username | user | admin
Password | 00000000

Table 2-6 Login usernames and passwords

When logging in to the GUI as user, you can only view the settings. To
modify the settings, use admin to log in.
• Forgotten password
In the event that you forget your password, click the Forget password
icon and an email containing your password will be sent to a preset mail
account. To enable this function, make sure the Password Reminding Mail
option is set to On (see 2.10.5 Security control on page 2-76) and the mail
server has been configured in System Management > Network.
Note
You can use the FW customization tool to set a new password as the
default.
2.3 SAS JBOD Enclosure Display (for SAS expansion
controller only)
The single-controller RAID subsystem provides a SAS expansion port which
allows users to connect a SAS JBOD. The single controller supports up to
64 hard disks.
Each redundant / upgradable system provides two SAS expansion ports
to connect with one or more SAS JBOD chassis. Depending on the
redundant-controller system and SAS JBOD chassis models (16-bay or
24-bay) as well as the memory size in use (1G or 2G), the GUI may display
different enclosure tabs and front tray views. See Table 2-7 below for the
supported number of SAS JBOD chassis and hard disks.
RAID subsystem model | Memory size | Units of HDD | SAS JBOD (16-bay) | SAS JBOD (24-bay)
16-bay | 1G | 64 | 3 | 2
16-bay | 2G or higher | 120 | 7* | 5*
24-bay | 1G | 64 | 3* | 2*
24-bay | 2G or higher | 120 | 6 | 4

Table 2-7 Supported number of redundant SAS JBOD chassis and hard disks

* Please note that there are some empty slots shown in the SAS JBOD
enclosure display (in the last enclosure tab) due to the maximum number
of supported drives.
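The counts in Table 2-7 follow from simple arithmetic: the firmware HDD limit (64 with 1G of memory, 120 with 2G or higher) minus the subsystem's own bays determines how many JBOD chassis fit, and whether the last chassis shows empty slots. A small Python sketch of that check, using the limits from the table above:

def jbod_plan(subsystem_bays, jbod_bays, hdd_limit):
    # Returns (number of JBOD chassis, empty slots in the last chassis).
    full, leftover = divmod(hdd_limit - subsystem_bays, jbod_bays)
    if leftover:
        return full + 1, jbod_bays - leftover  # last chassis partially usable
    return full, 0

print(jbod_plan(16, 16, 120))  # (7, 8): matches the 7* entry in Table 2-7
print(jbod_plan(24, 24, 120))  # (4, 0): exactly four 24-bay chassis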
Figure 2-13 Rear side of the SAS JBOD chassis (GUI). On both the single and
the redundant SAS JBOD chassis, the downstream port is labeled Down 1,
and the upstream ports (from left to right) are labeled Up1 and Up2.
2.3.1 Rear side monitor of the SAS JBOD chassis
On the rear side of the SAS JBOD chassis, there are three ports (for single
SAS JBOD) or six ports (for redundant SAS JBOD) available for SAS JBOD
expansion. See the port identifiers as shown in Figure 2-13.
Figure 2-14 Single SAS JBOD connection
2.3.2 SAS JBOD Installation with RAID subsystem
• For single controller with single JBODs:
Use the downstream and upstream ports to connect the RAID subsystem
with up to three SAS JBODs. Figure 2-14 shows a serial construction for
expanded JBOD disks. Connect the RAID subsystem’s SAS port to the
upstream port of a SAS JBOD using a Mini SAS cable. For additional
expanded JBOD chassis, connect the downstream port on the previously
connected SAS JBOD to the upstream port on the next SAS JBOD.
Figure 2-15 Redundant SAS JBOD loop connection
• For redundant controller with redundant JBODs
To ensure the system can continue operating without any interruption
in the event of a SAS JBOD failure, a loop construction is suggested.
Figure 2-15 shows an example of the loop implementation with a
redundant RAID system and SAS JBODs.
The connection shown in Figure 2-15 enables all three JBOD chassis to
be looped through the redundant-controller system. In this way, data
is transmitted from node to node around the loop. If JBOD2 fails and
causes an interruption, JBOD1 and JBOD3 still work normally via the
redundant path.
Figure 2-16 SAS enclosure monitor mode (showing the enclosure tabs)
Figure 2-17 SAS enclosure configuration mode (showing the Enclosure ID drop-down menu)
2.3.3 Monitor mode
When SAS JBOD chassis are connected, the enclosure tabs will appear in
the Monitor Mode (see Figure 2-16). Each tab view displays different
information for each connected enclosure. Click the Enclosure 0 tab to
view the information of the local RAID subsystem. Click the Enclosure 1,
Enclosure 2, or Enclosure 3 tabs for a brief view of the connected SAS
JBOD.
Each SAS JBOD has a unique chassis identifier, which is detected
automatically by the GUI when connected. The chassis identifier
corresponds to the enclosure tab number shown in the GUI. In this way,
users can identify and manage each SAS JBOD easily and correctly.
Note, however, that the enclosure tabs are always displayed in ascending
order of chassis identifiers rather than in the chassis connection order.
The number of enclosure tabs may differ according to the number of
connected SAS JBOD chassis. For more information, see ‘For redundant
controller with redundant JBODs’ above.
Figure 2-17 displays the Config Mode when a SAS enclosure is
connected. Use the drop-down menu at the top of the page to select
the enclosure ID you wish to configure.
Note
In order to use the expansion port on the SAS controller, you must
have firmware version 1.20 or later for complete functionality.
2.3.4 Information icons
In Monitor Mode, the following information icons are displayed on the
screen. When components are working normally, their icons are shown in
green. When components fail to work, the icons are shown in red. Click
on each icon for detailed information.
Icon | Detailed Information
Temperature | Sensor, Current, Non-critical, Critical
Voltage | Sensor, Current, High Limit, Low Limit
Fan module | BP_FAN1, BP_FAN2, BP_FAN3, BP_FAN4
Power supply | POW1, POW2

Table 2-8 Information icons (in SAS monitor mode)
2.3.5 SAS/SATA HDD information
Through the hard disk codes and tray color shown on the screen, you can
easily identify the status of each connected SAS/SATA hard disk. Click on
each SAS/SATA hard disk to display detailed information.
For more information about hard disk codes and tray colors, see Table 2-2
and Table 2-3 on page 2-7.
Figure 2-18 Overview screen
2.4 Config Mode
To configure any settings under Config Mode, log in with admin and its
password. The Overview screen displays as below.
The RAID GUI Config Mode provides the following configuration settings.
Quick Setup: Allows you to configure your array quickly.
RAID Management: Allows you to plan your array.
Maintenance Utilities: Allows you to perform maintenance tasks on your arrays.
Hardware Configurations: Allows you to configure the settings of hard disks, FC/SAS ports, and the COM port.
Event Management: Allows you to configure event mail, event logs, and UPS settings.
System Management: Allows you to erase or restore the NVRAM configurations, set up the mail server, update the firmware and boot code, and so on.
Performance Management: Allows you to check the IO statistics of hard disks, caches, LUNs, and FC/SAS ports.
Before configuration, read “Understanding RAID” thoroughly for RAID
management operations.
2.5 Quick Setup
2.5.1 Performance profile
The RAID GUI provides three performance profiles for you to apply the
preset settings to the RAID configuration. This allows users to achieve the
optimal performance for a specified application. When using a profile for
the RAID configuration, any attempt to change the settings is rejected.
See the following table for the values of each profile. Select Off if you
want to configure the settings manually.
Profile | AV streaming | Maximum IO per second | Maximum throughput
Disk IO Retry Count | 0 (Degrade: 2) | 1 | 1
Disk IO Timeout (second) | 3 (Degrade: 10) | 30 | 30
Bad Block Retry | Off | On | On
Bad Block Alert | On | N/A | N/A
Disk Cache | On | On | On
Write Cache | On | On | On
Write Cache Periodic Flush (second) | 5 | 5 | 5
Write Cache Flush Ratio (%) | 45 | 45 | 45
Read Ahead Policy | Adaptive | Off | Adaptive
Read Ahead Multiplier | 8 | - | 16
Read Logs | 32 | - | 32

Table 2-9 Performance profile values
Note
When the disks are in the degraded mode with the AV streaming
profile selected, the disk IO retry count and timeout values may be
changed to reduce unnecessary waiting for I/O completion.
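Conceptually, a profile is just a preset bundle of the disk and cache options above, with the degraded-mode override described in the note. Below is a Python sketch of the AV streaming preset (values taken from Table 2-9); it is an illustrative model, not the controller's implementation.

AV_STREAMING = {
    "disk_io_retry_count": 0, "disk_io_timeout_s": 3,
    "bad_block_retry": False, "bad_block_alert": True,
    "disk_cache": True, "write_cache": True,
    "write_cache_periodic_flush_s": 5, "write_cache_flush_ratio_pct": 45,
    "read_ahead_policy": "adaptive", "read_ahead_multiplier": 8,
    "read_logs": 32,
}

def effective_settings(profile, degraded=False):
    settings = dict(profile)
    if degraded:  # shorten retries/timeouts to avoid long waits on I/O
        settings["disk_io_retry_count"] = 2
        settings["disk_io_timeout_s"] = 10
    return settings

print(effective_settings(AV_STREAMING, degraded=True)["disk_io_timeout_s"])  # 10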
2.5.2 RAID setup
To perform quick setup, all hard disks must be on-line and unused. Users
can specify the RAID level, the number of spare disks, and the initialization
method for an easy RAID configuration. See the following for details of
each option.

RAID Level: Select the RAID level to use. This shows the number and the
minimum size of hard disks. Multi-level options include RAID 30 / RAID 50 /
RAID 60.
Spare Disks: Select the required number of global spare disks.
Initialization Option:
Background: The controller starts a background task to initialize the
logical disk by synchronizing the data stored on the member disks of the
logical disk. This option is only available for logical disks with parity-based
and mirroring-based RAID levels. The logical disk can be accessed
immediately after it is created.
Noinit: No initialization process; the logical disk can be accessed
immediately after it is created. There is no fault-tolerance capability, even
for parity-based RAID levels.
• Single-controller RAID configuration
A volume (for RAID 30, 50, or 60) or a logical disk (for other RAID levels)
will be created using the full capacity of all disks in the RAID enclosure. It
will be mapped to LUN 0 of all host ports. All other configurations will
remain unchanged, and all RAID parameters will use the default values.
• Redundant-controller RAID configuration
Two volumes (for RAID 30, 50, or 60) or two logical disks (for other RAID
levels) will be created using the full capacity of all disks in the RAID
enclosure. Each volume will be based on two disk groups, so in total there
will be four disk groups. The preferred controller of one volume or logical
disk is assigned to controller A, and the other is assigned to controller B.
They will be mapped to LUN 0 and LUN 1 of all host ports on both
controllers. All other configurations will remain unchanged, and all RAID
parameters will use the default values.
2.6 RAID Management
2.6.1 Hard disks
This feature allows you to add or remove hard disks and set any online
disk as a global spare drive. The hard disk usage shown can be:
Unused, JBOD disk, DG data disk, Local spare, Global spare, or Clone
target.
• State definition
On-line: The hard disk remains online when it is working properly.
Foreign: The hard disk is moved from another controller.
Conflict: The hard disk may have configurations that conflict with
controller configurations.
Removed: The hard disk is removed.
PRemoved: The hard disk is permanently removed.
Faulty: The hard disk becomes faulty when a failure occurs.
Initializing: The hard disk starts the initialization.
Unknown: The hard disk is not recognized by the controller.
• Mode definition
Ready: The hard disk is in use or ready for use.
Standby: The hard disk is in standby mode.
Unknown: The hard disk is not recognized by the controller.
• Buttons
Add: To add hard disks, select a hard disk and click this button.
Remove: To remove hard disks, select a hard disk and click this button. To
remove hard disks permanently, check the Permanent remove box when
removing them.
Modify: Select a hard disk and click this button to enter the settings
screen to enable or disable the disk cache and the disk identify function.
Note
1. When the selected hard disk is not in the on-line state, the Disk
Cache field will not be displayed.
2. If a hard disk belongs to a disk group, you cannot change its disk
cache. To modify it, refer to 2.6.3 Disk groups.
3. If the hard disk belongs to a disk group, you can check the ‘Apply
to all members of this DG’ option to apply the disk identify setting
to all the member disks in the disk group.
4. Disk Identify lets the controller correctly identify a hard disk even
when it is moved from one slot to another while the system is
powered off, so that the configurations for the disk can be restored.
G.Spare: To add or remove global spare disks, click this button to enter
the settings screen.
• Detailed hard disk information
Click to display a complete list of hard disk information. You will see the
following details.
• HDD ID
• UUID
• Physical Capacity (KB)
• Physical Type
• Transfer Speed
• Disk Cache Setting
• Disk Cache Status
• Firmware Version
• Serial Number
• WWN
• NCQ Supported
• NCQ Status
• Command Queue Depth
• Standard Version Number
• Reserved Size of Remap Bad
Sectors
• Bad Sectors Detected
• Bad Sectors Reallocated
• Disk Identify
2.6.2 JBOD
This feature allows you to create, delete, and modify your JBOD settings.
• Create JBOD disks
Click Create to add a new JBOD disk, where up to a maximum of 16
JBOD disks can be created. Specify the following options for the
configuration.
JBOD ID: Select a JBOD ID from the drop-down menu.
Name: Use the system default name, jbdx, where ‘x’ is the JBOD identifier;
or uncheck the ‘Use system default name’ box and enter a name in the
Name field. The maximum name length is 63 bytes.
Member Disk: Select a corresponding hard disk to be used for the JBOD
from the drop-down menu.
Preferred Controller: This option is only available when the
redundant-controller system is in use. Select the preferred controller to be
in charge of managing and accessing the JBOD disk.
• Delete JBOD disks
Select the JBOD disk(s) you want to delete and click Delete. To delete all
LUNs of jbdx, check the ‘Force to delete LUN mapping(s)’ box. All access
to the JBOD will be stopped.
• Modify JBOD disks
To modify a setting, select a JBOD and click Modify. Specify the following
options for configuration.
Name: Type a name for the JBOD ID.
Preferred Controller: This option is only available when the
redundant-controller system is in use. Select the preferred controller to be
in charge of managing and accessing the JBOD disk. However, the
controller ownership will not change unless you check the ‘Change owner
controller immediately’ box.
Write Cache: This option enables or disables the write cache of a JBOD disk.
Write Sorting: This option enables or disables sorting in the write cache. To
improve write performance, it is recommended to turn this option on for
random access. This option is available only if the write cache is on.
Read Ahead Policy:
Always: The controller prefetches data for every read command from hosts.
Adaptive: The controller prefetches only for host read commands that are
detected as sequential reads. The detection is done by read logs (see the
sketch following this list).
Off: If there are no sequential read commands, read-ahead only results in
overhead, and you can disable it.
Read Ahead Multiplier: This option specifies the read ahead multiplier for
the Always and Adaptive read ahead policies. Select how much additional
sequential data will be prefetched. The default value is 8.
Read Logs: This option specifies the number of read logs for the Adaptive
read ahead policy. The range is between 1 and 128. The default value is 32.
To clear write buffers in the write cache of a JBOD disk, select a JBOD and
click the Flush button.
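The Adaptive policy above can be pictured as a small pool of read logs
that each track one suspected sequential stream. The following is a
minimal sketch only; the class, function, and field names are hypothetical
and the detection rule is an assumption, not the controller’s actual
firmware logic:

```python
# Hypothetical sketch of Adaptive read-ahead detection. Only the default
# multiplier (8) and the 1-128 read-log range come from the manual.

class ReadLog:
    """Tracks one suspected sequential read stream."""
    def __init__(self, next_lba):
        self.next_lba = next_lba  # LBA where the next sequential read is expected

def handle_read(logs, lba, length, multiplier=8, max_logs=32):
    """Return how many extra blocks to pre-fetch for this host read."""
    for log in logs:
        if log.next_lba == lba:          # read continues a known stream
            log.next_lba = lba + length
            return length * multiplier   # pre-fetch ahead of the stream
    if len(logs) >= max_logs:            # recycle the oldest log
        logs.pop(0)
    logs.append(ReadLog(lba + length))   # start tracking a new stream
    return 0                             # random read: no pre-fetch

logs = []
print(handle_read(logs, 0, 8))    # first read: treated as random -> 0
print(handle_read(logs, 8, 8))    # sequential continuation -> 64 blocks
```

A larger Read Ahead Multiplier pre-fetches more data per detected stream;
more Read Logs let the controller track more concurrent streams.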
• Create JBOD volume pair
Instead of creating volume pairs in the Snapshot Volumes page, you can
directly create a volume pair for a specified JBOD disk by clicking the
S.VOL button. Specify a virtual disk as the secondary volume from the SV
ID drop-down menu, then click the Apply button to confirm.
• Detailed JBOD disk information
Click to display a complete list of JBOD disk information. You will see
the following details.
• JBOD ID
• UUID
• Created Time and Date
• Write Cache Status
• Write Cache Setting
• Write Sorting
• Read Ahead Policy
• Read Ahead Multiplier
• Read Logs
2.6.3 Disk groups
This feature allows you to create, delete, and modify your disk group
settings.
• Create disk groups
Click Create to add a new disk group, where up to a maximum of 8
(single controller) / 32 (redundant controller model) disk groups can be
created. Specify the following options for configuration.
DG ID: Select a DG ID from the drop-down menu.
Name: Use the system default name dgx (‘x’ is the DG identifier), or
uncheck the ‘Use system default name’ box and enter the name in the
Name field. The maximum name length is 63 bytes.
Members and Spares: Select member disks and spare disks to be
grouped.
Capacity to Truncate (GB): Specifies the capacity to be truncated for
the smallest disk of this disk group. This option is useful when a
replacement disk is slightly smaller than the original disk. Without this
option, the capacity to truncate is 0 GB.
LD Initialization Mode: The initialization mode defines how logical disks
of a disk group are initialized. Different disk groups can have different
initialization modes.
Parallel: The initialization tasks of logical disks are performed
concurrently.
Sequential: Only one initialization task is active at a time.
Write-zero immediately: When enabled, this function starts a
background task that writes zeros to all member disks of the created
disk group. The disk group can be used for logical disks only after this
process is completed.
Note
The minimum number of member disks in a disk group is two.
Different disk groups may have a different number of member disks.
The number of member disks also determines the RAID level that can
be used in the disk group.
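As the note says, the member-disk count constrains the RAID levels
available. The exact limits this controller enforces are not listed here, so
the values below are typical industry minimums offered only as an
illustrative sketch, not the firmware’s rules:

```python
# Illustrative only: conventional minimum member-disk counts per RAID level.
# These are common industry conventions, not values taken from this manual.

MIN_DISKS = {"NRAID": 2, "RAID 0": 2, "RAID 1": 2,
             "RAID 3": 3, "RAID 5": 3, "RAID 6": 4}

def allowed_levels(member_disks):
    """RAID levels a disk group of this size could typically host."""
    return [lvl for lvl, n in MIN_DISKS.items() if member_disks >= n]

print(allowed_levels(2))  # ['NRAID', 'RAID 0', 'RAID 1']
print(allowed_levels(4))  # all six levels above
```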
• Delete disk groups
Select the disk group(s) you want to delete and click Delete.
• Modify disk groups
To modify a setting, select a DG and click Modify. Specify the following
options for configuration.
Name: Type a name for the DG ID.
Spare Disks: Assign disks to be used as local spares.
Disk Cache: This option enables or disables the on-disk cache of hard
disks in a disk group. When a new disk becomes a member of the disk
group (for example, by disk rebuilding or cloning), the on-disk cache
uses the same settings as the disk group.
LD Initialization Mode: The initialization mode defines how logical disks
of a disk group are initialized. Different disk groups can have different
initialization modes.
Parallel: The initialization tasks of logical disks are performed
concurrently.
Sequential: Only one initialization task is active at a time.
LD Rebuild Mode: This determines how to rebuild logical disks in a disk
group. All logical disks can be rebuilt at the same time or one at a
time. Different disk groups can have different rebuild modes.
Parallel: The rebuilding tasks are started simultaneously for all logical
disks in the disk group. The progress of each rebuilding task is
independent of the others.
Sequential: Rebuilding always starts from the logical disk with the
smallest relative LBA on the disk group, continues to the logical disk
with the second smallest relative LBA, and so on.
Prioritized: Similar to the sequential rebuild mode, this rebuilds one
logical disk at a time, but the order of logical disks to be rebuilt can be
determined by users.
Rebuild Task Priority: Low / Medium / High
This option sets the priority of the background task for disk rebuild of
disk groups.
Initialization Task Priority: Low / Medium / High
This option sets the priority of the background tasks for logical disk
initialization of disk groups.
Utilities Task Priority: Low / Medium / High
This option sets the priority of the background tasks for utilities of disk
groups. These include RAID reconfiguration utilities and data integrity
maintenance utilities.
Note
1. Progress rates increase in proportion to priority (i.e., a high-priority
task runs faster than a low-priority one).
2. When there is no host access, all tasks (regardless of priority) run at
their fastest possible speed.
3. When host access exists, tasks run at their minimum possible
speed.
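The three LD Rebuild Mode orderings described above can be sketched
as follows; the LD records and helper function are hypothetical, but the
ordering rules follow the text:

```python
# Sketch of the parallel / sequential / prioritized rebuild orderings.
# Record layout and names are illustrative only.

lds = [{"name": "dg0ld2", "relative_lba": 4096},
       {"name": "dg0ld0", "relative_lba": 0},
       {"name": "dg0ld1", "relative_lba": 2048}]

def rebuild_order(lds, mode, user_order=None):
    if mode == "parallel":       # all rebuild tasks start simultaneously
        return [lds]             # one batch containing every LD
    if mode == "sequential":     # smallest relative LBA first
        return [[ld] for ld in sorted(lds, key=lambda d: d["relative_lba"])]
    if mode == "prioritized":    # one at a time, in the user's order
        by_name = {ld["name"]: ld for ld in lds}
        return [[by_name[n]] for n in user_order]

for batch in rebuild_order(lds, "sequential"):
    print([ld["name"] for ld in batch])   # dg0ld0, dg0ld1, dg0ld2
```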
• Detailed disk group information
Click to display a complete list of disk group information. You will see
the following details.
• DG ID
• UUID
• Created Time and Date
• Disk Cache Setting
• LD Initialization Mode
• LD Rebuild Mode
• LD Rebuild Order
• Rebuild Task Priority
• Initialization Task Priority
• Utilities Task Priority
• Member Disk’s Layout
• Original Member Disks
2.6.4 Logical disks
This feature allows you to create, delete, and modify your logical disk
settings.
• Create logical disks
Click Create to add a new logical disk, where up to a maximum of 32
logical disks can be created in each DG. Specify the following options for
configuration.
DG ID: Select a DG ID from the drop-down menu. This is the disk group
to be assigned for logical disk setting.
LD ID: Select an LD ID from the drop-down menu.
Name: Use the system default name dgxldy (‘x’ is the DG identifier
and ‘y’ is the LD identifier), or uncheck the ‘Use system default name’
box and enter the name in the Name field. The maximum name
length is 63 bytes.
RAID Level: Select a RAID level for the logical disk. Different logical
disks in a disk group can have different RAID levels. However, when
NRAID is selected, there must be no non-NRAID logical disks in the
same disk group.
Capacity (MB): Enter an appropriate capacity for the logical disk. This
determines the number of sectors a logical disk can provide for data
storage.
Preferred Controller: This option is only available when the
redundant-controller system is in use. Select the preferred controller to
be in charge of managing and accessing the logical disk.
Stripe Size (KB): The stripe size is only available for a logical disk with a
striping-based RAID level. It determines the maximum length of
continuous data to be placed on a member disk. The stripe size must
be larger than or equal to the cache unit size.
Free Chunk: Each free chunk has a unique identifier in a disk group,
which is determined automatically by the controller when a free
chunk is created. Select a free chunk from the drop-down menu for
logical disk creation.
Initialization Option: Noinit: No initialization process; the logical disk
can be accessed immediately after it is created.
Regular: The controller initializes the logical disk by writing zeros to all
sectors on all member disks of the logical disk. This ensures that all
data in the logical disks are scanned and erased.
Background: The controller starts a background task to initialize the
logical disk by synchronizing the data stored on the member disks of
the logical disk. This option is only available for logical disks with
parity-based and mirroring-based RAID levels.
Alignment Offset (sector): Set the alignment offset for the logical disk
starting sector to enhance the controller’s performance. For Windows
OS, it is suggested to set the alignment offset at sector 63.
Note
Make sure the disk group to be created for a new logical disk is in
OPTIMAL or LD_INIT state, otherwise the new logical disk will not be
created.
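The arithmetic behind the sector-63 suggestion can be sketched as
follows. The assumption (not stated in the manual) is that the offset shifts
the stripe boundaries so that legacy Windows partitions, which start at
host LBA 63, begin exactly on a stripe boundary; the function name is
illustrative:

```python
# Sketch of alignment-offset arithmetic under the assumption above.

def is_stripe_aligned(host_lba, alignment_offset=63, stripe_sectors=128):
    """True when a host LBA falls on a stripe boundary of the logical disk
    (128 sectors = one 64 KB stripe with 512-byte sectors)."""
    return (host_lba - alignment_offset) % stripe_sectors == 0

print(is_stripe_aligned(63))                      # True: partition start aligned
print(is_stripe_aligned(63, alignment_offset=0))  # False: misaligned without it
```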
• Delete logical disks
Select the logical disk(s) you want to delete and click Delete. To delete all
LUNs of dgxldy, check the ‘Force to delete LUN mapping(s)’ box. All
access to the logical disk will be stopped.
• Modify logical disks
To modify a setting, select an LD and click Modify. Specify the following
options for configuration.
Name: Type a name for the DG ID / LD ID.
Preferred Controller: This option is only available when the
redundant-controller system is in use. Select the preferred controller to
be in charge of managing and accessing the logical disk. However,
the controller ownership will not change unless you check the
‘Change owner controller immediately’ box.
Write Cache: This option enables or disables the write cache of a
logical disk.
Write Sorting: This option enables or disables the sorting in the write
cache. To improve writing performance, it is recommended to turn this
option on for random access. This option is available only if the write
cache is on.
Read Ahead Policy: Always: The controller pre-fetches data for every
read command from hosts.
Adaptive: The controller pre-fetches data only for host read
commands that are detected as sequential reads. The detection is
done by read logs.
Off: If there are no sequential read commands, read-ahead will only
add overhead, and you can disable it.
Read Ahead Multiplier: This option specifies the read ahead multiplier
for the Always and Adaptive read ahead policies. Select how much
additional sequential data will be pre-fetched. The default value is 8.
Read Logs: This option specifies the number of concurrent
sequential-read streams for the Adaptive read ahead policy. The
range is between 1 and 128. The default value is 32.
LD Read Algorithm: This option is only available for logical disks with a
parity-based RAID level, i.e. RAID 3/5/6.
None: No algorithm is used when accessing data disks.
Intelligent Data Computation: The controller accesses logical disks
within the shortest response time. This greatly enhances read
performance.
Fast Read Response: When this option is selected, you are prompted
to enter the maximum response time for all read requests. The allowed
range for the response time is 100 to 15000 msecs.
Check on Read: This option is similar to Fast Read Response. In
addition to reading the requested data from disks, the controller also
performs a parity check across the corresponding strips on each data
disk.
To clear write buffers in the write cache of a logical disk, select a logical
disk and click the Flush button.
• Create logical disk (LD) snapshot volume pair
Instead of creating volume pairs in the Snapshot Volumes page, you can
directly create a volume pair for a specified logical disk by clicking the
S.VOL button. Specify a virtual disk as the secondary volume from the SV
ID drop-down menu, then click the Apply button to confirm.
• Detailed logical disk information
Click to display a complete list of logical disk information. You will see
the following details.
• DG ID
• LD ID
• Write Cache Setting
• Write Sorting
• UUID
• Created Time and Date
• LD Read Algorithm
• Alignment Offset (sector)
• Write Cache Status
• Read Ahead Policy
• Read Ahead Multiplier
• Read Logs
• Member State
2.6.5 Volumes
This feature allows you to create, delete, and modify your volume
settings. RAID 30/50/60 are supported by creating striping volumes over
RAID 3/5/6 logical disks.
• Create volumes
Click Create to add a new volume, where up to a maximum of 32
volumes can be created. Specify the following options for the
configuration.
VOL ID: Select a VOL ID from the drop-down menu.
Name: Use the system default name volx (‘x’ is the VOL identifier), or
uncheck the ‘Use system default name’ box and enter the name in the
Name field. The maximum name length is 63 bytes.
LD Level: Select a RAID level to filter the list of member LDs.
LD Owner Controller: This option is only available when the
redundant-controller system is in use. Select the owner controller of
the member LDs. Only LDs whose owner controller matches the
specified one are listed in "Member LDs".
Member LDs: Select the LDs to be grouped.
Preferred Controller: This option is only available when the
redundant-controller system is in use. Select the preferred controller to
be in charge of managing and accessing the volume.
Stripe Size (KB): The stripe size must be larger than or equal to the
cache unit size.
Alignment Offset (sector): Set the alignment offset for the volume
starting sector to enhance the controller’s performance. For Windows
OS, it is suggested to set the alignment offset at sector 63.
Note
1. All logical disks must be in the same RAID level.
2. No two logical disks can be in the same disk group.
3. None of the logical disks can be used by other volumes.
4. None of the logical disks can be bound to any LUNs.
5. All logical disks must be in the optimal state.
6. All disk groups of the logical disks must belong to the same owner
controller.
• Delete volumes
Select the volume(s) you want to delete and click Delete. To delete all
LUNs of volx, check the ‘Force to delete LUN mapping(s)’ box. All access
to the volume will be stopped.
• Modify volumes
To modify a setting, select a volume and click Modify. Specify the
following options for configuration.
Name: Type a name for the volume ID.
Preferred Controller: This option is only available when the
redundant-controller system is in use. Select the preferred controller to
be in charge of managing and accessing the volume. However, the
controller ownership will not change unless you check the ‘Change
owner controller immediately’ box.
Write Cache: This option enables or disables the write cache of a
volume.
Write Sorting: This option enables or disables the sorting in the write
cache. To improve writing performance, it is recommended to turn this
option on for random access. This option is available only if the write
cache is on.
Read Ahead Policy: Always: The controller pre-fetches data for every
read command from hosts.
Adaptive: The controller pre-fetches data only for host read
commands that are detected as sequential reads. The detection is
done by read logs.
Off: If there are no sequential read commands, read-ahead will only
add overhead, and you can disable it.
Read Ahead Multiplier: This option specifies the read ahead multiplier
for the Always and Adaptive read ahead policies. Select how much
additional sequential data will be pre-fetched. The default value is 8.
Read Logs: This option specifies the number of concurrent
sequential-read streams for the Adaptive read ahead policy. The
range is between 1 and 128. The default value is 32.
To clear write buffers in the write cache of a volume, select a volume and
click the Flush button.
• Create volume (VOL) snapshot volume pair
Instead of creating volume pairs in the Snapshot Volumes page, you can
directly create a volume pair for a specified volume by clicking the S.VOL
button. Specify a virtual disk as the secondary volume from the SV ID
drop-down menu, then click the Apply button to confirm.
• Detailed volume information
Click to display a complete list of volume information. You will see the
following details.
• VOL ID
• UUID
• Created Time and Date
• Alignment Offset (sector)
• Write Cache Status
• Write Cache Setting
• Write Sorting
• Read Ahead Policy
• Read Ahead Multiplier
• Read Logs
2.6.6 Snapshot Volumes
This feature allows you to create, delete, and modify your snapshot
volume settings. This is referred to as snapshot technology. See 5.4
Snapshot on page 5-38 for more information.
• Create snapshot volume pairs (S.VOL.Pair)
Click Add to add a new snapshot volume pair before adding new
snapshot volumes, where up to a maximum of 64 volume pairs can be
created. Specify the following options for the configuration.
PV ID: From the drop-down menu, specify an LD as the primary volume
of the volume pair.
SV ID: From the drop-down menu, specify an LD as the secondary
volume of the volume pair.
• Delete snapshot volume pairs
Select the snapshot volume pair(s) you want to delete and click Remove.
• Modify snapshot volume pairs
To modify a setting, select a snapshot volume and click Modify. Specify
the following options for configuration.
Overflow Alert (%): Specify an overflow alert threshold for a secondary
volume. The range is from 50 to 99. When the allocated space
exceeds the specified threshold, an alert notification is generated. If
not specified, the default threshold is 80.
To configure the same settings for all snapshot volume pairs, check the
‘Apply to all volume pairs’ box.
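The overflow alert is a simple percentage check; a minimal sketch
follows. The function and field names are hypothetical; only the 50-99
range and the default of 80 come from the manual:

```python
# Hypothetical sketch of the overflow-alert threshold check.

def overflow_alert(allocated_mb, capacity_mb, threshold_pct=80):
    """Return True when allocated space on the secondary volume
    exceeds the alert threshold (valid range: 50-99 percent)."""
    if not 50 <= threshold_pct <= 99:
        raise ValueError("overflow alert threshold must be 50-99")
    return allocated_mb * 100 > capacity_mb * threshold_pct

print(overflow_alert(850, 1000))       # True: 85% > default 80%
print(overflow_alert(400, 1000, 50))   # False: 40% <= 50%
```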
• Expanding the capacity of snapshot volume pairs
To expand the capacity of a snapshot volume pair, do the following:
1. Click Expand and specify the following options for a secondary
volume expansion task.
Capacity (MB): The capacity of a logical disk can be expanded if
there is a free chunk available on the disk group.
Schedule: Immediately: The task will start immediately.
Once: The task will start on the specified date and time.
Starting Free Chunk / Ending Free Chunk: This option specifies the start
and end of the free chunks to be used for the expansion. The Ending
Free Chunk must be bigger than or equal to the Starting Free Chunk.
Initialization Option: Background / Noinit
Background applies only to logical disks with a parity-based or
mirroring-based RAID level.
Note
At least one free chunk must be adjacent to the logical disk.
• Detailed snapshot volume pair information
Click to display a complete list of snapshot volume pair information.
You will see the following details.
• PV ID
• SV ID
• Overflow Alert (%)
• Create spare COW volumes (S.COW.VOL)
Click Add to add a new spare COW volume, where up to a maximum of
128 spare COW volumes can be created. Specify the following option for
the configuration.
COW VOL ID: From the drop-down menu, specify an LD as the spare
COW volume.
• Delete spare COW volumes
Select the spare COW volume you want to delete and click Remove.
• Create snapshot volumes
Click Create to add a new snapshot volume, where up to 4 snapshot
volumes can be created per primary volume. The total maximum
number of snapshot volumes that can be created is 64. Specify the
following options for the configuration.
SVOL ID: Select a snapshot volume ID from the drop-down menu.
PV ID: Select a primary volume ID from the drop-down menu.
Name: Use the system default name svolx (‘x’ is the VOL identifier), or
uncheck the ‘Use system default name’ box and enter the name in the
Name field. The maximum name length is 63 bytes.
• Delete snapshot volumes
Select the snapshot volume(s) you want to delete and click Delete. To
delete all LUNs of svolx, check the ‘Force to delete LUN mapping(s)’ box.
All access to the snapshot volume will be stopped.
• Modify snapshot volumes
To modify a setting, select a snapshot volume and click Modify. You can
type a name for the specified snapshot volume.
• Restore to snapshot volumes
To restore the primary volume to a snapshot volume in a volume pair,
select a snapshot volume and click Restore.
Figure 2-19 Method switching message
Figure 2-20 Simple storage
• Detailed snapshot volume information
Click to display a complete list of snapshot volume information. You
will see the following details.
• VOL ID
• Allocated Space on SV (MB)
• UUID
2.6.7 Storage provisioning
The RAID GUI provides three storage provisioning methods: simple,
symmetric, and selective. Whenever you change the method, the
confirmation message shown in Figure 2-19 is displayed. (The iSCSI model
supports the simple method only.)
• Simple method
Simple storage is used in direct attached storage (DAS) environments,
where there is no FC switch between the RAID and the hosts.
As Figure 2-20 shows, any computer is allowed to access the LUNs
presented by the controller after gaining access to the host ports of the
controller. LUNs are assigned to each virtual disk in the RAID so the host
can address and access the data in those devices.
Add LUNs in a storage port
In the simple storage main screen, click Add to add a LUN with a virtual
disk to the default storage group of an FC port/SAS port/SCSI port/iSCSI
port (fcpx/sasy/scpz/isp).
HTP ID: Each FC/SAS/SCSI port has a unique ID, which is determined
according to the physical location of the port on the controller. Select
one from the drop-down menu. For an iSCSI port, at least one iSCSI
target node is necessary for the LUN to be presented.
SCSI ID (for SCSI port): Select a SCSI ID from the drop-down menu. A
maximum of 16 SCSI IDs can be added to the controller.
LUN ID: Select a LUN ID from the drop-down menu, where up to a
maximum of 128 LUNs can be selected.
Mapping Virtual Disk: Select a virtual disk from the drop-down menu
for LUN mapping.
Sector Size: 512 Byte / 1KB / 2KB / 4KB
Select a sector size from the drop-down menu as the basic unit of
data transfer in a host.
Number of Cylinder / Number of Head / Number of Sector: Define a
specific cylinder, head, and sector count to accommodate different
host systems and applications. The default is Auto.
Write Completion: Write-behind: Write commands are reported as
completed when a host’s data is transferred to the write cache.
Write-through: Write commands are reported as completed only
when a host’s data has been written to disk.
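The cylinder/head/sector geometry above fixes the capacity a host sees:
it is simply the product of the three counts and the sector size. A minimal
sketch with example values (the numbers are illustrative, not defaults of
this controller):

```python
# Sketch of the CHS geometry arithmetic behind the options above.

def chs_capacity_bytes(cylinders, heads, sectors, sector_size=512):
    """Capacity implied by a cylinder/head/sector geometry."""
    return cylinders * heads * sectors * sector_size

# Classic 255-head / 63-sector translation as an example:
print(chs_capacity_bytes(1024, 255, 63))          # 8422686720 bytes
print(chs_capacity_bytes(1024, 255, 63) / 2**30)  # about 7.84 GiB
```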
Remove LUNs in storage port
Select the LUN(s) you want to remove and click Remove. To remove all
LUNs of a virtual disk from the default storage group of fcpx/sasy/scpz,
check the ‘Remove mapping virtual disk from all storage group’ box.
• Symmetric method
Symmetric storage is used in environments where hosts are equipped with
multi-path IO (MPIO) driver or software that can handle multiple paths
(LUNs) to a single virtual disk. Use the provided PathGuard package to
install and use the MPIO driver. For more information, see 5.1 Multi-Path IO Solutions.
Figure 2-21 Symmetric storage
In this case, the controller’s performance is highly elevated. You need
not consider different host ports because the bindings between hosts
and storage groups are applied to all host ports.
As Figure 2-21 shows, LUNs are assigned according to each host’s WWPN
(World Wide Port Name). Therefore, you need to set the host WWPN first.
Each host can recognize LUNs as paths to virtual disks, instead of
individual disks.
To set up symmetric storage groups, first add host(s).
Add hosts
In the symmetric storage main screen, click Host > Add.
Host ID: Select a Host ID from the drop-down menu. A maximum of 32
hosts can be added to the controller.
WWPN: Each FC port needs a WWPN for communicating with other
devices in an FC domain. You can choose the WWPN of each Fibre
HBA from the ‘Choose from detected hosts’ box or directly enter the
WWPN in this field.
SAS Address: For a SAS controller, each SAS port needs a SAS address
for communicating with other devices in a SAS domain.
Host Name: Use the system default name hostx (‘x’ is the host
identifier), or uncheck the ‘Use system default name’ box and enter
the name in the Name field. The maximum name length is 63 bytes.
HG ID: Select a Host Group ID from the drop-down menu. You can
select from hg0 to hg31 or No group. This must be set for the symmetric
method.
Remove hosts
Select the host(s) you want to delete and click Remove. Check the ‘Only
remove from host group’ box if you want to remove the host(s) from the
host group only.
Modify hosts/host group
Select the host whose host name, host group ID, or host group name you
want to change, and click Modify to enter the settings screen.
Add LUNs in Host Group
After setting the host(s), click Back to return to the symmetric storage
main screen. Then click Add to add LUNs in the HG(s).
HG ID: Select an HG ID from the drop-down menu. A maximum of 32
host groups can be added to the controller.
LUN ID: Select a LUN ID from the drop-down menu, where up to 128 IDs
are available for the selection.
Mapping Virtual Disk: Select a virtual disk from the drop-down menu
for LUN mapping.
Sector Size: 512 Byte / 1KB / 2KB / 4KB
Select a sector size from the drop-down menu as the basic unit of
data transfer in a host.
Number of Cylinder / Number of Head / Number of Sector: Define a
specific cylinder, head, and sector count to accommodate different
host systems and applications. The default is Auto.
Write Completion: Write-behind: Write commands are reported as
completed when a host’s data is transferred to the write cache.
Write-through: Write commands are reported as completed only
when a host’s data has been written to disk.
Remove LUNs from host
Select the LUN(s) you want to remove and click Remove. To remove all
LUNs of a virtual disk from one or all hosts, check the ‘Remove mapping virtual disk from all host’ box.
• Selective method
Selective storage is used in complicated SAN environments, where there
are multiple hosts accessing the controller through an FC switch. This
method provides the most flexibility for you to manage the logical
connectivity between host and storage resources exported by the
controller.
Figure 2-22 Selective storage
As Figure 2-22 shows, an HG (Host Group) can be a host or a group of
hosts that share the same access control settings in the controller. An SG
(storage group) represents a set of LUNs. Bind the host/host group and
the storage group to the same host port.
Add hosts
In the selective storage main screen, click Host > Add.
Host ID: Select a Host ID from the drop-down menu. A maximum of 32
hosts can be added to the controller.
WWPN: Each FC port needs a WWPN for communicating with other
devices in an FC domain. You can choose the WWPN of each Fibre
HBA from the ‘Choose from detected hosts’ box or directly enter the
WWPN in this field.
SAS Address: For a SAS controller, each SAS port needs a SAS address
for communicating with other devices in a SAS domain.
Host Name: Use the system default name hostx (‘x’ is the host
identifier), or uncheck the ‘Use system default name’ box and enter
the name in the Name field. The maximum name length is 63 bytes.
HG ID: Select a Host Group ID from the drop-down menu. You can
select from hg0 to hg31 or No group.
Remove hosts
Select the host(s) you want to delete and click Remove. Check the ‘Only
remove from host group’ box if you want to remove the host(s) from the
host group only.
Modify hosts/host groups
Select the host whose host name, host group ID, or host group name you
want to change, and click Modify to enter the settings screen.
Add LUNs in storage group
In the selective storage main screen, click SG > Add.
SG ID: Select an SG ID from the drop-down menu. A maximum of 34
storage groups can be created in the controller.
LUN ID: Select a LUN ID from the drop-down menu, where up to 128 IDs
are available for the selection. A total of 1024 LUNs can be created in
the controller.
Mapping Virtual Disk: Select a virtual disk from the drop-down menu
for LUN mapping.
Mask Status: Unmask / Mask
This option makes a LUN available to some hosts and unavailable to
other hosts.
Access Right: Read-only / Read-writable
The access right is applied to individual LUNs in a storage group.
Sector Size: 512 Byte / 1KB / 2KB / 4KB
Select a sector size from the drop-down menu as the basic unit of
data transfer in a host.
Number of Cylinder / Number of Head / Number of Sector: Define a
specific cylinder, head, and sector count to accommodate different
host systems and applications. The default is Auto.
Write Completion: Write-behind: Write commands are reported as
completed when a host’s data is transferred to the write cache.
Write-through: Write commands are reported as completed only
when a host’s data has been written to disk.
Remove LUNs in storage group
Select the LUN(s) you want to delete and click Remove. To remove all
LUNs of a virtual disk from all storage groups, check the ‘Remove mapping virtual disk from all storage group’ box.
Modify LUN/storage group
Select the LUN or storage group whose mask status, access right, or
storage group name you want to change, and click Modify to enter the
settings screen. To apply the same settings to all LUNs in a storage group,
check the ‘Apply to all LUNs in this storage group’ box.
Bind host/host group and storage group to host ports
Now you can click Bind in the selective storage main screen. Select from
the HTP ID, Host/ HGID, and SG ID drop-down menu for binding.
Unbind hosts/host groups and storage groups from host ports
Select a binding you want to cancel and click Unbind in the selective
storage main screen. Click Confirm to cancel the selected binding.
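The selective-method object model described above (hosts grouped into
host groups, LUN-to-virtual-disk mappings grouped into storage groups,
and bindings tying both to a host port) can be sketched as plain data
structures. All identifiers below are illustrative:

```python
# Sketch of the selective provisioning model; names are hypothetical.

host_groups = {"hg0": ["host3", "host4"],
               "hg1": ["host5", "host6", "host7", "host8"]}

storage_groups = {"sg0": {0: "jbd0", 1: "dg3ld1"},   # LUN ID -> virtual disk
                  "sg1": {0: "dg0ld0", 1: "vol2"}}

bindings = [("fcp1", "hg0", "sg0"),   # (host port, host group, storage group)
            ("fcp2", "hg1", "sg1")]

def visible_luns(host, port):
    """LUNs a host can address through a given host port."""
    luns = {}
    for htp, hg, sg in bindings:
        if htp == port and host in host_groups[hg]:
            luns.update(storage_groups[sg])
    return luns

print(visible_luns("host3", "fcp1"))  # {0: 'jbd0', 1: 'dg3ld1'}
print(visible_luns("host3", "fcp2"))  # {} - not bound on this port
```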
2.7 Maintenance Utilities
This feature allows you to perform maintenance tasks on your arrays.
2.7.1 Expanding disk groups
DG Reconfiguration allows expansion on disk groups by adding one or
more disks, thus increasing the usable capacity of the disk group. You can
also perform defragmentation during expansion.
To expand disk groups, do the following:
1. Select Maintenance Utilities > DG Reconfiguration from the main menu.
2. Click Expand and specify the following options for a DG expansion
task.
DG ID: Select a disk group for expansion from the drop-down menu.
Expanding HDDs: Select and use the arrow buttons to move one or
more unused hard disks from the Available HDDs list to the Expanding
HDDs list.
Schedule: Immediately: The task will start immediately.
Once: The task will start on the specified date and time.
Defragment during expanding: Check this option to allow
defragmentation during expansion.
3. Click Apply to review the current settings.
4. Click Confirm. The task is created.
Note
1. The disk group to be expanded must be in the optimal state.
2. You may only increase the number of hard disks; you cannot
change other disk group settings.
3. Once confirmed, please wait until the expansion process is
complete. Do not change or select any functions during the
expansion process.
2.7.2 Defragmenting disk groups
Besides defragmenting disk groups during expansion, you can also run
defragmentation as a standalone task.
1. Select Maintenance Utilities > DG Reconfiguration from the main menu.
2. Click Defragment and specify the following options for defragmenting.
DG ID: Select a disk group to defragment from the drop-down menu.
Schedule: Immediately: The task will start immediately.
Once: The task will start on the specified date and time.
3. Click Apply to view the current settings.
4. Click Confirm. The task is created.
After defragmentation is complete, all free chunks are consolidated into
one free chunk located at the end of the member disks.
Note
1. Defragmentation does not support NRAID disk groups.
2. The disk group must contain both free chunks and logical disks.
2.7.3 Changing RAID level / stripe size for logical disks
LD Reconfiguration supports stripe size and RAID level migration for logical
disks. You can conduct disk group expansion with migration at the same
time.
To change the RAID level or stripe size of a logical disk, do the following:
1. Select Maintenance Utilities > LD Reconfiguration from the main menu.
2. Click Migrate and specify the following options for an LD migration task.
DG ID/LD ID: Select a DG ID and an LD ID from the drop-down menus
for migration.
Expanding HDDs: The controller performs disk group expansion with
the specified hard disks.
RAID Level: The controller performs the specified RAID level migration.
The feasibility of a migration is limited by the original and final RAID
levels and the number of member disks in the disk group. The following
table defines the rules for the number of member disks during RAID
migration.
Table 2-11 Limitations of the number of member disks
* Where “Nn” means the number of member disks in the new RAID level, “No”
means the number of member disks in the original/old RAID level, “OK” means the
migration is always possible, and “N/A” means the migration is disallowed.
Stripe Size (KB): This option must be specified when migrating from a
non-striping-based RAID level to a striping-based RAID level.
Schedule: Immediately: The task will start immediately.
Once: The task will start on the specified date and time.
Defragment during migration: Check this option to allow
defragmentation during migration.
2.7.4 Expanding the capacity of logical disks in a disk group
To expand the capacity of a logical disk, do the following:
1. Select Maintenance Utilities > LD Reconfiguration from the main menu.
2. Click Expand and specify the following options for an LD expansion
task.
DG ID/LD ID: Select a DG ID and an LD ID from the drop-down menus
for expansion.
Capacity (MB): The capacity of a logical disk can be expanded if
there is a free chunk available on the disk group.
Schedule: Immediately: The task will start immediately.
Once: The task will start on the specified date and time.
Starting Free Chunk / Ending Free Chunk: This option specifies the start
and end of the free chunks to be used for the expansion. The Ending
Free Chunk must be bigger than or equal to the Starting Free Chunk.
Initialization Option: Background / Noinit
Background applies only to logical disks with a parity-based or
mirroring-based RAID level.
Note
1. The new capacity must be bigger than the current capacity.
2. The sum of the increased capacity of all logical disks on the disk
group must be less than or equal to the sum of the capacity of all
selected free chunks.
3. At least one free chunk must be adjacent to the logical disk.
3. Click Apply to view the current settings.
4. Click Confirm. The task is created.
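The capacity rules in the note above are straightforward to check. The
sketch below simplifies them to a single logical disk; the function name is
hypothetical:

```python
# Sketch of the expansion validation rules, simplified to one logical disk.

def can_expand(current_mb, new_mb, free_chunks_mb):
    """The new capacity must exceed the current one, and the growth
    must fit within the selected free chunks."""
    growth = new_mb - current_mb
    return growth > 0 and growth <= sum(free_chunks_mb)

print(can_expand(1000, 1500, [256, 300]))  # True: 500 MB fits in 556 MB
print(can_expand(1000, 1600, [256, 300]))  # False: 600 MB does not fit
```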
2.7.5 Shrinking logical disks
The shrink operation runs without a background task; it simply reduces
the capacity of the logical disk.
To release free space of a logical disk on a disk group, do the following:
1. Select Maintenance Utilities > LD Reconfiguration from the main menu.
2. Click Shrink and specify the following options for an LD shrink task.
DG ID/LD ID: Select a DG ID and an LD ID from the drop-down menus
for the shrink.
Capacity (MB): Enter the new capacity for the specified logical disk to
be shrunk. Note that the new capacity must be greater than zero.
Note
It is advised that the file systems on the host be shrunk before
shrinking the logical disks; otherwise shrinking might cause data loss
or file system corruption.
3. Click Apply to view the current settings.
4. Click Confirm. The task starts.
2.7.6 Expanding volumes
To expand the capacity of a volume, do the following:
1. Select Maintenance Utilities > VOL Reconfiguration from the main
menu.
2. Select Expand and specify the following options for a VOL expansion
task. The expanded volume is formed by concatenating the new
logical disks.
VOL ID: Select a VOL ID from the drop-down menu for expansion.
LD Level: Select a RAID level to filter the list of expanding LDs.
Expanding LDs: Select and use the arrow buttons to move one or more
LDs from the Available LDs list to the Expanding LDs list.
Note
1. The volume must be in optimal state.
2. The maximum number of member logical disks for each volume is
eight.
3. No two logical disks can be in the same disk group.
4. None of the logical disks can be used by other volumes.
5. None of the logical disks can be bound to any LUNs.
6. All logical disks must be in the optimal state.
7. All disk groups of the logical disks must belong to the same owner
controller.
3. Click Apply to view the current settings.
4. Click Confirm to continue the expansion.
2.7.7 Shrinking volumes
The shrink operation runs without a background task; it simply reduces
the capacity of the volume by removing concatenated volume units.
To release free space of a volume, do the following:
1. Select Maintenance Utilities > VOL Reconfiguration from the main menu.
2. Select Shrink and specify the following options for a VOL shrink task.
VOL ID: Select a VOL ID from the drop-down menu for the shrink.
Shrinking VUs: Select the member VUs you want to remove and use
the arrow buttons to move them to the Shrinking VUs list.
Note
1. The volume must be in optimal state.
2. There must be at least two concatenating volume units in a
volume.
3. All selected volume units must be the last concatenating volume
units in the volume.
3. Click Apply to view the current settings.
4. Click Confirm to continue the shrink.
2.7.8 Cloning hard disks
When a hard disk is likely to become faulty or develop errors, for
example, when the number of reported errors or bad sectors of a
physical disk increases over a certain threshold, or a disk reports a SMART
warning, you can copy all the data on the disk to another disk.
To clone a hard disk, do the following:
1. Select Maintenance Utilities > HDD Clone from the main menu.
2. Click Clone and specify the following disk cloning options.
Source Disk: Select a source disk you want to clone. The disk must not
be in an NRAID disk group.
Target Disk: Select the target disk to be the clone. The disk must be
either unused, a global spare, or a local spare of the same disk group
as the Source Disk.
Schedule: Immediately: The task will start immediately.
Once: The task will start on the specified date and time.
Automatic Resume: During cloning, if the target disk fails, the controller
will use another disk and resume cloning. [The Auto Spare Control
option (see 2.7.16 Miscellaneous on page 2-55) must be set to On.]
The following is the order of disks used to resume cloning:
1. Local spare disks
2. Global spare disks
3. Unused disks
If there is no disk to resume cloning, or this option is not specified,
cloning is aborted when the target disk fails.
Note
1. If there is a disk scrubbing task or a parity regeneration task in the
disk group of the source disk, that task is aborted and cloning is
started.
2. If the disk group of the source disk contains faulty disks, cloning is
suspended until the disk group completely rebuilds its disks.
3. Click Apply. The task will start according to the specified time.
To cancel hard disk cloning, do the following:
1. Select the task(s) and click Stop to abort disk cloning. A confirmation
prompt displays. Click Confirm to cancel the cloning task.
The target disk will become an unused disk. If there is a degraded disk
group and auto-spare option is on, the target disk will be used for
rebuilding.
2.7.9 Scrubbing
This feature supports parity check and recovery for disk groups, logical
disks, and hard disks. Bad sectors will be reported when detected.
To perform disk scrubbing on a disk group, do the following:
1. Select Maintenance Utilities > Scrubbing from the main menu.
2. Click Scrub and specify the following options for a disk scrubbing task.
Target Type: Select either HDD or DG as the scrubbing target type.
HDD: Specify an HDD ID for scrubbing.
DG: Specify a DG ID and an LD ID/All LDs for scrubbing.
Parity Check: This option is only available for LDs with a parity-based
RAID level.
None: No parity check is performed.
Check Only: The controller checks the parity for logical disks.
Regenerate: Any parity inconsistency detected is regenerated by the
controller.
Schedule: Immediately: The task will start immediately.
Once: The task will start on the specified date and time.
Weekly: The task will start on the specified day and time every week.
Monthly: The task will start on the specified date and time every
month.
3. Click Apply. The task will start according to the specified time.
Note
1. The hard disk must not be a member disk of a disk group.
2. The disk group and logical disk(s) for scrubbing must be in the
optimal state.
3. The scrubbing task will be aborted if the disk group enters
degraded mode, starts rebuilding a disk, or starts disk cloning.
4. If the disk group contains faulty disks, scrubbing is suspended until
the disk group completely rebuilds its disks.
To cancel disk scrubbing, do the following:
1. Select the task(s) and click Stop to abort the disk scrubbing. A
confirmation prompt displays. Click Confirm to cancel the scrubbing
task.
2.7.10 Regenerating the parity
This feature is less complicated than scrubbing. This command
regenerates the parity of a logical disk or all logical disks on disk groups
without parity checking. Follow the steps below to create a regenerating
parity task.
1. Select Maintenance Utilities > Regenerate Parity from the main menu.
2. Click Reg-parity and specify the following options for a parity
regeneration task.
DG ID/LD ID: Select a DG ID and an LD ID or All LDs from the
drop-down menus for parity regeneration.
Schedule: Immediately: The task will start immediately.
Once: The task will start on the specified date and time.
Weekly: The task will start on the specified day and time every week.
Monthly: The task will start on the specified date and time every
month.
3. Click Apply. The task will start according to the specified time.
To stop parity regeneration, do the following:
1. Select the task(s) and click Stop. A confirmation prompt displays. Click
Confirm to stop the parity regeneration task.
2.7.11 Performing disk self test
This feature instructs the hard disks to start or stop short or extended disk
self test (DST). The test performs a quick scan for bad sectors. To execute
this function, make sure the SMART warning has been turned on. (See
2.8.1 Hard disks on page 2-57)
Follow the steps below to start a disk self test:
1. Select Maintenance Utilities > Disk Self Test from the main menu.
2. Select the hard disks you want to perform the disk self test and click
DST. Specify the following options.
Schedule: Immediately: The task will start immediately.
Once: The task will start on the specified date and time.
Weekly: The task will start on the specified day and time every week.
Monthly: The task will start on the specified date and time every
month.
Perform extended disk self test: Check this option to start an extended
disk self test. Without this option, the hard disks perform a short disk self
test.
3. Click Confirm to begin testing.
To stop the DST of a hard disk, select it and click Stop. A confirmation
prompt displays. Click Confirm to end the DST.
Note
1. Hard disks must support DST.
2. Hard disks must not be executing DST.
3. For ATA disks, the SMART must be turned on.
4. For ATA disks, if SMART is turned off during DST execution, DST
will be aborted.
5. During DST execution, accessing the hard disks may lead to
performance degradation.
6. For scheduling DST, the disk must be either unused, a global
spare, a local spare, or a JBOD.
7. (For redundant-controller system only) The DST may not continue
after failover and the following error messages may pop up (see
5.3 Redundant Controller on page 5-21 for more detailed
information on failover):
• The self-test was interrupted by the host with a hardware or
software reset.
• Self-test fail due to unknown error.
Users can simply re-launch the DST process when encountering
the above conditions. Please note that some disks may continue
the DST process without any problems.
2.7.12 Array roaming
Array roaming is activated when hard disks are moved from one slot to
another or from one controller to a new controller. This ensures that the
new controller can keep working at all times. You can determine how
array roaming behaves through the Auto Array Roaming Control option
(see 2.7.16 Miscellaneous on page 2-55).
When the Auto Array Roaming Control option is enabled, the
configuration of the disks can be identified and restored, and
uncompleted tasks are automatically resumed.
Some hard disk configurations may cause conflicts when moved to a
new controller. You are allowed to view group information, including the
virtual disk and hard disk states, from the Array Roaming page.
Note
At the top of the page, you can select the group ID and the group
type (JBOD disk, disk group, or volume) for the information to be
displayed. Each group type has different columns on this page.
To import the foreign/conflict disks, click the Import button and specify
the following options.
Target ID: Select an ID (which may be a JBOD ID, disk group ID, or
volume ID) to be used after the import.
Members: Select the foreign/conflict hard disks whose configurations
are to be imported and restored. Use the arrow buttons to move the
hard disks from the Available Members list to the Selected Members
list.
Force to import abnormal group: Check this option to allow the import
of incomplete disk groups. Without this option, only normal disk groups
and volumes can be restored.
2.7.13 Array recovery
With the Array Recovery Utility (ARU), you can recover the disk groups,
logical disks, and volumes. To perform recovery, you must fully
understand the partition state of each logical disk.
A partition of a logical disk can be in one of the following states:
OPTIMAL, FAULTY, BANISH, REBUILD, or UNTRUST. Each state is described
below:
• OPTIMAL: The partition is working and the data is valid.
• FAULTY: The partition is lost (the member disk is removed or faulty),
resulting in a faulty logical disk. The data on the faulty partition will
still be in sync with the data on other partitions, and the data on the
faulty partition can be used after recovery.
• BANISH: The partition is lost (the member disk is removed or faulty)
and it results in a degraded logical disk. The data on the banish
partition will be out of sync with data on other partitions. The data on
the banish partition can’t be used after recovery.
• REBUILD: The member disk of the partition has been added to the
logical disk, and the partition is rebuilding the data.
• UNTRUST: The member disk of the partition has been added to the
logical disk, but the data on the partition cannot be trusted. It can
become trusted if the logical disk can rebuild the data on the
partition.
• Partition state transition
The corresponding events and state transitions of a partition are shown
in Table 2-12 below:
Table 2-12 State transition
Event: Disk is failed or removed
• OPTIMAL → FAULTY (for a faulty logical disk) or BANISH (for a
degraded logical disk)
• REBUILD → BANISH
• UNTRUST → BANISH
Event: Lost member disk is replaced by a new disk for disk rebuilding
• FAULTY → UNTRUST (the logical disk is not recoverable)
• BANISH → UNTRUST (and later to REBUILD)
Event: Lost member disk is restored to a disk group by the ARU
• FAULTY → OPTIMAL
• BANISH → UNTRUST (and later to REBUILD)
Event: Force to recover a logical disk by the ARU
• UNTRUST → OPTIMAL or REBUILD
Event: The partition completes data rebuilding
• REBUILD → OPTIMAL
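The transitions in Table 2-12 can be encoded as a simple lookup. The
sketch below is illustrative only (the event strings and dict structure are
invented, and the ambiguous forced-recovery outcome is simplified to
OPTIMAL):

```python
# Table 2-12 encoded as a (state, event) -> next-state lookup; simplified.

TRANSITIONS = {
    ("OPTIMAL", "disk failed/removed (faulty LD)"):   "FAULTY",
    ("OPTIMAL", "disk failed/removed (degraded LD)"): "BANISH",
    ("REBUILD", "disk failed/removed"):               "BANISH",
    ("UNTRUST", "disk failed/removed"):               "BANISH",
    ("FAULTY",  "replaced by new disk"):              "UNTRUST",
    ("BANISH",  "replaced by new disk"):              "UNTRUST",  # then REBUILD
    ("FAULTY",  "restored by ARU"):                   "OPTIMAL",
    ("BANISH",  "restored by ARU"):                   "UNTRUST",  # then REBUILD
    ("UNTRUST", "forced recovery by ARU"):            "OPTIMAL",
    ("REBUILD", "rebuild completes"):                 "OPTIMAL",
}

def next_state(state, event):
    return TRANSITIONS.get((state, event), state)  # unknown events: no change

print(next_state("FAULTY", "restored by ARU"))     # OPTIMAL
print(next_state("REBUILD", "rebuild completes"))  # OPTIMAL
```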
Before logical disk recovery, make sure the following:
• There are enough hard disks in the disk group.
• No background tasks in progress, such as disk rebuilding or RAID
reconfiguration.
• No reconfiguration tasks are performed by the faulty logical disk.
• Start a recovery
When there are any hard disk conflicts, there might be faulty disk groups,
logical disks, or volumes on your controller. You can perform DG recovery
to restore lost member disks to a disk group. The faulty logical disks on the
disk group are recovered automatically when the disk group is
recovered.
To perform a disk group recovery, do the following:
1. Select Maintenance Utilities > Array Recovery from the main menu.
2. Select DG from the Recovery Type drop-down menu.
3. Select a disk group, and click Recover.
4. The Restore the Array window displays. Select the original member disks
to restore.
Note
1. If a non-member disk is selected, check the Force to recover
disk option and specify the Disk Member Index. Make sure the
recovery index is correct.
2. To reduce the possibility of data loss, ensure that the recovery
order is correct when the Force to recover disk option is chosen.
5. Click Apply and a confirmation prompt displays. Click Confirm.
6. The disk group recovery starts. Rebuilding will also start for degraded
logical disks on a disk group.
If the logical disk is not recovered automatically after disk group
recovery, perform logical disk recovery. After logical disks are restored,
you can perform the volume recovery to restore the lost member logical
disks to a volume.
2.7.14 Schedule task
The DG reconfiguration, LD reconfiguration, disk cloning, disk scrubbing,
and DST scheduled tasks are listed in the Schedule Task section. When the
scheduled date and time is met, the controller will start the specified
tasks.
Note
The controller will try to launch commands according to the schedule.
However, if the command cannot be executed at that moment, the
controller will not retry.
To cancel a scheduled task, select it and click Delete. A confirmation
prompt displays. Click Confirm to delete the selected task.
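How the Immediately/Once/Weekly/Monthly options resolve to a next
start time is not documented, so the following is purely an illustrative
sketch of one plausible scheme:

```python
# Hypothetical sketch of schedule resolution; not the controller's scheduler.
# Month-length edge cases (e.g. day 31 in a short month) are ignored here.

from datetime import datetime, timedelta

def next_run(mode, now, when=None):
    if mode == "Immediately":
        return now
    if mode == "Once":                   # 'when' is a full datetime
        return when
    if mode == "Weekly":                 # 'when' is (weekday, hour, minute)
        wd, h, m = when
        run = now.replace(hour=h, minute=m, second=0, microsecond=0)
        run += timedelta(days=(wd - now.weekday()) % 7)
        return run if run > now else run + timedelta(days=7)
    if mode == "Monthly":                # 'when' is (day, hour, minute)
        d, h, m = when
        run = now.replace(day=d, hour=h, minute=m, second=0, microsecond=0)
        if run <= now:                   # roll into the next month
            month, year = (1, now.year + 1) if now.month == 12 \
                          else (now.month + 1, now.year)
            run = run.replace(year=year, month=month)
        return run

now = datetime(2007, 2, 26, 12, 0)
print(next_run("Weekly", now, (6, 3, 0)))   # next Sunday at 03:00
```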
2.7.15 Cache Configurations
In this section, you can configure the following controller settings. The
settings of Cache Unit Size, Auto Array Roaming Control, and Write Log
Control take effect after you restart the RAID subsystem.
Cache Unit Size: The cache unit size must be smaller than or equal to
the minimum stripe size of the existing logical disks.
Read Ahead Expire Control (1/100 second): 55 (default)
Specify the read ahead expire control in 1/100 seconds. The range is
from 10 to 1000.
Write Cache Periodic Flush (second): 5 (default)
Specify the period in seconds to periodically flush the write cache. If 0
is specified, periodic cache flushing is disabled. The range is from 0 to
999.
Write Cache Flush Ratio (%): 45 (default)
Specify the dirty write buffer watermark. When the specified
percentage is reached, the system will start to flush the write buffers
immediately. The range is from 1% to 100%.
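The two flush triggers above (a periodic timer and a dirty-buffer
watermark) combine as in the minimal sketch below; the function and
parameter names are hypothetical, and only the defaults (5 seconds,
45%) and ranges come from the manual:

```python
# Hypothetical sketch of the two write-cache flush triggers.

def should_flush(dirty_buffers, total_buffers, seconds_since_flush,
                 period_s=5, ratio_pct=45):
    if period_s and seconds_since_flush >= period_s:   # periodic flush
        return True                                    # (period 0 disables)
    return dirty_buffers * 100 >= total_buffers * ratio_pct  # watermark

print(should_flush(40, 100, 2))   # False: under 45% and timer not due
print(should_flush(50, 100, 2))   # True: watermark reached
```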
Mirrored Write Cache Control: On (default) / Off
This option is only available on the redundant-controller system. If this
option is enabled, all written data from hosts will be mirrored to the
peer controller.
Note
Disabling the Mirrored Write Cache Control function improves write
performance, but it can cause data loss when a controller fails. Do
not disable it if you use an active-active redundant controller
configuration.
2.7.16 Miscellaneous
Auto Spare Control: On (default) / Off
If this option is enabled, and there is no global spare disk, unused
hard disks are used for rebuilding. If there are multiple unused disks,
the disk with the lowest hard disk identifier will be used.
Spare Restore Control: On / Off (default)
If this option is enabled, the controller will restore the data from the
spare disk to a new replacement disk when it is inserted. This allows the
user to keep the same member disks as the original configuration.
Auto Array Roaming Control: On / Off (default)
On: Enable imported foreign hard disks when the controller is started.
Foreign hard disk configurations are also restored.
Off: Disable imported foreign hard disks when the controller is started.
On-line Array Roaming Control: On / Off (default)
On: The controller will try to keep a disk in the foreign state if the hard
disk contains valid meta-data. However, if the disk fails to import
successfully, it will enter the conflict state.
Off: All on-line installed disks are perceived as new disks and enter the
unused state. Meta-data on the disk is cleared and reset.
Note
Hard disks with configurations that conflict with controller
configurations are not imported and enter conflict state.
Write Log Control: On (default) / Off
The consistency of parity and data might not be retained because of
improper shutdown of the controller. This option enables or disables
write logging for parity consistency recovery.
Note
1. Enabling write logging will cause slight performance degradation.
2. Write logging is only effective to logical disks with parity-based
RAID levels.
3. To guarantee the consistency of data and parity by write logging,
the on-disk cache must be turned off.
Meta-data Update Frequency: Low (default) / Medium / High
This option specifies the frequency to update the progress of
background tasks, except reconfiguration tasks.
Task Notify: On / Off (default)
This option enables or disables event notification when a background
task reaches a specified completion percentage. The range is from
1% to 99%.
2.8 Hardware Configurations
2.8.1 Hard disks
In this section, you can configure the following settings for all hard disks.
Utilities Task Priority: Low (default) / Medium / High
This option determines the priority of the background tasks for utilities
of all hard disks not belonging to any disk group, such as scrubbing
and cloning.
SMART Warning: On / Off (default)
This option is only for hard disks that support the SMART function. The
SMART function serves as a device status monitor.
Period of SMART Polling (minute): 60 (default)
This option is only available when the SMART warning is turned on.
Specify the period in minutes at which the SMART status is polled from
the hard disks.
SMART Action: Alert (default) / Clone
This option is only available when the SMART warning is turned on. The
controller will alert you or start disk cloning when a disk reports a
SMART warning.
Disk IO: timeout after 30 (default) sec(s) and retry 1 (default) time(s)
Timeout value (in seconds): If a hard disk does not respond to a
command within this time, the controller will reset and reinitialize the
hard disk, and retry the command. The possible values are 1 to 60.
Retry times: Specify the number of retries when a disk IO command
fails. The possible values are 0 to 8.
Transfer Speed: Auto (default) / 1.5GB / 3GB
This option specifies the transfer speed of a hard disk. When Auto is
specified, the transfer speed is determined by the controller according
to the best transfer mode supported by the installed hard disks. This
option is available only for RAID controllers with a SATA disk interface.
Bad Block Alert: On / Off (default)
This option enables or disables event alerts for bad block reallocation.
After selecting On, four blank fields are displayed for you to specify
the percentages of reserved bad block reallocation space. The
default values are 20, 40, 60, and 80.
Figure 2-23 Specify the percentage for Bad Block Alert
Note
1. Later percentages must be larger than the earlier percentages.
2. Percentages must be integers between 1 and 100.
Bad Block Clone: On / Off (default)
This option enables or disables disk cloning for bad block reallocation.
After selecting On, a blank field is displayed for you to specify the
percentage of reserved bad block reallocation space. When the
specified space is reached, disk cloning will be started. The default
value is 50.
Figure 2-24 Specify the percentage for Bad Block Clone
Note
1. Percentages must be integers between 1 and 100.
2. Cloning can only be started when there are local or global spare
disks.
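The alert and clone thresholds above can be pictured together in a small
sketch; the function name is hypothetical, and only the default values
(20/40/60/80 for alerts, 50 for cloning) come from the manual:

```python
# Hypothetical sketch of the bad-block alert and clone thresholds.

ALERT_STEPS = [20, 40, 60, 80]   # default ascending alert percentages
CLONE_AT = 50                    # default clone threshold

def bad_block_actions(used_pct):
    """Alerts already crossed, and whether cloning should start, for a given
    percentage of reserved reallocation space consumed."""
    alerts = [p for p in ALERT_STEPS if used_pct >= p]
    clone = used_pct >= CLONE_AT
    return alerts, clone

print(bad_block_actions(45))  # ([20, 40], False)
print(bad_block_actions(55))  # ([20, 40], True) -> cloning starts
```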
Bad Block Retry: On (default) / Off
Select this option to enable or disable retrying when bad block
reallocation fails.
IO Queue: On (default) / Off
Select this option to enable or disable Native Command Queuing
(NCQ), which enhances hard disk read performance. This option is
available only for RAID controllers with a SATA disk interface.
Disk Standby Mode: On / Off (default)
Select this option to enable or disable disk standby mode after a
period of host inactivity.
Disk Access Delay Time (second): 15 (default)
Specify the delay time before the controller tries to access the hard
disks after power-on. The range is between 15 and 75.
Delay Time When Boot-Up (second): 40 (default)
Specify the delay time before the controller automatically restarts.
The range is between 20 and 80.
Caution
The boot-up delay time must be longer than the disk access delay
time plus 5 seconds.
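The caution is a simple arithmetic constraint on the two delay settings; a
minimal sketch (the helper name is hypothetical):

```python
# Sketch of the caution above: boot-up delay must exceed the disk access
# delay by more than 5 seconds, within each setting's documented range.

def delays_valid(boot_delay_s, disk_access_delay_s):
    return (20 <= boot_delay_s <= 80 and 15 <= disk_access_delay_s <= 75
            and boot_delay_s > disk_access_delay_s + 5)

print(delays_valid(40, 15))  # True: the defaults satisfy the rule
print(delays_valid(25, 30))  # False: boot-up delay too short
```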
2.8.2 Ports
2.8.2.1 FC / SAS / SCSI ports
This shows information about the FC/SAS/SCSI ports. For FC ports, it
includes the Controller Failover Mode (for redundant controller only) and
each port’s ID, name, WWN, hard loop ID, connection mode (private
loop, public loop, or point-to-point), and data rate. For SAS ports, it
includes each port’s ID, name, and SAS address. For SCSI ports, it includes
each port’s ID, name, default SCSI ID, and data rate. To change the
settings, follow the instructions below:
Note
In redundant-controller systems, the four FC ports are given
identifiers fcpa1, fcpa2, fcpb1, and fcpb2 to identify the
corresponding port positions located on each controller.
1. Select an FC/SAS/SCSI port and click Modify to open the configurations
window.
2. Specify the following options.
Controller Failover Mode: (For FC port with redundant controller only)
Multipath IO: This mode allows the host computer to access the RAID
system over multiple paths. To use this mode, PathGuard needs to be
installed. See 5.1 Multi-Path IO Solutions for more information.
Multiple-ID: This function requires the use of a fibre switch. When you
select this function, only the simple method is available for storage
provisioning. See 5.2 Multiple ID solutions for more information.
Name: Type a name associated with each FC/SAS/SCSI port. The
maximum name length is 15 bytes. For SAS ports, skip to step 4 after
setting the name.
Hard Loop ID: Select a fixed loop ID for each FC port from the
drop-down menu. To disable the hard loop ID, select Auto; the loop ID
is then automatically determined during the loop initialization
procedure.
Connection Mode: Auto: The controller will determine the connection
mode automatically.
Arbitration loop: This is a link that connects all the storage devices with
the host, which enables data transfer.
Fabric: This is a point-to-point connection mode without a switch.
Default SCSI ID: (For SCSI port)
Select a fixed SCSI ID for each SCSI port from the drop-down menu.
The ID range is from 0 to 15.
Data Rate: Auto / 1GB / 2GB / 4GB
Select a preferred data rate for an FC port or all FC ports, or for a SCSI
port or all SCSI ports. The default SCSI setting is Ultra320.
3. Check the ‘Apply connection mode and data rate to all FC ports’
option if necessary.
Check the ‘Apply data rate to all SCSI ports’ option if necessary (SCSI
port).
4. Click Apply and the ‘Restart to Apply’ prompt box appears. Click
Restart to restart the controller immediately, or OK to restart later.
5. All settings except FC/SAS/SCSI port name are effective after you
reconnect the controller.
• Setting the FC Worldwide Node Name
The default worldwide port name (WWPN) of each FC port is different.
Assigning the same worldwide node name (WWNN) to all FC ports helps
the RAID system be recognized as one device across all of its FC ports.
To set the FC worldwide node name, click the WWNN button. Then select
Distinct to use different FC WWPNs, or Identical to synchronize all FC ports
using the same WWNN. Click Apply to save. The WWNN of all FC ports will
be synchronized the next time you start the RAID system.