Qsan Technology P300H61, P300H71 User Manual

QSAN iSCSI subsystem
P300H61 / P300H71
GbE iSCSI to SATA II / SAS
RAID subsystem
User Manual
Version 7.79 (MAR, 2011)
QSAN Technology, Inc.
http://www.QsanTechnology.com
User Manual# QUM201106-P300H61_P300H71
Preface
Copyright
Copyright © 2011, QSAN Technology, Inc. All rights reserved. No part of this manual may be reproduced or transmitted without written permission from QSAN Technology, Inc.
Trademarks
All products and trade names used in this manual are trademarks or registered trademarks of their respective companies.
About this manual
This manual introduces the QSAN P300H61 / P300H71 subsystem and aims to help users understand the operation of the disk array system easily. Information contained in this manual has been reviewed for accuracy, but not for product warranty, because of the variety of environments, operating systems, and settings. Information and specifications are subject to change without notice. For updated information, please visit www.QsanTechnology.com or contact your sales representative.
Thank you for using QSAN Technology, Inc. products. If you have any questions, please e-mail support@qsan.com.tw. We will answer your questions as soon as possible.
Caution
Do not attempt to service, change, disassemble or upgrade the equipment's components yourself. Doing so may void your warranty and expose you to electric shock. Refer all servicing to authorized service personnel. Please always follow the instructions in this user's manual.
Table of Contents
Chapter 1 Overview ................................................................. 6
1.1 Features .......................................................................................6
1.1.1 Highlights...................................................................................................................7
1.2 RAID concepts..............................................................................8
1.2.1 Terminology...............................................................................................................8
1.2.2 RAID levels ..............................................................................................................11
1.2.3 Volume relationship.................................................................................................11
1.3 iSCSI concepts............................................................................12
1.4 Subsystem specifications ............................................................14
1.4.1 Technical specifications...........................................................................................14
1.4.2 FCC and CE statements...........................................................................................17
Chapter 2 Installation ........................................................... 19
2.1 Package contents........................................................................19
2.2 Before installation.......................................................................19
2.3 Enclosure....................................................................................19
2.3.1 Front view................................................................................................................19
2.3.2 Control panel ...........................................................................................................20
2.3.3 Install drives ............................................................................................................21
2.3.4 Rear view.................................................................................................................22
2.4 Install battery backup module (optional) ....................................24
2.5 Deployment ................................................................................25
Chapter 3 Quick setup ........................................................... 29
3.1 Management interfaces ..............................................................29
3.1.1 Serial console...........................................................................................................29
3.1.2 Remote control ........................................................................................................29
3.1.3 LCM..........................................................................................................................29
3.1.4 Web UI.....................................................................................................................32
3.2 How to use the system quickly ...................................................34
3.2.1 Quick installation .....................................................................................................34
3.2.2 Volume creation wizard...........................................................................................37
Chapter 4 Configuration........................................................ 40
4.1 Web UI management interface hierarchy ...................................40
4.2 System configuration..................................................................41
4.2.1 System setting.........................................................................................................41
4.2.2 Network setting .......................................................................................................42
4.2.3 Login setting............................................................................................................43
4.2.4 Mail setting ..............................................................................................................44
4.2.5 Notification setting...................................................................................................45
4.3 iSCSI configuration .....................................................................47
4.3.1 NIC...........................................................................................................................47
4.3.2 Entity property.........................................................................................................50
4.3.3 Node ........................................................................................................................51
4.3.4 Session.....................................................................................................................54
4.3.5 CHAP account ..........................................................................................................55
4.4 Volume configuration..................................................................56
4.4.1 Physical disk.............................................................................................................57
4.4.2 RAID group..............................................................................................................60
4.4.3 Virtual disk...............................................................................................................63
4.4.4 Snapshot..................................................................................................................68
4.4.5 Logical unit ..............................................................................................................71
4.4.6 Example ...................................................................................................................73
4.5 Enclosure management ..............................................................77
4.5.1 Hardware monitor....................................................................................................78
4.5.2 UPS ..........................................................................................................................80
4.5.3 SES...........................................................................................................................82
4.5.4 Hard drive S.M.A.R.T...............................................................................................82
4.6 System maintenance ..................................................................83
4.6.1 System information..................................................................................................84
4.6.2 Event log..................................................................................................................84
4.6.3 Upgrade ...................................................................................................................86
4.6.4 Firmware synchronization........................................................................................87
4.6.5 Reset to factory default...........................................................................................87
4.6.6 Import and export ...................................................................................................88
4.6.7 Reboot and shutdown .............................................................................................88
4.7 Home/Logout/Mute.....................................................................89
4.7.1 Home .......................................................................................................................89
4.7.2 Logout......................................................................................................................89
4.7.3 Mute.........................................................................................................................89
Chapter 5 Advanced operations .......................................... 90
5.1 Volume rebuild ...........................................................................90
5.2 RG migration and moving...........................................................92
5.3 VD extension ..............................................................................94
5.4 QSnap.........................................................................................95
5.4.1 Create snapshot volume..........................................................................................95
5.4.2 Auto snapshot..........................................................................................................96
5.4.3 Rollback ...................................................................................................................97
5.4.4 QSnap constraint .....................................................................................................98
5.5 Disk roaming ............................................................................100
5.6 VD clone ...................................................................................101
5.7 SAS JBOD expansion ................................................................108
5.7.1 Connecting JBOD...................................................................................................108
5.7.2 Upgrade firmware of JBOD....................................................................................113
5.8 MPIO and MC/S ........................................................................113
5.9 Trunking and LACP ...................................................................115
5.10 Dual controllers ........................................................................117
5.10.1 Perform I/O ...........................................................................................................117
5.10.2 Ownership..............................................................................................................118
5.10.3 Controller status ....................................................................................................119
5.11 QReplica ...................................................................................120
Chapter 6 Troubleshooting ................................................. 135
6.1 System buzzer ..........................................................................135
6.2 Event notifications ....................................................................135
6.3 How to get support...................................................................142
Appendix ................................................................................ 146
A. Compatibility list .......................................................................146
B. Microsoft iSCSI initiator ............................................................147
Chapter 1 Overview
1.1 Features
The QSAN subsystem provides non-stop service with a high degree of fault tolerance by using QSAN RAID technology and advanced array management features.
The P300H61 / P300H71 subsystem connects to the host system through an iSCSI interface. It can be configured to numerous RAID levels. The subsystem provides reliable data protection for servers by using RAID 6, which allows two HDD failures without any impact on the existing data; data can be recovered from the remaining data and parity drives.
Figure 1.1.1 (P300H61)
Figure 1.1.2 (P300H71)
Snapshot-on-the-box (QSnap) is a fully usable copy of a defined collection of data that contains an image of the data as it appeared at a particular point in time, i.e., point-in-time data replication. It provides consistent and instant copies of data volumes without any system downtime. Snapshot-on-the-box can keep up to 32 snapshots for one logical volume. A rollback feature is provided for restoring the previous snapshot data easily while the volume remains available for further data access. Data access, including reads and writes, works as usual without any impact on end users. "On-the-box" means that it does not require any proprietary agent installed on the host side; the snapshot is taken on the target side. It does not consume any host CPU time, so the server stays dedicated to its applications. Snapshot copies can be taken manually or by schedule, every hour or every day, depending on requirements.
The QSAN subsystem is a cost-effective disk array system with completely integrated high-performance and data-protection capabilities which meet or exceed the highest industry standards, and a solid data solution for small / medium business (SMB) users.
1.1.1 Highlights
QSAN P300H61 / P300H71 feature highlights
Host Interface: 8 x iSCSI GbE ports
Drive Interface: 16 x SAS or SATA II (P300H61); 24 x SAS or SATA II (P300H71)
RAID Controllers: Dual-active RAID controllers
Scalability: SAS JBOD expansion port
Green: Auto disk spindown; advanced cooling
RAID Level: RAID 0, 1, 0+1, 3, 5, 6, 10, 30, 50, 60 and JBOD; Qsan N-way mirror
Compatibility: Supports multiple OSes, applications, iSCSI HBAs, software initiators, etc.
Virtualization: VMware, Hyper-V, Citrix
Data Protection: QSnap writeable snapshot
Connection Availability: Load balancing and failover support on the 8 iSCSI GbE ports
Dimensions (W x D x H): 447 x 490 x 130 mm (P300H61); 447 x 490 x 171 mm (P300H71)
Power Supply: 2 x 500W PSU (P300H61); 3 x 500W PSU (P300H71)
Cache Protection: Hot-pluggable battery backup module (optional)
Fan: Redundant
1.2 RAID concepts
RAID is the abbreviation of "Redundant Array of Independent Disks". The basic idea of RAID is to combine multiple drives together to form one large logical drive. This RAID drive provides better performance, capacity and reliability than a single drive. The operating system detects the RAID drive as a single storage device.
1.2.1 Terminology
The document uses the following terms:
Part 1: Common
RAID Redundant Array of Independent Disks. There are different
RAID levels with different degree of data protection, data availability, and performance to host environment.
PD The Physical Disk is a member disk of one specific RAID group.
RG RAID Group. A collection of removable media. One RG consists of a set of VDs and owns one RAID level attribute.
VD Virtual Disk. Each RG can be divided into several VDs. The VDs from one RG have the same RAID level, but may have different volume capacities.
LUN Logical Unit Number. A logical unit number (LUN) is a unique identifier used to differentiate among separate devices (each one is a logical unit).
GUI Graphic User Interface.
RAID cell
When creating a RAID group with a compound RAID level, such as 10, 30, 50 and 60, this field indicates the number of subgroups in the RAID group. For example, 8 disks can be grouped into a RAID 10 group with either 2 cells or 4 cells. In the 2-cell case, PD {0, 1, 2, 3} forms one RAID 1 subgroup and PD {4, 5, 6, 7} forms another RAID 1 subgroup. In the 4-cell case, the 4 subgroups are PD {0, 1}, PD {2, 3}, PD {4, 5} and PD {6, 7}.
WT Write-Through cache-write policy. A caching technique in which the completion of a write request is not signaled until the data is safely stored on non-volatile media. Data is synchronized in both the data cache and the accessed physical disks.
WB Write-Back cache-write policy. A caching technique in which the completion of a write request is signaled as soon as the data is in the cache; the actual writing to non-volatile media occurs at a later time. It speeds up system write performance but carries the risk that data may be inconsistent between the data cache and the physical disks for a short time interval.
RO Set the volume to be Read-Only.
DS Dedicated Spare disks. The spare disks are only used by one
specific RG. Others could not use these dedicated spare disks for any rebuilding purpose.
GS Global Spare disks. A GS is shared for rebuilding purposes. If an RG needs to use a global spare disk for rebuilding, it can take the spare disk out of the common spare disk pool for that purpose.
DG DeGraded mode. Not all of the array’s member disks are
functioning, but the array is able to respond to application read and write requests to its virtual disks.
SCSI Small Computer Systems Interface.
SAS Serial Attached SCSI.
S.M.A.R.T. Self-Monitoring Analysis and Reporting Technology.
WWN World Wide Name.
HBA Host Bus Adapter.
SES SCSI Enclosure Services.
NIC Network Interface Card.
BBM Battery Backup Module
Part 2: iSCSI
iSCSI Internet Small Computer Systems Interface.
LACP Link Aggregation Control Protocol.
MPIO Multi-Path Input/Output.
MC/S Multiple Connections per Session
MTU Maximum Transmission Unit.
CHAP
Challenge Handshake Authentication Protocol. An optional
security mechanism to control access to an iSCSI storage system over the iSCSI data ports.
iSNS Internet Storage Name Service.
Part 3: Dual controller
SBB Storage Bridge Bay. The objective of the Storage Bridge Bay
Working Group (SBB) is to create a specification that defines mechanical, electrical and low-level enclosure management requirements for an enclosure controller slot that will support a variety of storage controllers from a variety of independent hardware vendors (“IHVs”) and system vendors.
QSATA QSAN multiplexer board is for SATA II disk connection to the
dual controller backplane.
QJSATA QSAN bridge board is for SATA II disk connection to the dual
JBOD backplane.
1.2.2 RAID levels
There are different RAID levels with different degrees of data protection, data availability, and performance to the host environment. The RAID levels are described below; a short capacity sketch follows the list.
RAID 0 Disk striping. RAID 0 needs at least one hard drive.
RAID 1 Disk mirroring over two disks. RAID 1 needs at least two hard
drives.
N-way mirror
Extension to RAID 1 level. It has N copies of the disk.
RAID 3 Striping with parity on the dedicated disk. RAID 3 needs at least
three hard drives.
RAID 5 Striping with interspersed parity over the member disks. RAID 5 needs at least three hard drives.
RAID 6 2-dimensional parity protection over the member disks. RAID 6
needs at least four hard drives.
RAID 0+1 Mirroring of the member RAID 0 volumes. RAID 0+1 needs at
least four hard drives.
RAID 10 Striping over the member RAID 1 volumes. RAID 10 needs at
least four hard drives.
RAID 30 Striping over the member RAID 3 volumes. RAID 30 needs at
least six hard drives.
RAID 50 Striping over the member RAID 5 volumes. RAID 50 needs at
least six hard drives.
RAID 60 Striping over the member RAID 6 volumes. RAID 60 needs at
least eight hard drives.
JBOD The abbreviation of "Just a Bunch Of Disks". JBOD needs at least one hard drive.
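As a rough illustration of how these levels trade capacity for protection, the sketch below estimates usable capacity per level. This is a simplified model for illustration only, not the subsystem's exact allocation logic, which also reserves space for metadata.

# Rough usable-capacity estimate per RAID level (simplified illustration only;
# the subsystem also reserves space for metadata, so real figures differ).
def usable_capacity_gb(level, disk_sizes_gb):
    n = len(disk_sizes_gb)
    smallest = min(disk_sizes_gb)      # with mixed sizes, the smallest member limits the stripe
    if level == "RAID0":
        return n * smallest
    if level == "RAID1":
        return smallest                # all members hold one mirrored copy
    if level in ("RAID3", "RAID5"):
        return (n - 1) * smallest      # one disk's worth of parity
    if level == "RAID6":
        return (n - 2) * smallest      # two disks' worth of parity
    if level == "RAID10":
        return (n // 2) * smallest     # striping over RAID 1 pairs
    if level == "JBOD":
        return sum(disk_sizes_gb)      # simple concatenation
    raise ValueError("level not covered in this sketch")

# Example: six 200 GB member disks
for lvl in ("RAID0", "RAID5", "RAID6", "RAID10"):
    print(lvl, usable_capacity_gb(lvl, [200] * 6), "GB")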
1.2.3 Volume relationship
The graphic below shows the volume structure designed by QSAN and describes the relationship of the RAID components. One RG (RAID group) consists of a set of VDs (Virtual Disks) and owns one RAID level attribute. Each RG can be divided into several VDs. The VDs in one RG share the same RAID level, but may have different volume capacities. All VDs share the CV (Cache Volume) to execute data transactions. A LUN (Logical Unit Number) is a unique identifier through which users can access a VD with SCSI commands.
Figure 1.2.3.1
1.3 iSCSI concepts
iSCSI (Internet SCSI) is a protocol which encapsulates SCSI (Small Computer System Interface) commands and data in TCP/IP packets for linking storage devices with servers over common IP infrastructures. iSCSI provides high performance SANs over standard IP networks like LAN, WAN or the Internet.
IP SANs are true SANs (Storage Area Networks) which allow several servers to attach to an infinite number of storage volumes by using iSCSI over TCP/IP networks. IP SANs can scale the storage capacity with any type and brand of storage system. In addition, it can be used by any type of network (Ethernet, Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet) and combination of operating systems (Microsoft Windows, Linux, Solaris, Mac, etc.) within the SAN network. IP-SANs also include mechanisms for security, data replication, multi-path and high availability.
Storage protocol, such as iSCSI, has “two ends” in the connection. These ends are initiator and target. In iSCSI, we call them iSCSI initiator and iSCSI target. The iSCSI initiator requests or initiates any iSCSI communication. It requests all SCSI operations like read or write. An initiator is usually located on the host side (either an iSCSI HBA or iSCSI SW initiator).
The target is the storage device itself or an appliance which controls and serves volumes or virtual volumes. The target is the device which performs SCSI commands or bridges to an attached storage device.
Figure 1.3.1
The host side needs an iSCSI initiator. The initiator is a driver which handles the SCSI traffic over iSCSI. The initiator can be software or hardware (HBA). Please refer to the certification list of iSCSI HBAs in Appendix A. OS native initiators or other software initiators use the standard TCP/IP stack and Ethernet hardware, while iSCSI HBAs use their own iSCSI and TCP/IP stacks on board.
Hardware iSCSI HBAs provide their own initiator tools. Please refer to the vendors' HBA user manuals. Microsoft, Linux, Solaris and Mac provide iSCSI initiator drivers. Please contact QSAN for the latest certification list. Below are the available links:
1. Link to download the Microsoft iSCSI software initiator:
http://www.microsoft.com/downloads/details.aspx?FamilyID=12cb3c1a-15d6-4585­b385-befd1319f825&DisplayLang=en
2. In current Linux distributions, OS built-in iSCSI initiators are usually available. There are different iSCSI drivers for different kernels. Please check Appendix A for the iSCSI initiator certification list. If the latest Linux iSCSI initiator is needed, please visit the Open-iSCSI project for the most up-to-date information. The Linux-iSCSI (sfnet) and Open-iSCSI projects merged on April 11, 2005. (A short command sketch follows this list.)
Open-iSCSI website:
http://www.open-iscsi.org/
Open-iSCSI README:
http://www.open-iscsi.org/docs/README
Features: http://www.open-iscsi.org/cgi-bin/wiki.pl/Roadmap
Supported kernels: http://www.open-iscsi.org/cgi-bin/wiki.pl/Supported_Kernels
Google groups:
http://groups.google.com/group/open-iscsi/threads?gvc=2 http://groups.google.com/group/open-iscsi/topics
Open-iSCSI Wiki:
http://www.open-iscsi.org/cgi-bin/wiki.pl
3. ATTO iSCSI initiator is available for Mac.
Website:
http://www.attotech.com/xtend.html
4. Solaris iSCSI initiator
Version: Solaris 10 u6 (10/08)
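For the Linux case above, the following sketch shows a typical Open-iSCSI discovery and login sequence; it simply wraps the standard iscsiadm commands, and the portal IP address and IQN used here are placeholders, not values defined by this manual.

# Sketch: discover targets on an iSCSI data port and log in with Open-iSCSI (Linux).
# The portal IP and the IQN below are placeholder values.
import subprocess

portal = "192.168.11.100"   # one of the subsystem's iSCSI data port addresses
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
               check=True)

target_iqn = "iqn.2004-08.example:p300-target-0"   # placeholder target name
subprocess.run(["iscsiadm", "-m", "node", "-T", target_iqn, "-p", portal, "--login"],
               check=True)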
1.4 Subsystem specifications
1.4.1 Technical specifications
Controller features
1. Dual-active configuration support
2. Better performance compared to other products in the same segment
3. Cache mirroring through high-bandwidth channels
4. Flexible RAID group (RG) ownership management
   Each RG can be assigned to one of the two controllers
   Each LUN can be exported from both controllers
5. Management port seamless take-over
   The management port can be transferred smoothly to the other controller with the same IP address
6. Online firmware upgrade, no system down time
7. Backward compatible with P210V61 volume configurations
8. Multiple target iSCSI nodes per controller support
   Each LUN can be attached to one of 32 nodes from each controller
9. Front-end 4 x GbE iSCSI ports per controller with high availability / load balancing / failover support
   Microsoft MPIO, MC/S, Trunking, LACP, etc.
10. SBB compliant
System key components
1. CPU: Intel Xscale IOP 81342
2. Memory: 2GB DDRII 533 DIMM, maximum 4GB support per controller
3. Hardware iSCSI off-load engine
4. 2 x UARTs: serial console management and UPS
5. Fast Ethernet port for web-based management use
6. Backend: 16 x SAS or SATA II drive connections (P300H61)
Backend: 24 x SAS or SATA II drive connections (P300H71)
7. Front-end: 4 x GbE iSCSI ports per controller
8. LCM for quick management
9. Hot pluggable BBM support (optional)
10. SAS JBOD expansion port for expansion
11. QSATA board support for SATA drives (optional)
12. Two power supplies (P300H61)
Three power supplies (P300H71)
13. Redundant fans
RAID and volume operation
1. RAID level: 0, 1, 0+1, 3, 5, 6, 10, 30, 50, 60, JBOD, and N-way mirror
2. Up to 1024 logical volumes in the system
3. Up to 32 PDs can be included in one volume group
4. Global and dedicated hot spare disks
5. Write-through or write-back cache policy for different application usage
6. Multiple RAID volumes support
7. Configurable RAID stripe size
8. Online volume expansion
9. Instant RAID volume availability
10. Auto volume rebuilding
11. On-line volume migration with no system down-time
Advanced data protection
1. QSnap writeable snapshot
   Built-in snapshot with rollback enabled
   Snapshot can be enabled on up to 16 volumes; each logical volume supports up to 32 snapshot volumes, for a total of 512 snapshot volumes per system
2. Microsoft Windows Volume Shadow Copy Services (VSS)
3. Configurable N-way mirror for high data protection
4. Online disk roaming
5. Instant volume configuration restoration
6. Smart faulty sector relocation
7. Hot pluggable battery backup module support (optional)
Enclosure monitoring
1. S.E.S. inband management
2. UPS management via dedicated serial port
3. Fan speed monitors
4. Redundant power supply monitors
5. Voltage monitors
6. Thermal sensors for both RAID controller and enclosure
7. Status monitors for QSAN SAS JBODs
Management interface
1. Management UI via
   LCM
   Serial console
   SSH / telnet
   HTTP web UI
   Secured web (HTTPS)
2. Notification via
   Email
   SNMP trap
   Browser pop-up windows
   Syslog
   Windows Messenger
3. iSNS support
4. DHCP support
iSCSI features
1. iSCSI jumbo frame support
2. Header/Data digest support
3. CHAP authentication enabled
4. Load-balancing and failover through MPIO, MC/S, Trunking, and LACP
5. Up to 32 multiple nodes support
Host connection
1. 4 x iSCSI GbE ports per controller
2. Host access control: Read-Write and Read-Only
3. Up to 128 sessions per controller
4. One logical volume can be shared by as many as 16 hosts
OS support
Windows, Linux, Solaris, Mac
Drive support
1. SAS
2. SATA II (optional)
3. SCSI-3 compliant
4. Multiple IO transaction processing
5. Tagged command queuing
6. Disk auto spindown support
7. S.M.A.R.T. for SATA II drives
8. SAS JBODs expansion
Power and Environment
AC input: 100-240V ~ 7A-4A, 500W with PFC (auto switching)
DC output: 3.3V-21A; 5V-39A; 12V-30A
Operating temperature: 0°C to 40°C
Relative humidity: 5% to 95%, non-condensing
Dimensions
3U16 19 inch rackmount chassis (P300H61)
4U24 19 inch rackmount chassis (P300H71)
447mm x 490mm x 130mm (W x D x H) (P300H61)
447mm x 490mm x 171mm (W x D x H) (P300H71)
1.4.2 FCC and CE statements
FCC statement
This device has been shown to be in compliance with and was tested in accordance with the measurement procedures specified in the Standards and Specifications listed below and as indicated in the measurement report number: xxxxxxxx-F
Technical Standard: FCC Part 15 Class A (Verification) IC ICES-003
CE statement
This device has been shown to be in compliance with and was tested in accordance with the measurement procedures specified in the Standards and Specifications listed below and as indicated in the measurement report number: xxxxxxxx-E
Technical Standard: EMC DIRECTIVE 2004/108/EC (EN55022 / EN55024)
UL statement
Rack Mount Instructions - The following or similar rack-mount instructions are included with the installation instructions:
A. Elevated Operating Ambient - If installed in a closed or multi-unit rack assembly, the
operating ambient temperature of the rack environment may be greater than room ambient. Therefore, consideration should be given to installing the equipment in an environment compatible with the maximum ambient temperature (Tma) specified by the manufacturer.
B. Reduced Air Flow - Installation of the equipment in a rack should be such that the
amount of air flow required for safe operation of the equipment is not compromised.
C. Mechanical Loading - Mounting of the equipment in the rack should be such that a
hazardous condition is not achieved due to uneven mechanical loading.
D. Circuit Overloading - Consideration should be given to the connection of the
equipment to the supply circuit and the effect that overloading of the circuits might have on overcurrent protection and supply wiring. Appropriate consideration of equipment nameplate ratings should be used when addressing this concern.
E. Reliable Earthing - Reliable earthing of rack-mounted equipment should be
maintained. Particular attention should be given to supply connections other than direct connections to the branch circuit (e.g. use of power strips).
Caution
The main purpose of the handles is for rack mount use only. Do not use the handles to carry or transport the system.
The ITE is not intended to be installed and used in a home, school or public area accessible to the general population, and the thumbscrews should be tightened with a tool after both initial installation and subsequent access to the panel.
Warning: Remove all power supply cords before servicing.
This equipment is intended for installation in a restricted access location. Access can only be gained by SERVICE PERSONS or by USERS who have been instructed about the reasons for the restrictions applied to the location and about any precautions that shall be taken. Access is through the use of a TOOL or lock and key, or other means of security, and is controlled by the authority responsible for the location.
Caution
Risk of explosion if battery is replaced by incorrect type. Dispose of used
batteries according to the instructions.
Chapter 2 Installation
2.1 Package contents
The package contains the following items:
1. P300H61 / P300H71 subsystem (x1)
2. HDD trays (x16) (P300H61)
HDD trays (x24) (P300H71)
3. Power cords (x2) (P300H61)
Power cords (x3) (P300H71)
4. RS-232 cables (x2), one is for console (black color, phone jack to DB9 female), the
other is for UPS (gray color, phone jack to DB9 male)
5. CD (x1)
6. Rail kit (x1 set)
7. Keys, screws for drives and rail kit (x1 packet)
2.2 Before installation
Before starting, prepare the following items.
1. A host with a Gigabit Ethernet NIC or iSCSI HBA.
2. CAT 5e, or CAT 6 network cables for management port and iSCSI data ports.
3. Prepare storage system configuration plan.
4. Prepare management port and iSCSI data ports network information. When using
static IP, please prepare static IP addresses, subnet mask, and default gateway.
5. Gigabit switches (recommended). Or Gigabit switches with LCAP / Trunking (optional).
6. CHAP security information, including CHAP username and secret (optional).
2.3 Enclosure
2.3.1 Front view
Figure 2.3.1.1 (P300H61) Figure 2.3.1.2 (P300H71)
Drive slot numbering (P300H61)
Slot 1   Slot 5   Slot 9    Slot 13
Slot 2   Slot 6   Slot 10   Slot 14
Slot 3   Slot 7   Slot 11   Slot 15
Slot 4   Slot 8   Slot 12   Slot 16
Drive slot numbering (P300H71)
Slot 1   Slot 7   Slot 13   Slot 19
Slot 2   Slot 8   Slot 14   Slot 20
Slot 3   Slot 9   Slot 15   Slot 21
Slot 4   Slot 10  Slot 16   Slot 22
Slot 5   Slot 11  Slot 17   Slot 23
Slot 6   Slot 12  Slot 18   Slot 24
The drives can be installed into any slot in the enclosure. Slot numbering will be reflected in web UI.
Tips: It is better to install at least one drive in slots 1 ~ 4. System event logs are saved on these drives. Otherwise, event logs will be lost after reboot.
2.3.2 Control panel
There are five buttons to control the QSAN LCM (LCD Control Module): ▲ (up), ▼ (down), Enter, ESC (Escape), and Mute.
Figure 2.3.2.1
Control panel description:
LCD display.
Power LED: Blue → power on. Off → power off.
Access LED: Orange → host is accessing. Off → no host access.
Status LED: Red → system failure. Off → system is good.
Mute button.
Up button.
Down button.
Enter button.
ESC button.
2.3.3 Install drives
Remove a drive tray. Then install a HDD.
To install SAS drives: align the edge of the SAS drive to the back end of tray; the backplane can directly connect to SAS drives.
To install SATA II drives with QSATA boards: align the QSATA board edge to the back end of tray; the backplane can connect SATA II drives through QSATA boards.
Figure 2.3.3.1 SAS drives Figure 2.3.3.2 SATA drives
Figure 2.3.3.3
HDD tray description:
HDD fault LED: Red → HDD failure. Off → HDD is good.
HDD activity LED: Blue → HDD is active. Violet blinking → HDD is being accessed. Off → no HDD.
Latch for tray removal.
HDD tray handhold.
2.3.4 Rear view
Figure 2.3.4.1 (P300H61)
Figure 2.3.4.2 (P300H71)
PSU and Fan module description:
Power supply unit (PSU3).
Fan module (FAN2).
Power supply unit (PSU2).
Power supply unit (PSU1).
Fan module (FAN1).
Controller 1.
Controller 2.
Figure 2.3.4.3
Connector, LED and button description:
Gigabit ports (x4).
LEDs (from left to right):
Controller health LED: Green → controller status is normal or booting. Red → any other status.
Master/Slave LED: Green → master controller. Off → slave controller.
Dirty cache LED: Orange → data in the cache is waiting to be flushed to disks. Off → no data in the cache.
BBM LED: Green → BBM installed and powered. Off → no BBM.
BBM status button: When the system power is off, press the BBM status button. If the BBM LED is green, the BBM still has enough power to keep the data in the cache. If not, the BBM power has run out and it can no longer keep the data in the cache.
Management port.
Console port.
RS-232 port for UPS.
SAS JBOD expansion port.
2.4 Install battery backup module (optional)
To install the subsystem with a battery backup module, please follow the procedure.
Figure 2.4.1
1. The BBM (Battery Backup Module) is hot-pluggable; it can be installed regardless of whether the subsystem is turned on or off.
2. Remove the cover of the BBM slot.
3. Insert the BBM.
4. Fasten the BBM and use the screws to lock both sides.
5. Done.
2.5 Deployment
Please refer to the following topology and have all the connections ready.
Figure 2.5.1
1. Set up the hardware connections before powering on the servers. Connect the console cable, management port cable, and iSCSI data port cables in advance.
2. In addition, installing an iSNS server is recommended for a dual controller system.
3. Power on the P300H61 / P300H71 and J300H61 / J300H71 (optional) first, and then power on the hosts and the iSNS server.
4. It is suggested that the host server log on to the target twice (once to controller 1 and once to controller 2); MPIO should then be set up automatically.
Tips: An iSNS server is recommended for a dual controller system.
For better data service availability, it is recommended to make all the connections among the host servers, GbE switches, and the dual controllers redundant, as shown below.
Figure 2.5.2
The following topology shows the connections for the console and UPS (optional).
Figure 2.5.3
1. Use the RS-232 cable for console (black color, phone jack to DB9 female) to connect the controller to the management PC directly.
2. Use the RS-232 cable for UPS (gray color, phone jack to DB9 male) to connect the controller to the APC Smart UPS serial cable (DB9 female side), and then connect the serial cable to the APC Smart UPS.
Caution: It may not work when connecting the RS-232 cable for UPS (gray color, phone jack to DB9 male) to the APC Smart UPS directly.
Chapter 3 Quick setup
3.1 Management interfaces
There are three management methods for the QSAN subsystem, described in the following sections:
3.1.1 Serial console
Use the console cable (NULL modem cable) to connect the console port of the QSAN subsystem to the RS-232 port of the management PC. Please refer to figure 2.3.1. The console settings are as follows:
Baud rate: 115200, 8 data bits, no parity, 1 stop bit, and no flow control.
Terminal type: vt100
Login name: admin
Default password: 1234
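Besides a terminal emulator, the console can also be opened from a script. The following is a minimal sketch assuming the third-party pyserial package and that the console cable sits on /dev/ttyS0 of the management PC; adjust the port name for your system.

# Minimal sketch: open the console port with the settings listed above.
# Assumes the pyserial package and that the console cable is on /dev/ttyS0.
import serial

console = serial.Serial(
    port="/dev/ttyS0",
    baudrate=115200,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False, rtscts=False,   # no flow control
    timeout=1,
)
console.write(b"\r\n")             # wake up the login prompt
print(console.read(256).decode(errors="replace"))
console.close()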
3.1.2 Remote control
SSH (secure shell) software is required for remote login. SSH client software is available at the following web sites:
SSH Tectia Client: http://www.ssh.com/
PuTTY: http://www.chiark.greenend.org.uk/
Host name: 192.168.10.50 (Please check the DHCP address first on the LCM.)
Login name: admin
Default password: 1234
Tips: The QSAN product supports SSH for remote control only. To use SSH, the IP address and password are required for login.
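Remote control can also be scripted rather than using an interactive SSH client. The following is a minimal sketch assuming the third-party paramiko library and the default management IP address and credentials listed above.

# Minimal sketch: log in to the subsystem over SSH.
# Assumes the paramiko package and the default management IP / credentials above.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.10.50", username="admin", password="1234")
shell = client.invoke_shell()      # interactive CLI session, as with a normal SSH client
shell.send(b"\n")
client.close()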
3.1.3 LCM
After booting up the system, the following screen shows management port IP and model name:
Figure 3.1.3.1 (LCM display: 192.168.10.50 Qsan P300H61)
Figure 3.1.3.2 (LCM display: 192.168.10.50 Qsan P300H71)
Press the “Enter” button, and the LCM functions “System Info.”, “Alarm Mute”, “Reset/Shutdown”, “Quick Install”, “Volume Wizard”, “View IP Setting”, “Change IP Config” and “Reset to Default” will rotate by pressing ▲ (up) and ▼ (down).
When a WARNING or ERROR event occurs (LCM default filter), the LCM shows the event log to give users more detail from the front panel.
The following table describes the function of each item.
LCM operation description:
System Info.: Display system information.
Alarm Mute: Mute the alarm when an error occurs.
Reset/Shutdown: Reset or shutdown the controller.
Quick Install: Quick steps to create a volume. Please refer to the next chapter for detailed operation steps in the web UI.
Volume Wizard: Smart steps to create a volume. Please refer to the next chapter for detailed operation steps in the web UI.
View IP Setting: Display the current IP address, subnet mask, and gateway.
Change IP Config: Set the IP address, subnet mask, and gateway. There are 2 options: DHCP (get IP address from a DHCP server) or static IP.
Reset to Default: Reset the password to the default (1234), and set the IP address back to the default DHCP setting.
Default IP address: 192.168.10.50 (DHCP)
Default subnet mask: 255.255.255.0
Default gateway: 192.168.10.254
LCM menu hierarchy:
Qsan Technology ↔ [Firmware Version x.x.x]
  [System Info.] → [RAM Size xxx MB]
  [Alarm Mute] → [Yes / No]
  [Reset/Shutdown] → [Reset] → [Yes / No]
                     [Shutdown] → [Yes / No]
  [Quick Install] → RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 / RAID 0+1 → xxx GB → [Apply The Config] → [Yes / No]
  [Volume Wizard] → [Local] → RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 / RAID 0+1 → [Use default algorithm] → [Volume Size] xxx GB → [Apply The Config] → [Yes / No]
                    [JBOD x] → RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 / RAID 0+1 → [new x disk] → xxx GB → Adjust Volume Size → [Apply The Config] → [Yes / No]
  [View IP Setting] → [IP Config] [Static IP / DHCP] → [IP Address] [192.168.010.050] → [IP Subnet Mask] [255.255.255.0] → [IP Gateway] [192.168.010.254]
  [Change IP Config] → [DHCP] → [Yes / No]
                       [Static IP] → [IP Address] Adjust IP address → [IP Subnet Mask] Adjust subnet mask → [IP Gateway] Adjust gateway IP → [Apply IP Setting] → [Yes / No]
  [Reset to Default] → [Yes / No]
Caution: Before powering off, it is better to execute “Shutdown” to flush the data from the cache to the physical disks.
3.1.4 Web UI
The QSAN subsystem supports a graphical user interface (GUI) for operation. Be sure to connect the LAN cable. The default IP setting is DHCP; open the browser and enter:
http://192.168.10.50 (Please check the DHCP address first on LCM.)
And then it will pop up a dialog for authentication.
Figure 3.1.4.1
User name: admin
Default password: 1234
After login, choose the functions listed on the left side of the window to make any configuration.
Figure 3.1.4.2
There are seven indicators and three icons at the top-right corner.
Figure 3.1.4.3
Indicator description:
RAID light: Green → RAID works well. Red → RAID fails.
Temperature light: Green → temperature is normal. Red → temperature is abnormal.
Voltage light: Green → voltage is normal. Red → voltage is abnormal.
UPS light: Green → UPS works well. Red → UPS fails.
Fan light: Green → fan works well. Red → fan fails.
Power light: Green → power works well. Red → power fails.
Dual controller light: Green → both controller 1 and controller 2 are present and working well. Orange → the system is degraded and only one controller is alive and well.
Return to home page.
Logout of the management web UI.
Mute the alarm beeper.
Tips: If the status indicators in Internet Explorer (IE) are displayed in gray instead of blinking red, please enable “Internet Options” → “Advanced” → “Play animations in webpages” in IE. This option is enabled by default, but some applications disable it.
3.2 How to use the system quickly
The following sections provide a quick guide to using this subsystem.
3.2.1 Quick installation
Please make sure that there are some free drives installed in this system. SAS drives are recommended. Please check the hard drive details in “/ Volume configuration / Physical disk”.
Figure 3.2.1.1
Step1: Click the “Quick installation” menu item; follow the steps to set up system
name and date / time.
Figure 3.2.1.2
Step2: Confirm the management port IP address and DNS, and then click “Next”.
Figure 3.2.1.3
Step 3: Set up the data port IP and click “Next”.
Figure 3.2.1.4
Step 4: Set up the RAID level and volume size and click “Next”.
Figure 3.2.1.5
Step 5: Check all items, and click “Finish”.
Figure 3.2.1.6
Step 6: Done.
3.2.2 Volume creation wizard
“Volume create wizard” has a smarter policy. When the system has some HDDs inserted, “Volume create wizard” lists all the possible RAID levels and capacities; it will use all available HDDs for the RAID level the user chooses. When the system has HDDs of different sizes, e.g., 8 x 200GB and 8 x 80GB, it lists all the possibilities and combinations for the different RAID levels and sizes. After the user chooses a RAID level, some HDDs may still be available (free status). This is the result of the smarter policy designed by QSAN. It gives the user:
1. The biggest capacity for the chosen RAID level, and
2. The fewest number of disks for the RAID level / volume size.
E.g., the user chooses RAID 5 and the controller has 12 x 200GB + 4 x 80GB HDDs inserted. If all 16 HDDs are used for a RAID 5, the maximum volume size is 1200GB (80GB x 15). The wizard performs a smarter check and finds the most efficient way of using the HDDs: it only uses the 200GB HDDs (volume size is 200GB x 11 = 2200GB), so the volume is bigger and fully uses the HDD capacity. A short sketch of this arithmetic follows.
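The arithmetic behind that example can be written out as a short sketch. This is only an illustration of the selection idea, not the firmware's actual algorithm.

# Sketch of the wizard's choice for the RAID 5 example above (illustration only,
# not the firmware's actual algorithm): for each candidate set of disks, compute
# the RAID 5 capacity and keep the combination that yields the most space.
disks_gb = [200] * 12 + [80] * 4            # 12 x 200GB + 4 x 80GB

def raid5_capacity(members):
    return (len(members) - 1) * min(members)   # capacity is limited by the smallest member

candidates = {
    "all 16 disks":          disks_gb,
    "only the 200GB disks":  [d for d in disks_gb if d == 200],
}
for name, members in candidates.items():
    print(name, "->", raid5_capacity(members), "GB")
# "all 16 disks" gives 15 * 80 = 1200GB, while the twelve 200GB disks alone give
# 11 * 200 = 2200GB, so the wizard uses only the 200GB disks and leaves the 80GB
# drives free.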
Step 1: Select “Volume create wizard” and then choose the RAID level. After the RAID level is chosen, click “Next”.
Figure 3.2.2.1
Step 2: Please select the combination of the RG capacity, or “Use default algorithm” for maximum RG capacity. After RG size is chosen, click “Next”.
Figure 3.2.2.2
Step 3: Decide the VD size. The user can enter a number less than or equal to the default number. Then click “Next”.
Figure 3.2.2.3
Step 4: Confirmation page. Click “Finish” if all setups are correct. Then a VD will be
created.
Step 5: Done. The system is available now.
Figure 3.2.2.4
(Figure 3.2.2.4: A virtual disk of RAID 0 is created and is named by system itself.)
Chapter 4 Configuration
4.1 Web UI management interface hierarchy
The table below shows the hierarchy of the web GUI.
System configuration
  System setting → System name / Date and time / System indication
  Network setting → MAC address / Address / DNS / Port
  Login setting → Login configuration / Admin password / User password
  Mail setting → Mail
  Notification setting → SNMP / Messenger / System log server / Event log filter
iSCSI configuration
  NIC → Show information for: (Controller 1 / Controller 2) / Link aggregation or multi-homed / IP settings for iSCSI ports / Become default gateway / Enable jumbo frame / Ping host / Enable QReplica / QReplica IP setting / Disable QReplica
  Entity property → Entity name / iSNS IP
  Node → Show information for: (Controller 1 / Controller 2) / Authenticate / Change portal / Rename alias / User
  Session → Show information for: (Controller 1 / Controller 2) / List connection / Delete
  CHAP account → Create / Modify user information / Delete
Volume configuration
  Physical disk → Set Free disk / Set Global spare / Set Dedicated spare / Upgrade / Disk Scrub / Turn on/off the indication LED / More information
  RAID group → Create / Migrate / Move / Activate / Deactivate / Parity check / Delete / Set preferred owner / Set disk property / More information
  Virtual disk → Create / Extend / Parity check / Delete / Set property / Attach LUN / Detach LUN / List LUN / Set clone / Clear clone / Start clone / Stop clone / Schedule clone / Set snapshot space / Cleanup snapshot / Take snapshot / Auto snapshot / List snapshot / More information
  Snapshot → Set snapshot space / Auto snapshot / Take snapshot / Export / Rollback / Delete / Cleanup snapshot
  Logical unit → Attach / Detach / Session
  QReplica (optional) → Create / Rebuild / Configuration / Start / Stop / Refresh / Create multi-path / Delete multi-path / Schedule / Delete
Enclosure management
  Hardware monitor → Controller 1 / BPL / Controller 2 / Auto shutdown
  UPS → UPS Type / Shutdown battery level / Shutdown delay / Shutdown UPS
  SES → Enable / Disable
  S.M.A.R.T. → S.M.A.R.T. information (only for SATA hard drives)
Maintenance
  System information → System information
  Event log → Download / Mute / Clear
  Upgrade → Browse the firmware to upgrade
  Firmware synchronization → Synchronize the slave controller's firmware version with the master's
  Reset to factory default → Sure to reset to factory default?
  Import and export → Import/Export / Import file
  Reboot and shutdown → Reboot / Shutdown
Quick installation → Step 1 / Step 2 / Step 3 / Step 4 / Confirm
Volume creation wizard → Step 1 / Step 2 / Step 3 / Confirm
4.2 System configuration
“System configuration” is designed for setting up the “System setting”, “Network setting”, “Login setting”, “Mail setting”, and “Notification setting”.
Figure 4.2.1
4.2.1 System setting
“System setting” can set up the system name and date. The default “System name” is composed of the model name and the serial number of this system.
Figure 4.2.1.1
Check “Change date and time” to set up the current date, time, and time zone, or synchronize the time from an NTP (Network Time Protocol) server. Click “Confirm” in System indication to turn on the system indication LED. Click again to turn it off.
4.2.2 Network setting
“Network setting” is for changing the IP address for remote administration usage. There are 3 options: DHCP (get IP address from a DHCP server), BOOTP (get IP address from a BOOTP server), and static IP. The default setting is DHCP. The user can change the HTTP, HTTPS, and SSH port numbers when the default port numbers are not allowed on the host/server.
Figure 4.2.2.1
4.2.3 Login setting
“Login setting” can set the single admin mode, the auto logout time, and the admin / user passwords. The single admin mode prevents multiple users from accessing the same system at the same time.
1. Auto logout: The options are (1) Disabled; (2) 5 minutes; (3) 30 minutes; (4) 1 hour. The system will log out automatically when the user has been inactive for the set period of time.
2. Login lock: Disabled or Enabled. When the login lock is enabled, the system allows only one user to login or modify the system settings.
Figure 4.2.3.1
Check “Change admin password” or “Change user password” to change admin or user password. The maximum length of password is 12 characters.
4.2.4 Mail setting
“Mail setting” can enter up to 3 mail addresses for receiving event notifications. Some mail servers check the “Mail-from address” and need authentication for anti-spam. Please fill in the necessary fields and click “Send test mail” to test whether the email functions are available. The user can also select which levels of event logs should be sent via mail. The default setting only enables ERROR and WARNING event logs. Please also make sure the DNS server IP is set up correctly so that the event notification mails can be sent successfully.
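The settings on this page correspond to an ordinary authenticated SMTP submission. The following is a minimal sketch of what a test mail amounts to on the SMTP side, using Python's standard smtplib; the server name, account, and recipient are placeholders, not values from this manual.

# Sketch: what "Send test mail" amounts to on the SMTP side.
# Server name, sender account, and recipient below are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "QSAN subsystem test mail"
msg["From"] = "storage-alert@example.com"        # the "Mail-from address"
msg["To"] = "admin@example.com"
msg.set_content("Test message from the event notification setup.")

with smtplib.SMTP("mail.example.com", 25) as smtp:
    smtp.login("storage-alert@example.com", "password")   # only if the server requires authentication
    smtp.send_message(msg)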
Figure 4.2.4.1
4.2.5 Notification setting
“Notification setting” can set up SNMP traps for alerting via SNMP, pop-up messages via Windows messenger (not MSN), alerts via the syslog protocol, and the event log filter for web UI and LCM notifications.
Figure 4.2.5.1
“SNMP” allows up to 3 SNMP trap addresses. The default community setting is “public”. The user can choose the event log levels; the default setting enables ERROR and WARNING event logs in SNMP. There are many SNMP tools. The following web sites are for your reference:
SNMPc:
http://www.snmpc.com/
Net-SNMP: http://net-snmp.sourceforge.net/
If necessary, click “Download” to get MIB file and import to SNMP.
To use “Messenger”, the user must enable the “Messenger” service in Windows (Start → Control Panel → Administrative Tools → Services → Messenger), and then event logs can be received. It allows up to 3 messenger addresses. The user can choose the event log levels; the default setting enables the WARNING and ERROR event logs.
Using “System log server”, the user can choose the facility and the event log level. The default port of syslog is 514. The default setting enables the INFO, WARNING and ERROR event logs.
There are some syslog server tools. The following web sites are for your reference:
WinSyslog: http://www.winsyslog.com/
Kiwi Syslog Daemon: http://www.kiwisyslog.com/
Most UNIX systems have a built-in syslog daemon.
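For reference, the sketch below sends a test message to a syslog server on the default UDP port 514 using Python's standard logging module; the server address is a placeholder.

# Sketch: send a test message to a syslog server on the default UDP port 514.
# The server address below is a placeholder.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("192.168.10.200", 514))
logger = logging.getLogger("qsan-notification-test")
logger.addHandler(handler)
logger.warning("test WARNING event forwarded to the system log server")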
“Event log filter” setting can enable event log display on “Pop up events” and “LCM”.
4.3 iSCSI configuration
“iSCSI configuration” is designed for setting up the “Entity Property”, “NIC”, “Node”, “Session”, and “CHAP account”.
Figure 4.3.1
4.3.1 NIC
“NIC” can change the IP addresses of the iSCSI data ports. The P300H61 / P300H71 has four gigabit ports on each controller to transmit data. Each of them must be assigned an IP address and set up in multi-homed mode, unless the link aggregation / trunking mode has been set up. When multiple data ports are set up in link aggregation or trunking mode, all the data ports share a single address.
Figure 4.3.1.1
(Figure 4.3.1.1: There are 4 iSCSI data ports on each controller. 4 data ports are set with static IP.)
IP settings:
User can change IP address by checking the gray button of LAN port, click “IP settings for iSCSI ports”. There are 2 selections, DHCP (Get IP address from DHCP server) or
static IP.
Figure 4.3.1.2
Default gateway:
Default gateway can be changed by checking the gray button of LAN port, click “Become default gateway”. There can be only one default gateway.
MTU / Jumbo frame:
MTU (Maximum Transmission Unit) size can be enabled by checking the gray button of the LAN port and clicking “Enable jumbo frame”. The maximum jumbo frame size is 3900 bytes.
Caution: The MTU / jumbo frame setting must also be enabled on the switching hub and on the host HBA. Otherwise, the LAN connection cannot work properly.
Multi-homed / Trunking / LACP:
The following is a description of the multi-homed / trunking / LACP functions.
1. Multi-homed: Default mode. Each iSCSI data port is connected on its own, without link aggregation or trunking. This mode also supports the multipath functions. Selecting this mode also removes any Trunking / LACP setting at the same time.
2. Trunking: Defines the use of multiple iSCSI data ports in parallel to increase the link speed beyond the limits of any single port.
3. LACP: The Link Aggregation Control Protocol (LACP) is part of IEEE specification 802.3ad, which allows bundling several physical ports together to form a single logical channel. LACP allows a network switch to negotiate an automatic bundle by sending LACP packets to the peer. The advantages of LACP are (1) increased bandwidth, and (2) failover when the link status fails on a port.
Trunking / LACP setting can be changed by clicking the button “Aggregation”.
Figure 4.3.1.3
(Figure 4.3.1.3: There are 4 iSCSI data ports on each controller, select at least two NICs for link aggregation.)
Figure 4.3.1.4
For example, LAN1 and LAN2 are set as Trunking mode. LAN3 and LAN4 are set as LACP mode. To remove Trunking / LACP setting, check the gray button of LAN port, click “Delete link aggregation”. Then it will pop up a message to confirm.
Ping host:
User can ping the corresponding host data port from the target, click “Ping host”.
Figure 4.3.1.5
(Figure 4.3.1.5 shows a user can ping host from the target to make sure the data port connection is well.)
4.3.2 Entity property
“Entity property” can view the entity name of the system and set up the “iSNS IP” for iSNS (Internet Storage Name Service). The iSNS protocol allows automated discovery, management and configuration of iSCSI devices on a TCP/IP network. To use iSNS, an iSNS server needs to be installed in the SAN. Add the iSNS server IP address into the iSNS server list so that the iSCSI initiator service can send queries. The entity name can be changed.
Figure 4.3.2.1
4.3.3 Node
“Node” can view the target name for iSCSI initiator. P300H61 / P300H71 supports up
to 32 multi-nodes. There are 32 default nodes created for each controller.
Figure 4.3.3.1
CHAP:
CHAP is the abbreviation of Challenge Handshake Authentication Protocol. CHAP is a strong authentication method used in point-to-point connections for user login. It is a type of authentication in which the authentication server sends the client a challenge, and the username and password are transmitted in a protected (digest) form rather than as clear text.
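In standard CHAP (RFC 1994), the secret itself never crosses the wire: the target sends a random challenge, the initiator answers with an MD5 digest of the identifier, the shared secret, and the challenge, and the target verifies the digest with its own copy of the secret. A minimal sketch of that response calculation (illustrative only, not the subsystem's implementation):

# Sketch of the CHAP response calculation defined in RFC 1994 (illustrative only).
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = os.urandom(16)                                  # sent by the target
resp = chap_response(1, b"chap-secret-1234", challenge)     # computed by the initiator
# The target computes the same digest with its stored secret and compares:
assert resp == chap_response(1, b"chap-secret-1234", challenge)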
To use CHAP authentication, please follow the procedures.
1. Select one of 32 default nodes from one controller.
2. Check the gray button of “OP.” column, click “Authenticate”.
3. Select “CHAP”.
Figure 4.3.3.2
4. Click “OK”.
Figure 4.3.3.3
5. Go to “/ iSCSI configuration / CHAP account” page to create CHAP account.
Please refer to next section for more detail.
6. Check the gray button of “OP.” column, click “User”.
7. Select the CHAP user(s) to be used. More than one can be selected, but if none is chosen, CHAP cannot work.
Figure 4.3.3.4
8. Click “OK”.
9. In “Authenticate” of “OP” page, select “None” to disable CHAP.
Change portal:
Users can change the portals belonging to the device node of each controller.
1. Check the gray button of “OP.” column next to one device node.
2. Select “Change portal”.
3. Choose the portals for the controller.
4. Click “OK” to confirm.
Figure 4.3.3.5
Rename alias:
User can create an alias to one device node.
1. Check the gray button of “OP.” column next to one device node.
2. Select “Rename alias”.
3. Create an alias for that device node.
4. Click “OK” to confirm.
5. An alias appears at the end of that device node.
Figure 4.3.3.6
Figure 4.3.3.7
Tips: After setting CHAP, the initiator on the host side should be set with the same CHAP account. Otherwise, the user cannot log in.
4.3.4 Session
“Session” can display current iSCSI session and connection information, including the
following items:
1. TSIH (target session identifying handle)
2. Host (Initiator Name)
3. Controller (Target Name)
4. InitialR2T(Initial Ready to Transfer)
5. Immed. data(Immediate data)
6. MaxDataOutR2T(Maximum Data Outstanding Ready to Transfer)
7. MaxDataBurstLen(Maximum Data Burst Length)
8. DataSeginOrder(Data Sequence in Order)
9. DataPDUInOrder(Data PDU in Order)
10. Detail of Authentication status and Source IP: port number.
Figure 4.3.4.1
(Figure 4.3.4.1: iSCSI Session.)
Check the gray button of session number, click “List connection”. It can list all connection(s) of the session.
Figure 4.3.4.2
(Figure 4.3.4.2: iSCSI Connection.)
4.3.5 CHAP account
“CHAP account” can manage a CHAP account for authentication. P300H61 / P300H71
can create multiple CHAP accounts.
To setup CHAP account, please follow the procedures.
1. Click “Create”.
2. Enter a “User” name, a “Secret”, and “Confirm” the secret again. A “Node” can be selected here or later. If none is selected, it can be enabled later in “/ iSCSI configuration / Node / User”.
Figure 4.3.5.1
3. Click “OK”.
Figure 4.3.5.2
4. Click “Delete” to delete CHAP account.
4.4 Volume configuration
“Volume configuration” is for setting up volumes; it includes “Physical disk”, “RAID group”, “Virtual disk”, “Snapshot”, “Logical unit”, and “QReplica” (optional).
Figure 4.4.1
4.4.1 Physical disk
“Physical disk” displays the status of the hard drives in the system. The operational steps are as follows:
1. Check the gray button next to the slot number; the functions which can be executed will be shown.
2. Active functions can be selected; inactive functions are shown in gray and cannot be selected.
For example, set the PD in slot 4 as a dedicated spare disk.
Step 1: Check the gray button of PD 4 and select “Set Dedicated spare”; it will link to the next page.
Figure 4.4.1.1
Step 2: If there is any RG with a protected RAID level that can be assigned a dedicated spare disk, select one RG, and then click “Submit”.
Figure 4.4.1.2
Step 3: Done. View “Physical disk” page.
Figure 4.4.1.3
(Figure 4.4.1.3: Physical disks in slot 1,2,3 are created for a RG named “RG-R5”. Slot 4 is set as dedicated spare disk of the RG named “RG-R5”. The others are free disks.)
Step 4: The unit of size can be changed from (GB) to (MB). It will display the capacity of hard drive in MB.
Figure 4.4.1.4
PD column description:
Slot The position of a hard drive. The button next to the number of
slot shows the functions which can be executed.
Size (GB) (MB)
Capacity of hard drive. The unit can be displayed in GB or MB.
RG Name RAID group name.
Status The status of the hard drive:
“Online” the hard drive is online.
“Rebuilding” the hard drive is being rebuilt.
“Transition” the hard drive is being migrated or is replaced by another disk when rebuilding occurs.
“Scrubbing” the hard drive is being scrubbed.
Health The health of the hard drive:
“Good” the hard drive is good.
“Failed” the hard drive has failed.
“Error Alert” S.M.A.R.T. error alert.
“Read Errors” the hard drive has unrecoverable read errors.
Usage The usage of the hard drive:
“RAID disk” this hard drive has been set to a RAID group.
“Free disk” this hard drive is free for use.
“Dedicated spare” this hard drive has been set as a dedicated spare of a RG.
“Global spare” this hard drive has been set as a global spare of all RGs.
Vendor Hard drive vendor.
Serial Hard drive serial number.
Type Hard drive type:
“SATA” SATA disk.
“SATA2” SATA II disk.
“SAS” SAS disk.
Write cache Hard drive write cache is enabled or disabled. Default is
“Enabled”.
Standby HDD auto spindown to save power. Default is “Disabled”.
Readahead This feature makes data be loaded to disk’s buffer in advance for
further use. Default is “Enabled”.
Command queuing
Newer SATA and most SCSI disks can queue multiple commands and handle them one by one. Default is “Enabled”.
PD operation description:
Set Free disk
Make the selected hard drive free for use.
Set Global spare
Set the selected hard drive to global spare of all RGs.
Set Dedicated spare
Set a hard drive as a dedicated spare of the selected RG.
Upgrade Upgrade hard drive firmware.
Disk Scrub Scrub the hard drive.
Turn on/off the indication LED
Turn on the indication LED of the hard drive. Click again to turn off.
More information
Show hard drive detail information.
4.4.2 RAID group
“RAID group” displays the status of each RAID group and allows creating and modifying RAID groups.
The following is an example to create a RG.
Step 1: Click “Create”, enter a “Name”, choose a “RAID level”, click “Select PD” to select PDs, and assign the RG’s “Preferred owner”. Then click “OK”. The “Write Cache” option enables or disables the write cache of the hard drives. The “Standby” option enables or disables the auto spindown function of the hard drives; when this option is enabled and the hard drives have no I/O access after a certain period of time, they spin down automatically. The “Readahead” option enables or disables the read-ahead function. The “Command queuing” option enables or disables the hard drives’ command queue function.
Figure 4.4.2.1
Step 2: Confirm page. Click “OK” if all setups are correct.
Figure 4.4.2.2
(Figure 4.4.2.2: There is a RAID 0 with 4 physical disks, named “RG-R0”. The second RAID group is a RAID 5 with 3 physical disks, named “RG-R5”.)
Step 3: Done. View “RAID group” page.
RG column description:
The button includes the functions which can be executed.
Name RAID group name.
Total (GB) (MB)
Total capacity of this RAID group. The unit can be displayed in GB or MB.
Free (GB) (MB)
Free capacity of this RAID group. The unit can be displayed in GB or MB.
#PD The number of physical disks in a RAID group.
#VD The number of virtual disks in a RAID group.
Status The status of the RAID group:
“Online” the RAID group is online.
“Offline” the RAID group is offline.
“Rebuild” the RAID group is being rebuilt.
“Migrate” the RAID group is being migrated.
“Scrubbing” the RAID group is being scrubbed.
Health The health of the RAID group:
“Good” the RAID group is good.
“Failed” the RAID group has failed.
“Degraded” the RAID group is not healthy and not complete. The reason could be a lack of disk(s) or a failed disk.
RAID The RAID level of the RAID group.
Current owner
The owner of the RAID group. The default owner is controller 1.
Preferred owner
The preferred owner of the RAID group. The default owner is controller 1.
RG operation description:
Create Create a RAID group.
Migrate Change the RAID level of a RAID group. Please refer to next
chapter for details.
Move Move the member disks of RAID group to totally different
physical disks.
Activate Activate the RAID group after disk roaming; it can be executed
when RG status is offline. This is for online disk roaming purpose.
Deactivate Deactivate the RAID group before disk roaming; it can be
executed when RG status is online. This is for online disk roaming purpose.
Parity check Regenerate parity for the RAID group. It supports RAID 3 / 5 / 6
/ 30 / 50 / 60.
Delete Delete the RAID group.
Set preferred owner
Set the RG ownership to the other controller.
Set disk property
Change the disk property of write cache and standby options.
Write cache:
“Enabled” Enable disk write cache. (Default)
“Disabled” Disable disk write cache.
Standby:
“Disabled” Disable auto spindown. (Default)
“30 sec / 1 min / 5 min / 30 min” Enable hard drive auto spindown to save power when there is no access after the set period of time.
Read ahead:
“Enabled” Enable disk read ahead. (Default)
“Disabled” Disable disk read ahead.
Command queuing:
“Enabled” Enable disk command queue. (Default)
“Disabled” Disable disk command queue.
More information
Show RAID group detail information.
4.4.3 Virtual disk
“Virtual disk” displays the status of each virtual disk and allows creating and modifying virtual disks.
The following is an example to create a VD.
Step 1: Click “Create”, enter a “Name”, select a RAID group from “RG name”, enter the required “Capacity (GB)/(MB)”, change the “Stripe height (KB)”, “Block size (B)”, and “Read/Write” mode, set the virtual disk “Priority”, select the “Bg rate”
(background task priority), and change the “Readahead” option if necessary. The “Erase” option wipes out old data in the VD to prevent the OS from recognizing an old partition. There are three options in “Erase”: None (default), erase the first 1 GB, or erase the full disk. Last, select the “Type” mode for normal or clone usage. Then click “OK”.
Figure 4.4.3.1
Caution
If the system is shut down or rebooted while a VD is being created, the erase process will stop.
Step 2: Confirm page. Click “OK” if all setups are correct.
Figure 4.4.3.2
(Figure 4.4.3.2: Create a VD named “VD-01”, from “RG-R0”. The second VD is named “VD-02”, it’s initializing.)
Step 3: Done. View “Virtual disk” page.
VD column description:
The button includes the functions which can be executed.
Name Virtual disk name.
Size (GB) (MB)
Total capacity of the virtual disk. The unit can be displayed in GB or MB.
Write The right of the virtual disk:
“WT” Write Through.
“WB” Write Back.
“RO” Read Only.
Priority The priority of the virtual disk:
“HI” HIgh priority.
“MD” MiDdle priority.
“LO” LOw priority.
Bg rate Background task priority:
“4 / 3 / 2 / 1 / 0” Default value is 4. The higher the background priority of a VD, the more background I/O will be scheduled to execute.
Status The status of the virtual disk:
“Online” the virtual disk is online.
“Offline” the virtual disk is offline.
“Initiating” the virtual disk is being initialized.
“Rebuild” the virtual disk is being rebuilt.
“Migrate” the virtual disk is being migrated.
“Rollback” the virtual disk is being rolled back.
“Parity checking” the virtual disk is being parity checked.
Type The type of the virtual disk:
“RAID” the virtual disk is normal.
“BACKUP” the virtual disk is for clone usage.
Clone The target name of the virtual disk.
Schedule The clone schedule of the virtual disk.
Health The health of the virtual disk:
“Optimal” the virtual disk is working well and there is no failed disk in the RG.
“Degraded” at least one disk of the RG of the virtual disk has failed or been plugged out.
“Failed” the RG of the VD has more failed disks than its RAID level can recover from without data loss.
“Partially optimal” the virtual disk has experienced recoverable read errors. After passing a parity check, the health will become “Optimal”.
R %
Ratio (%) of initializing or rebuilding.
RAID RAID level.
#LUN Number of LUN(s) that virtual disk is attached to.
Snapshot (GB) (MB)
The virtual disk size that is used for snapshot. The number means “Used snapshot space” / “Total snapshot space”. The unit can be displayed in GB or MB.
#Snapshot Number of snapshot(s) that have been taken.
RG name The RG name of the virtual disk
VD operation description:
Create Create a virtual disk.
Extend Extend the virtual disk capacity.
Parity check Execute parity check for the virtual disk. It supports RAID 3 / 5 /
6 / 30 / 50 / 60.
Regenerate parity:
“Yes” Regenerate RAID parity and write.
“No” Execute parity check only and find mismatches. It will stop checking when the mismatch count reaches 1 / 10 / 20 / … / 100.
Delete Delete the virtual disk.
Set property Change the VD name, right, priority, bg rate, read ahead, AV-media mode, and type.
Right:
“WT” Write Through.
“WB” Write Back. (Default)
“RO” Read Only.
Priority:
“HI” HIgh priority. (Default)
“MD” MiDdle priority.
“LO” LOw priority.
Bg rate:
“4 / 3 / 2 / 1 / 0” Default value is 4. The higher the background priority of a VD, the more background I/O will be scheduled to execute.
Read ahead:
“Enabled” Enable disk read ahead. (Default)
“Disabled” Disable disk read ahead.
AV-media mode:
“Enabled” Enable AV-media mode for optimizing video editing.
“Disabled” Disable AV-media mode. (Default)
Type:
“RAID” the virtual disk is normal. (Default)
“Backup” the virtual disk is for clone usage.
Attach LUN Attach to a LUN.
Detach LUN Detach a LUN.
List LUN List attached LUN(s).
Set clone Set the target virtual disk for clone.
Clear clone Clear clone function.
Start clone Start clone function.
Stop clone Stop clone function.
Schedule clone
Set clone function by schedule.
Set snapshot space
Set snapshot space for taking snapshot. Please refer to next chapter for more detail.
Cleanup snapshot
Clean all snapshots of a VD and release the snapshot space.
Take snapshot
Take a snapshot on the virtual disk.
Auto snapshot
Set auto snapshot on the virtual disk.
List snapshot List all snapshots of the virtual disk.
More information
Show virtual disk detail information.
4.4.4 Snapshot
“Snapshot” displays the status of snapshots and allows creating and modifying snapshots. Please refer
to the next chapter for more detail about the snapshot concept. The following is an example of taking a snapshot.
Step 1: Create snapshot space. In “/ Volume configuration / Virtual disk”, check the gray button next to the VD number; click “Set snapshot space”.
Step 2: Set snapshot space. Then click “OK”. The snapshot space is created.
Figure 4.4.4.1
Figure 4.4.4.2
(Figure 4.4.4.2: Snapshot space for “VD-01” has been created; the snapshot space is 15 GB, of which 1 GB is used for saving the snapshot index.)
Step 3: Take a snapshot. In “/ Volume configuration / Snapshot”, click “Take snapshot”. It will link to next page. Enter a snapshot name.
Figure 4.4.4.3
Step 4: Expose the snapshot VD. Check the gray button next to the snapshot VD
number; click “Expose”. Enter a capacity for the snapshot VD. If the size is zero, the exposed snapshot VD will be read only. Otherwise, the exposed snapshot VD can be read / written, and the size will be the maximum capacity for writing.
Figure 4.4.4.4
Figure 4.4.4.5
(Figure 4.4.4.5: This is the snapshot list of “VD-01”. There are two snapshots. Snapshot VD “SnapVD-01” is exposed as read-only, “SnapVD-02” is exposed as read-write.)
Step 5: Attach a LUN to a snapshot VD. Please refer to the next section for attaching a
LUN.
Step 6: Done. Snapshot VD can be used.
Snapshot column description:
The button includes the functions which can be executed.
Name Snapshot VD name.
Used (GB) (MB)
The amount of snapshot space that has been used. The unit can be displayed in GB or MB.
Status The status of the snapshot:
“N/A” The snapshot is normal.
“Replicated” The snapshot is for clone or QReplica usage.
“Abort” The snapshot is out of space and has been aborted.
Health The health of the snapshot:
“Good” The snapshot is good.
“Failed” The snapshot has failed.
Exposure Whether the snapshot VD is exposed or not.
Right The right of the snapshot:
“Read-write” The snapshot VD can be read / written.
“Read-only” The snapshot VD is read only.
#LUN Number of LUN(s) that the snapshot VD is attached to.
Created time The time the snapshot VD was created.
Snapshot operation description:
Expose/ Unexpose
Expose / unexpose the snapshot VD.
Rollback Rollback the snapshot VD.
Delete Delete the snapshot VD.
Attach Attach a LUN.
Detach Detach a LUN.
List LUN List attached LUN(s).
4.4.5 Logical unit
“Logical unit” displays, creates, and modifies the attached logical unit number(s)
of each VD.
Users can attach a LUN by clicking “Attach”. “Host” must be entered with an iSCSI node name for access control, or filled in with the wildcard “*”, which means every host can access the volume. Choose a LUN number and permission, and then click “OK”.
Figure 4.4.5.1
Figure 4.4.5.2
(Figure 4.4.5.2: VD-01 is attached to LUN 0 and every host can access. VD-02 is attached to LUN 1 and only the initiator node which is named “iqn.1991-05.com.microsoft:qsan” can access.)
LUN operation description:
Attach Attach a logical unit number to a virtual disk.
Detach Detach a logical unit number from a virtual disk.
The matching rules of access control follow the LUNs’ creation time; the earlier-created LUN has priority in the matching rules. For example, there are 2 LUN rules for the same VD: one is “*”, LUN 0; the other is “iqn.host1”, LUN 1. The host “iqn.host2” can log in successfully because it matches rule 1.
The wildcards “*” and “?” are allowed in this field. “*” can replace any string of characters; “?” can replace only one character. For example:
“iqn.host?” matches “iqn.host1” and “iqn.host2”.
“iqn.host*” matches “iqn.host1” and “iqn.host12345”.
This field cannot accept commas, so “iqn.host1, iqn.host2” is treated as one long string, not 2 IQNs.
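The sketch below, which is only an illustration and not the subsystem’s actual code, shows how such first-match access control could be evaluated: rules are kept in creation order and the host IQN is tested with shell-style wildcards, where “*” matches any run of characters and “?” matches exactly one. The rule patterns and IQNs are hypothetical.

```python
# Illustrative sketch of first-match LUN access control with "*" and "?" wildcards.
# Rule patterns and IQNs are hypothetical examples.
from fnmatch import fnmatchcase

# Rules listed in creation order: (host pattern, LUN number, permission).
rules = [
    ("*", 0, "read-write"),          # created first: every host gets LUN 0
    ("iqn.host1", 1, "read-write"),  # created later: only iqn.host1 gets LUN 1
]

def match_rules(initiator_iqn: str):
    """Return every rule the initiator matches, earliest-created first."""
    return [(lun, perm) for pattern, lun, perm in rules
            if fnmatchcase(initiator_iqn, pattern)]

print(match_rules("iqn.host2"))   # [(0, 'read-write')] -- matches the "*" rule only
print(match_rules("iqn.host1"))   # [(0, 'read-write'), (1, 'read-write')]
```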
4.4.6 Example
The following is an example of creating volumes. This example creates two VDs and sets a global spare disk.
Example
This example creates two VDs in one RG; each VD shares the cache volume. The cache volume is created automatically after the system boots up. Then a global spare disk is set. Last, all of them are deleted.
Step 1: Create a RG (RAID group).
To create a RAID group, please follow the procedures:
Figure 4.4.6.1
1. Select “/ Volume configuration / RAID group”.
2. Click “Create“.
3. Input a RG Name, choose a RAID level from the list, click “Select PD“ to choose the
RAID physical disks, then click “OK“.
4. Check the setting. Click “OK“ if all setups are correct.
5. Done. A RG has been created.
Figure 4.4.6.2
(Figure 4.4.6.2: Creating a RAID 5 with 3 physical disks, named “RG-R5”.)
Step 2: Create VD (Virtual Disk).
To create a data user volume, please follow the procedures.
Figure 4.4.6.3
1. Select “/ Volume configuration / Virtual disk”.
2. Click “Create”.
3. Input a VD name, choose a RG Name and enter a size for this VD; decide the stripe
height, block size, read / write mode, bg rate, and set priority, finally click “OK”.
4. Done. A VD has been created.
5. Follow the above steps to create another VD.
Figure 4.4.6.4
(Figure 4.4.6.4: Creating VDs named “VD-R5-1” and “VD-R5-2” from RAID group “RG-R5”, the size of “VD-R5-1” is 50GB, and the size of “VD-R5-2” is 64GB. There is no LUN attached.)
Step 3: Attach a LUN to a VD.
There are 2 methods to attach a LUN to a VD.
1. In “/ Volume configuration / Virtual disk”, check the gray button next to the VD
number; click “Attach LUN”.
2. In “/ Volume configuration / Logical unit”, click “Attach”.
The procedures are as follows:
Figure 4.4.6.5
1. Select a VD.
2. Input “Host” IQN, which is an iSCSI node name for access control, or fill-in wildcard
“*”, which means every host can access to this volume. Choose LUN and permission, and then click “OK”.
3. Done.
Figure 4.4.6.6
(Figure 4.4.6.6: VD-R5-1 is attached to LUN 0. VD-R5-2 is attached to LUN 1.)
Tips
The matching rules of access control follow the LUNs’ creation time; the earlier-created LUN has priority in the matching rules.
Step 4: Set a global spare disk.
To set a global spare disk, please follow the procedures.
1. Select “/ Volume configuration / Physical disk”.
2. Check the gray button next to the PD slot; click “Set Global spare”.
3. “Global spare” status is shown in “Usage” column.
Figure 4.4.6.7
(Figure 4.4.6.7: Slot 4 is set as a global spare disk.)
Step 5: Done.
To delete the VDs and the RG, please follow the steps below.
Step 6: Detach a LUN from the VD.
In “/ Volume configuration / Logical unit”,
Figure 4.4.6.8
1. Check the gray button next to the LUN; click “Detach”. A confirmation page will pop up.
2. Choose “OK”.
3. Done.
Step 7: Delete a VD (Virtual Disk).
To delete the virtual disk, please follow the procedures:
1. Select “/ Volume configuration / Virtual disk”.
2. Check the gray button next to the VD number; click “Delete”. A confirmation page will pop up; click “OK”.
3. Done. The VD is deleted.
Tips
When a VD is deleted directly, the LUN(s) attached to this VD will be detached together.
Step 8: Delete a RG (RAID group).
To delete a RAID group, please follow the procedures:
1. Select “/ Volume configuration / RAID group”.
2. Select a RG whose VDs have all been deleted; otherwise, the RG cannot be deleted.
3. Check the gray button next to the RG number; click “Delete”.
4. A confirmation page will pop up; click “OK”.
5. Done. The RG has been deleted.
Tips
Deleting a RG will succeed only when all of the related VD(s) in this RG have been deleted. Otherwise, the user cannot delete this RG.
Step 9: Free a global spare disk.
To free a global spare disk, please follow the procedures.
1. Select “/ Volume configuration / Physical disk”.
2. Check the gray button next to the PD slot; click “Set Free disk”.
Step 10: Done, all volumes have been deleted.
4.5 Enclosure management
“Enclosure management” allows managing enclosure information including “Hardware monitor”, “UPS”, “SES”, and “S.M.A.R.T.”. For the enclosure
management, there are many sensors for different purposes, such as temperature sensors, voltage sensors, hard disk status, fan sensors, power sensors, and LED status. Due to the different hardware characteristics among these sensors, they have different polling intervals. Below are the details of the polling time intervals:
1. Temperature sensors: 1 minute.
2. Voltage sensors: 1 minute.
3. Hard disk sensors: 10 minutes.
4. Fan sensors: 10 seconds. When there are 3 consecutive errors, the system sends an ERROR event log.
5. Power sensors: 10 seconds. When there are 3 consecutive errors, the system sends an ERROR event log (a small sketch after this list illustrates the consecutive-error rule).
6. LED status: 10 seconds.
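The following Python sketch is only an interpretation of the behavior described above (not firmware code): a reading is polled every 10 seconds, and only the third consecutive failure raises an ERROR event. The read_fan_sensor() helper is a hypothetical stand-in for the real hardware read.

```python
# Sketch of the "3 consecutive errors -> ERROR event" rule for 10-second sensors.
# read_fan_sensor() is a hypothetical placeholder for the real hardware read.
import random
import time

def read_fan_sensor() -> bool:
    """Return True if the fan reading is OK (stubbed with random data here)."""
    return random.random() > 0.1

consecutive_errors = 0
for _ in range(6):                    # a few 10-second polling cycles
    if read_fan_sensor():
        consecutive_errors = 0        # any good reading resets the counter
    else:
        consecutive_errors += 1
        if consecutive_errors == 3:   # third error in a row -> log ERROR once
            print("ERROR event: fan sensor failed 3 consecutive polls")
    time.sleep(0.01)                  # stands in for the 10-second interval
```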
Figure 4.5.1
4.5.1 Hardware monitor
“Hardware monitor” displays the current voltages and temperatures.
Figure 4.5.1.1
If “Auto shutdown” is checked, the system will shut down automatically when a voltage or temperature is out of the normal range. For better data protection, please check “Auto Shutdown”.
For better protection, and to avoid a single short period of high temperature triggering auto shutdown, the system uses multiple condition judgments to trigger auto shutdown. Below are the details of when auto shutdown will be triggered:
1. There are several sensors placed on the system for temperature checking. The system checks each sensor every 30 seconds. When one of these sensors stays over its high temperature threshold for 3 continuous minutes, auto shutdown is triggered immediately (see the sketch after this list).
2. The core processor temperature limit is 80°C. The iSCSI NIC temperature limit is 65°C. The SAS expander and SAS controller temperature limit is 65°C.
3. If the high temperature situation does not last for 3 minutes, the system will not trigger auto shutdown.
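A minimal sketch of that judgment, assuming a 30-second polling period and a 3-minute (six consecutive samples) persistence requirement; the read_temperature() helper and its values are hypothetical stand-ins, and the thresholds are taken from the list above.

```python
# Sketch of the auto-shutdown judgment: a sensor must stay over its threshold
# for 3 continuous minutes (6 samples at 30-second intervals) before shutdown.
THRESHOLDS_C = {"cpu": 80, "iscsi_nic": 65, "sas_controller": 65}  # from the list above
SAMPLES_REQUIRED = 6          # 3 minutes / 30-second polling period

over_threshold_count = {name: 0 for name in THRESHOLDS_C}

def read_temperature(sensor: str) -> float:
    """Hypothetical placeholder for the real sensor read."""
    return 55.0

def poll_once() -> bool:
    """Return True if auto shutdown should be triggered on this polling cycle."""
    for sensor, limit in THRESHOLDS_C.items():
        if read_temperature(sensor) > limit:
            over_threshold_count[sensor] += 1
            if over_threshold_count[sensor] >= SAMPLES_REQUIRED:
                return True            # continuously hot for 3 minutes
        else:
            over_threshold_count[sensor] = 0   # a short spike resets the timer
    return False
```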
4.5.2 UPS
“UPS” can set up UPS (Uninterruptible Power Supply).
Figure 4.5.2.1
(Figure 4.5.2.1: Without UPS.)
Currently, the system supports the UPS features of the APC (American Power Conversion Corp.) Smart-UPS series only. Please review the details at the website:
http://www.apc.com/. Connections with UPS units from other vendors may work, but they do not have these communication features with the system.
First, connect the cable(s) between the system and the APC Smart-UPS via RS-232. Then set up the following values to define what actions the system will take when power fails.
Figure 4.5.2.2
(Figure 4.5.2.2: With Smart-UPS.)
UPS column description:
UPS Type
Select UPS Type. Choose Smart-UPS for APC, None for other vendors or no UPS.
Shutdown battery level (%)
When the battery level is below the set level, the system will shut down. Setting the level to “0” will disable UPS.
Shutdown delay (s)
If a power failure occurs and the system power cannot recover within the set time, the system will shut down. Setting the delay to “0” will disable the function.
Shutdown UPS
Select ON: when power is lost, the UPS will shut itself down after the system has shut down successfully. After power comes back, the UPS will start working and notify the system to boot up. OFF will not.
Status The status of UPS:
“Detecting…” “Running” “Unable to detect UPS”
“Communication lost” “UPS reboot in progress” “UPS shutdown in progress” “Batteries failed. Please change them NOW!”
Battery level (%)
Current power percentage of battery level.
The system will shut down when either the “Shutdown battery level (%)” or the “Shutdown delay (s)” condition is reached. Users should set these values carefully.
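As a rough sketch of how those two settings interact (an illustration only, with hypothetical example values, not the controller’s actual logic):

```python
# Sketch: shut down when EITHER the battery-level or the delay condition is met.
# A value of 0 disables the corresponding check, as described above.
def should_shutdown(battery_level: int, seconds_on_battery: int,
                    shutdown_battery_level: int = 10,   # example setting (%)
                    shutdown_delay: int = 300) -> bool: # example setting (s)
    battery_low = shutdown_battery_level > 0 and battery_level < shutdown_battery_level
    delay_expired = shutdown_delay > 0 and seconds_on_battery >= shutdown_delay
    return battery_low or delay_expired

print(should_shutdown(battery_level=8,  seconds_on_battery=60))    # True (battery low)
print(should_shutdown(battery_level=50, seconds_on_battery=600))   # True (delay expired)
print(should_shutdown(battery_level=50, seconds_on_battery=60))    # False
```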
4.5.3 SES
SES represents SCSI Enclosure Services, one of the enclosure management standards. “SES configuration” can enable or disable the management of SES.
Figure 4.5.3.1
(Figure 4.5.3.1: SES is enabled on LUN 0 and can be accessed from every host.)
The SES client software is available at the following web site:
SANtools:
http://www.santools.com/
4.5.4 Hard drive S.M.A.R.T.
S.M.A.R.T. (Self-Monitoring Analysis and Reporting Technology) is a diagnostic tool for
hard drives that delivers warnings of drive failures in advance. S.M.A.R.T. gives users a chance to take action before a possible drive failure.
S.M.A.R.T. measures many attributes of the hard drive all the time and inspects the properties of hard drives which are close to being out of tolerance. The advance notice of a possible hard drive failure allows users to back up the hard drive or replace it.
This is much better than a hard drive crashing while it is writing data or rebuilding a failed hard drive.
“S.M.A.R.T.” can display S.M.A.R.T. information of hard drives. The number is the current value; the number in parenthesis is the threshold value. The threshold values from different hard drive vendors are different; please refer to hard drive vendors’ specification for details.
S.M.A.R.T. only supports SATA drives. SAS drives do not have this function now. It will show N/A in the web page for SAS drives.
Figure 4.5.4.1 (SAS drives)
Figure 4.5.4.2 (SATA drives)
4.6 System maintenance
“Maintenance” allows operating system functions, including “System information” to show the system version and details, “Event log” to view system event logs which record critical events, “Upgrade” to upgrade to the latest firmware, “Firmware synchronization” to synchronize the firmware versions on both controllers, “Reset to factory default” to reset all controller configuration values to factory settings, “Import and export” to import and export all controller configuration to/from a file, and “Reboot and shutdown” to reboot or shut down the system.
Figure 4.6.1
4.6.1 System information
“System information” can display system information, including CPU type, installed
system memory, firmware version, serial numbers of dual controllers, backplane ID, and system status.
Figure 4.6.1.1
Status description:
Normal Dual controllers are in normal stage.
Degraded One controller fails or has been plugged out.
Lockdown The firmware of the two controllers is different, or the memory size of
the two controllers is different.
Single Single controller mode.
4.6.2 Event log
“Event log” displays the event messages. Check the checkboxes of INFO, WARNING, and
ERROR to choose the levels of event logs to display. Click the “Download” button to save the whole event log as a text file with the file name “log-ModelName-SerialNumber-Date-Time.txt”. Click the “Clear” button to clear all event logs. Click the “Mute” button to stop the alarm if the system alerts.
Figure 4.6.2.1
The event log is displayed in reverse order, which means the latest event log is on the first / top page. The event logs are actually saved in the first four hard drives; each hard drive has one copy of the event log. For one system, there are four copies of the event log to make sure users can check it at any time, even when there are failed disks.
Tips
Please plug in at least one of the first four hard drives; the event logs can then be saved and displayed at the next system boot-up. Otherwise, the event logs cannot be saved and will disappear.
4.6.3 Upgrade
“Upgrade” can upgrade controller firmware and JBOD firmware, change the operation mode,
and activate a QReplica license. Before upgrading, it is better to use the “Export” function to back up all configurations to a file.
Figure 4.6.3.1
Please prepare the new controller firmware file named “xxxx.bin” on the local hard drive, then click “Browse” to select the file. Click “Confirm”; a warning message will pop up. Click
“OK” to start upgrading the firmware.
Figure 4.6.3.2
While upgrading, a progress bar is displayed. After the upgrade finishes, the system must be rebooted manually to make the new firmware take effect.
To upgrade JBOD firmware, the steps are the same as for controller firmware, but choose the JBOD number first.
The controller mode can be changed to dual or single here. If the subsystem has only one controller, switch this mode to “Single”. This mode indicates that a single controller is upgradable. Enter the MAC address displayed in “/ System configuration / Network setting”, such as 001378xxxxxx (case-insensitive), and then click “Confirm”.
Last, the QReplica function can be activated if there is a license. Select the license file, and then click “Confirm”.
4.6.4 Firmware synchronization
“Firmware synchronization” can synchronize the firmware versions when controller 1’s
and controller 2’s firmware are different. It will upgrade the firmware of the slave controller to the master’s, no matter whether the slave controller’s firmware version is newer or older than the master’s. In normal status, the firmware versions of controllers 1 and 2 are the same, as in the figure below.
Figure 4.6.4.1
4.6.5 Reset to factory default
“Reset to factory default” allows users to reset the subsystem to the factory default settings.
Figure 4.6.5.1
After resetting to the default values, the password is 1234 and the IP address reverts to default DHCP.
Default IP address: 192.168.10.50 (DHCP)
Default subnet mask: 255.255.255.0
Default gateway: 192.168.10.254
4.6.6 Import and export
“Import and export” allows users to save the system configuration values (export) and apply
a saved configuration (import). For the volume configuration settings, the values are included in export but not in import, which avoids conflicts / data deletion between two subsystems; for example, one system may already have valuable volumes on its disks and the user might forget this and overwrite them. Using import returns the system to its original configuration. If the volume settings were also imported, the user’s current volumes would be overwritten with a different configuration.
Figure 4.6.6.1
1. Import: Import all system configurations excluding volume configuration.
2. Export: Export all configurations to a file.
Caution
“Import” will import all system configurations excluding the volume configuration; the current configurations will be replaced.
4.6.7 Reboot and shutdown
“Reboot and shutdown” can “Reboot” and “Shutdown” the system. Before powering
off, it is better to execute “Shutdown” to flush the data from the cache to the physical disks. This step is necessary for data protection.
Figure 4.6.7.1
4.7 Home/Logout/Mute
In the upper-right corner of the web UI, there are 3 individual icons: “Home”, “Logout”, and “Mute”.
Figure 4.7.1
4.7.1 Home
Click “Home” to return to home page.
4.7.2 Logout
For security reasons, please use “Logout” to exit the web UI. To log in to the system again, please enter the username and password again.
4.7.3 Mute
Click “Mute” to stop the alarm when error occurs.
Chapter 5 Advanced operations
5.1 Volume rebuild
If one physical disk of a RG set to a protected RAID level (e.g. RAID 3, RAID 5, or RAID 6) fails or has been unplugged / removed, the status of the RG changes to degraded mode, and the system searches for a spare disk to rebuild the degraded RG into a complete one. It uses a dedicated spare disk as the rebuild disk first, then a global spare disk.
QSAN subsystems support Auto-Rebuild. The following is the scenario:
Take RAID 6 for example:
1. When there is no global spare disk or dedicated spare disk in the system, the RG will
be in degraded mode and wait until (1) one disk is assigned as a spare disk, or (2) the failed disk is removed and replaced with a new clean disk; then Auto-Rebuild starts. The new disk will automatically become a spare disk of the original RG. If the newly added disk is not clean (it carries other RG information), it will be marked as RS (reserved) and the system will not start auto-rebuild. If this disk does not belong to any existing RG, it will be an FR (Free) disk and the system will start Auto-Rebuild. If the user only removes the failed disk and plugs the same failed disk into the same slot again, auto-rebuild will start running, but rebuilding onto the same failed disk may impact customer data if the status of that disk is unstable. QSAN suggests that customers do not rebuild onto the failed disk, for better data protection. (A spare-selection sketch follows this list.)
2. When there are enough global spare disk(s) or dedicated spare disk(s) for the degraded
array, the system starts Auto-Rebuild immediately. In RAID 6, if another disk failure occurs during rebuilding, the system will start the above Auto-Rebuild process for it as well. The Auto-Rebuild feature only works while the status of the RG is “Online”; it will not work at “Offline”. Thus, it does not conflict with the “Online roaming” feature.
3. In degraded mode, the status of the RG is “Degraded”. When rebuilding, the status of
the RG / VD will be “Rebuild”, and the “R%” column of the VD displays the progress in percent. After rebuilding completes, the status will become “Online” and the RG will be complete again.
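The following Python sketch summarizes the spare-selection order described above (dedicated spare first, then global spare, then a clean free disk; a disk carrying foreign RG metadata is treated as reserved and skipped). It is an interpretation of the text, not QSAN firmware code, and the Disk attributes are hypothetical.

```python
# Sketch of the Auto-Rebuild spare-selection order for a degraded RG.
# Disk objects and their attributes are hypothetical illustrations.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Disk:
    slot: int
    usage: str              # "dedicated_spare", "global_spare", or "free"
    dedicated_to: str = ""  # RG name, for dedicated spares
    clean: bool = True      # False if the disk carries another RG's metadata

def pick_rebuild_disk(rg_name: str, disks: List[Disk]) -> Optional[Disk]:
    # 1. A dedicated spare assigned to this RG has the highest priority.
    for d in disks:
        if d.usage == "dedicated_spare" and d.dedicated_to == rg_name:
            return d
    # 2. Otherwise use any global spare.
    for d in disks:
        if d.usage == "global_spare":
            return d
    # 3. Otherwise a newly inserted clean free (FR) disk; a disk with foreign
    #    RG information is treated as reserved (RS) and skipped.
    for d in disks:
        if d.usage == "free" and d.clean:
            return d
    return None   # stay in degraded mode and wait
```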
Tips
“Set dedicated spare” is not available if there is no RG, or if the only RGs are RAID 0 or JBOD, because a dedicated spare disk cannot be set for RAID 0 or JBOD.
Sometimes rebuild is called recover; they have the same meaning. The following table shows the relationship between RAID levels and rebuild.
Rebuild operation description:
RAID 0 Disk striping. No protection for data. The RG fails if any hard drive fails or is unplugged.
RAID 1 Disk mirroring over 2 disks. RAID 1 allows one hard drive to fail or be unplugged. One new hard drive needs to be inserted into the system for the rebuild to complete.
N-way mirror Extension of the RAID 1 level. It has N copies of the disk. N-way mirror allows N-1 hard drives to fail or be unplugged.
RAID 3 Striping with parity on a dedicated disk. RAID 3 allows one hard drive to fail or be unplugged.
RAID 5 Striping with interspersed parity over the member disks. RAID 5 allows one hard drive to fail or be unplugged.
RAID 6 Two-dimensional parity protection over the member disks. RAID 6 allows two hard drives to fail or be unplugged. If two hard drives need to be rebuilt at the same time, the first one is rebuilt, then the other, in sequence.
RAID 0+1 Mirroring of RAID 0 volumes. RAID 0+1 allows two hard drives to fail or be unplugged, but only in the same array.
RAID 10 Striping over the members of RAID 1 volumes. RAID 10 allows two hard drives to fail or be unplugged, but only in different arrays.
RAID 30 Striping over the members of RAID 3 volumes. RAID 30 allows two hard drives to fail or be unplugged, but only in different arrays.
RAID 50 Striping over the members of RAID 5 volumes. RAID 50 allows two hard drives to fail or be unplugged, but only in different arrays.
RAID 60 Striping over the members of RAID 6 volumes. RAID 60 allows four hard drives to fail or be unplugged, every two in different arrays.
JBOD The abbreviation of “Just a Bunch Of Disks”. No data protection. The RG fails if any hard drive fails or is unplugged.
5.2 RG migration and moving
To perform migration, the total size of the new RG must be larger than or equal to the original RG. Expanding to the same RAID level with the same hard disks as the original RG is not allowed. There is a similar function, “Move”, which moves the member disks of the RG to totally different physical disks. Examples follow.
Figure 5.2.1
The operations below are not allowed when a RG is being migrated or moved; the system will reject them:
1. Add dedicated spare.
2. Remove a dedicated spare.
3. Create a new VD.
4. Delete a VD.
5. Extend a VD.
6. Scrub a VD.
7. Perform another migration operation.
8. Scrub entire RG.
9. Take a snapshot.
10. Delete a snapshot.
11. Expose a snapshot.
12. Rollback to a snapshot.
Caution
RG migration or moving cannot be executed during rebuilding or VD extension.
Tips
The “Migrate” function migrates the member disks of a RG to the same physical disks, but the number of disks must increase or the RAID level must change. The “Move” function moves the member disks of a RG to totally different physical disks.
To migrate the RAID level, please follow below procedures.
1. Select “/ Volume configuration / RAID group”.
2. Check the gray button next to the RG number; click “Migrate”.
3. Change the RAID level by clicking the down arrow and selecting “RAID 5”. A pop-up will indicate that there are not enough HDDs to support the new RAID level; click “Select PD” to add hard drives, then click “OK” to go back to the setup page. When migrating to a lower RAID level, for example when the original RAID level is RAID 6 and the user wants to migrate to RAID 0, the system evaluates whether this operation is safe or not and shows the warning message “Sure to migrate to a lower protection array?”.
Figure 5.2.2
4. Double check the setting of RAID level and RAID PD slot. If there is no problem, click
“OK“.
5. Finally, a confirmation page shows the detailed RAID information. If there is no
problem, click “OK” to start migration. The system also pops up the message “Warning: power lost during migration may cause damage of data!” to warn the user. If the power goes off abnormally during migration, the data is at high risk.
6. Migration starts, and the “Status” of the RG shows “Migrating”. In
“/ Volume configuration / Virtual disk”, the “Status” displays “Migrating” and the “R%” column shows the completed percentage of the migration.
Figure 5.2.3
(Figure 5.2.3: A RAID 0 with 3 physical disks migrates to RAID 5 with 4 physical disks.)
Figure 5.2.4
5.3 VD extension
To extend VD size, please follow the procedures.
1. Select “/ Volume configuration / Virtual disk”.
2. Check the gray button next to the VD number; click “Extend”.
3. Change the size. The size must be larger than the original, and then click “OK” to
start extension.
Figure 5.3.1
4. Extension starts. If the VD needs initialization, the “Status” displays “Initiating”
and the “R%” column shows the completed percentage of initialization.
Figure 5.3.2
Tips
The size of VD extension must be larger than original.
Caution VD Extension cannot be executed during rebuilding or migration.
5.4 QSnap
Snapshot-on-the-box (QSnap) captures the instant state of data in the target volume
in a logical sense. The underlying logic is Copy-on-Write: since the time of data capture, whenever a write occurs, the original data at that location is moved out before it is overwritten. The destination, named the “Snap VD”, is essentially a new VD which can be attached to a LUN and provisioned to a host as a disk, like other ordinary VDs in the system. Rollback restores the data back to the state of any previously captured point in time, in case of any unfortunate event (e.g. virus attack, data corruption, human error, and so on). The Snap VD is allocated within the same RG in which the snapshot is taken; we suggest reserving 20% of the RG size or more for snapshot space. Please refer to the following figure for the snapshot concept.
Figure 5.4.1
5.4.1 Create snapshot volume
To take a snapshot of the data, please follow the procedures.
1. Select “/ Volume configuration / Virtual disk”.
2. Check the gray button next to the VD number; click “Set snapshot space”.
3. Set up the size for the snapshot space. The minimum size suggested is 20% of the VD size;
then click “OK”. It will go back to the VD page and the size will show in the “Snapshot” column. It may not be the same as the number entered because some space is reserved for snapshot internal usage. There will be 2 numbers in the “Snapshot” column; they mean “Used snapshot space” and “Total snapshot space”.
4. There are two methods to take snapshot. In “/ Volume configuration / Virtual
disk”, check the gray button next to the VD number; click “Take snapshot”. Or in “/ Volume configuration / Snapshot”, click “Take snapshot”.
5. Enter a snapshot name, and then click “OK”. A snapshot VD is created.
6. Select “/ Volume configuration / Snapshot” to display all snapshot VDs taken
from the VD.
Figure 5.4.1.1
7. Check the gray button next to the Snapshot VD number; click “Expose”. Enter a
capacity for snapshot VD. If size is zero, the exposed snapshot VD is read only. Otherwise, the exposed snapshot VD can be read / written, and the size is the maximum capacity for writing.
8. Attach a LUN to the snapshot VD. Please refer to the previous chapter for attaching a
LUN.
9. Done. It can be used as a disk.
Figure 5.4.1.2
(Figure 5.4.1.2: This is the snapshot list of “VD-01”. There are two snapshots. Snapshot VD “SnapVD-01” is exposed as read-only, “SnapVD-02” is exposed as read-write.)
1. There are two methods to clean all snapshots. In “/ Volume configuration /
Virtual disk”, check the gray button next to the VD number; click “Cleanup snapshot”. Or in “/ Volume configuration / Snapshot”, click “Cleanup”.
2. “Cleanup snapshot” will delete all snapshots of the VD and release snapshot space.
5.4.2 Auto snapshot
The snapshot copies can be taken manually or by schedule such as hourly or daily. Please follow the procedures.
1. There are two methods to set auto snapshot. In “/ Volume configuration /
Virtual disk”, check the gray button next to the VD number; click “Auto snapshot”.
Or in “/ Volume configuration / Snapshot”, click “Auto snapshot”.
2. The auto snapshot can be set monthly, weekly, daily, or hourly.
3. Done. It will take snapshots automatically.
Figure 5.4.2.1
(Figure 5.4.2.1: It will take snapshots every month, and keep the last 32 snapshot copies.)
Tips
Daily snapshots are taken at 00:00 every day. Weekly snapshots are taken every Sunday at 00:00. Monthly snapshots are taken on the first day of every month at 00:00.
5.4.3 Rollback
The data in a snapshot VD can be rolled back to the original VD. Please follow the procedures.
1. Select “/ Volume configuration / Snapshot”.
2. Check the gray button next to the Snap VD number whose data the user wants to roll back; click “Rollback”.
3. Done. The data in the snapshot VD is rolled back to the original VD.
Caution
Before executing rollback, it is better to unmount the file system first so the OS flushes the data from its cache to the disks. The system sends a pop-up message when the user executes the rollback function.
5.4.4 QSnap constraint
The QSAN snapshot function applies the Copy-on-Write technique to a UDV/VD and provides a
quick and efficient backup methodology. When taking a snapshot, it does not copy any data at first; copying happens only when a data modification request comes in. The snapshot copies the original data to the snapshot space and then overwrites the original data with the new changes. With this technique, the snapshot only copies the changed data instead of the whole data set, which saves a lot of disk space. A minimal sketch of this behavior follows.
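The Python sketch below models a volume as a list of blocks and a snapshot as a sparse map that only stores blocks which have been overwritten since the snapshot was taken. It is an illustration of the copy-on-write concept, not the subsystem’s implementation.

```python
# Copy-on-Write snapshot sketch: only blocks changed after the snapshot
# are copied into snapshot space.
class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)       # live data
        self.snapshot = None             # block index -> preserved original content

    def take_snapshot(self):
        self.snapshot = {}               # nothing is copied at snapshot time

    def write(self, index, data):
        # Preserve the original block the first time it is overwritten.
        if self.snapshot is not None and index not in self.snapshot:
            self.snapshot[index] = self.blocks[index]
        self.blocks[index] = data

    def read_snapshot(self, index):
        # Snapshot view: preserved copy if the block changed, else live data.
        return self.snapshot.get(index, self.blocks[index])

vol = Volume(["A", "B", "C"])
vol.take_snapshot()
vol.write(1, "B'")                       # only block 1 is copied out
print(vol.blocks)                                  # ['A', "B'", 'C']  current data
print([vol.read_snapshot(i) for i in range(3)])    # ['A', 'B', 'C']   snapshot view
```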
Create a data-consistent snapshot
Before using snapshots, users should understand why data sometimes becomes corrupted after a snapshot rollback. Please refer to the following diagram.
When a user modifies data on the host, the data passes through the file system and the memory of the host (write caching). The host then flushes the data from memory to the physical disks, whether the disk is a local disk (IDE or SATA), DAS (SCSI or SAS), or SAN (Fibre or iSCSI). From the viewpoint of the storage device, it cannot control the behavior of the host side. The following case may therefore happen: if the user takes a snapshot while some data is still in memory and not yet flushed to disk, the snapshot may contain an incomplete image of the original data. This problem does not belong to the storage device. To avoid this data inconsistency between the snapshot and the original data, the user has to make the operating system flush the data from the host memory (write caching) to disk before taking a snapshot.
Figure 5.4.4.1
On Linux and UNIX platforms, the command sync can be used to make the operating system flush data from the write cache to disk. For the Windows platform, Microsoft also provides a tool named sync, which does exactly the same thing as the sync command in Linux/UNIX. It tells the OS to flush the data on demand. For more detail about the sync tool, please refer to:
http://technet.microsoft.com/en-us/sysinternals/bb897438.aspx
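As a hedged example, a host-side script on Linux could flush the write cache immediately before requesting a snapshot; Python’s os.sync() wraps the same system call the sync command uses. The take_snapshot() helper is a hypothetical placeholder for whatever mechanism (web UI, CLI, or automation API) is used to trigger the snapshot on the subsystem.

```python
# Flush host write caches before taking a snapshot so the captured image is consistent.
import os

def take_snapshot(vd_name: str) -> None:
    # Hypothetical placeholder: trigger the snapshot on the storage side here
    # (e.g. via the web UI, a vendor CLI, or an automation API).
    print(f"snapshot requested for {vd_name}")

os.sync()                 # ask the OS to flush dirty pages to disk (like the sync command)
take_snapshot("VD-01")
```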
Besides the sync tool, Microsoft developed VSS (Volume Shadow Copy Service) to prevent this issue. VSS is a mechanism for creating consistent point-in-time copies of data, known as shadow copies. It is a coordinator between backup software, applications (SQL, Exchange, etc.), and storage to make sure snapshots do not suffer from the data-inconsistency problem. For more detail about VSS, please refer to
http://technet.microsoft.com/en-us/library/cc785914.aspx. QSAN P300H61 / P300H71
supports Microsoft VSS.
What if the snapshot space runs out?
Before using snapshots, snapshot space is needed from the RG capacity. After the snapshot has been working for a period of time, what if the snapshot size exceeds the user-defined snapshot space? There are two different situations:
1. If two or more snapshots exist, the system will try to remove the oldest snapshots (to release more space for the latest snapshot) until enough space is released.
2. If only one snapshot exists, the snapshot will fail because the snapshot space has run out.
For example, two or more snapshots exist on a VD and the latest snapshot keeps growing. When the snapshot space runs out, the system tries to remove the oldest snapshot to release more space for the latest snapshot. As the latest snapshot grows, the system keeps removing the old snapshots. When the latest snapshot is the only one left in the system, there is no more snapshot space that can be released for incoming changes, and the snapshot will then fail. A minimal sketch of this policy follows.
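The sketch below captures that reclaim policy as described (remove the oldest snapshots until enough space is free, and fail if only one snapshot remains). It is an interpretation of the text above, with hypothetical snapshot names and sizes.

```python
# Sketch of the snapshot-space reclaim policy described above.
def reserve_space(snapshots, free_space, needed):
    """snapshots: list of (name, size) ordered oldest first. Returns True on success."""
    while free_space < needed:
        if len(snapshots) <= 1:
            return False                 # only the latest snapshot left: it fails
        name, size = snapshots.pop(0)    # drop the oldest snapshot
        free_space += size
        print(f"removed oldest snapshot {name}, freed {size} GB")
    return True

snaps = [("snap-1", 2), ("snap-2", 3), ("snap-3", 1)]   # oldest ... latest
print(reserve_space(snaps, free_space=1, needed=5))               # True after removing snap-1, snap-2
print(reserve_space([("snap-only", 2)], free_space=0, needed=1))  # False -> snapshot fails
```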
How many snapshots can be created on a VD?
Up to 32 snapshots can be created on a UDV/VD. What if a 33rd snapshot is taken? There are two different situations:
1. If the snapshot is configured as an auto snapshot, the latest one (the 33rd snapshot) replaces the oldest one (the first snapshot), and so on.
2. If the snapshot is taken manually, taking the 33rd snapshot will fail and a warning message will be shown in the web UI.
Rollback / Delete snapshot
When a snapshot has been rolled back, the other snapshots which are earlier than it will also be removed, but the remaining snapshots will be kept after rollback. If a snapshot has been deleted, the other snapshots which are earlier than it will also be deleted. The space occupied by these snapshots is released after deletion.
5.5 Disk roaming
Physical disks can be re-sequenced in the same system, or all physical disks of the same RAID group can be moved from system-1 to system-2. This is called disk roaming. The system can execute disk roaming online. Please follow the procedures.
1. Select “/ Volume configuration / RAID group”.
2. Check the gray button next to the RG number; click “Deactivate”.
3. Move all PDs of the RG to another system.
4. Check the gray button next to the RG number; click “Activate”.
5. Done.
Disk roaming has some constraints, as described in the following: