Chapter 2 Getting Started
2.1 Packaging, Shipment and Delivery
2.2 Unpacking the Subsystem
2.3 Identifying Parts of the iSCSI RAID Subsystem
2.3.1 Front View
2.4 Connecting the iSCSI RAID Subsystem to Your Network
2.5 Powering On
2.6 Installing Hard Drives
2.8.1 Web GUI
2.8.2 Console Serial Port
2.8.3 Remote Control – Secure Shell
2.9.1 LCD Control Module (LCM)
2.9.2 System Buzzer
Chapter 3 Web GUI Guideline
3.2.2 Status Indicators
3.4 System Configuration
3.4.1 System Setting
3.4.2 IP Address
3.4.4 Mail Setting
3.5.5 CHAP Account
3.7.1 SES Configuration
3.8 System Maintenance
3.8.1 System Information
3.8.3 Reset to Default
4.5 Disk Roaming
4.6 Support Microsoft MPIO and MC/S
A. Certification List
B. Event Notifications
C. Known Issues
D. Microsoft iSCSI Initiator
E. Installation Steps for Large Volume (Over 2TB)
F. MPIO and MC/S Setup Instructions
Chapter 1 Introduction
The iSCSI RAID Subsystem
The iSCSI RAID subsystem is a 4-bay disk array based on hardware RAID
configuration. It is an easy-to-use storage system which can be configured to any
RAID level. It provides reliable data protection for servers, and the RAID 6 function is
available. The RAID 6 function allows failure of two disk drives without any impact on
the existing data. Data can be recovered from the remaining data and parity drives.
The iSCSI RAID subsystem is a cost-effective disk array subsystem with completely
integrated high-performance and data-protection capabilities that meet or exceed
the highest industry standards, making it an excellent data solution for small and
medium business users.
1.1 Key Features
• Front-end: 2 x 1-Gigabit ports with full iSCSI offload
• Supports iSCSI jumbo frames
• Supports RAID levels 0, 1, 0+1, 3, 5, 6, 10 and JBOD
• Global hot spare disks
• Write-through or write-back cache policy for different application usage
• Supports greater than 2TB per volume set (64-bit LBA support)
• RAID level migration
• Online volume expansion
• Configurable RAID stripe size
• Instant RAID volume availability and background initialization
• Supports S.M.A.R.T., NCQ and staggered spin-up capable drives
• Volume rebuilding priority adjustment
• Auto volume rebuilding
• Array roaming
1.2 Technical Specifications
Form factor: 1U 19-inch rackmount chassis
RAID processor: Intel XScale IOP331
RAID levels: 0, 1, 0+1, 3, 5, 6, 10 and JBOD
Cache memory: 512MB ~ 1GB DDR333 DIMM supported
No. of channels (host and drives): 2 and 4
Host bus interface: 1Gb/s Ethernet
Drive bus interface: 3Gb/s SATA II
Hot-swap drive trays: Four (4) 1-inch trays
Host access control: Read-Write & Read-Only
Supports CHAP authentication
Audible alarm
Instant RAID volume availability and background initialization support
Supports over 2TB per volume
Online consistency check
Bad block auto-remapping
S.M.A.R.T. support
New disk insertion / removal detection
Auto volume rebuild
Array roaming
Jumbo frame support
Password protection
Global hot spare disks
UPS connection
Maximum logical volumes: up to 255
Maximum host connections: up to 32
Maximum host clustering: up to 8 for one logical volume
Online volume migration
Online volume expansion
Configurable stripe size
Cooling fans: 1
Power supplies: 220W power supply with PFC
Power requirements: AC 90V ~ 264V full range, 6A ~ 3A, 50Hz ~ 60Hz
Environmental relative humidity: 10% ~ 85% non-condensing
Operating temperature: 10°C ~ 40°C (50°F ~ 104°F)
Physical dimensions: 44(H) x 446.4(W) x 506(D) mm
1.3 Terminology
The document uses the following terms:
RAID – The abbreviation of "Redundant Array of Independent Disks". There are different RAID levels with different degrees of data protection, data availability, and performance for the host environment.

PD – Physical Disk. A physical disk that belongs to the member disks of one specific RAID group.

RG – RAID Group. A collection of removable media. One RG consists of a set of VDs and owns one RAID level attribute.

VD – Virtual Disk. Each RG can be divided into several VDs. The VDs from one RG have the same RAID level, but may have different volume capacity.

CV – Cache Volume. The controller uses onboard memory as cache. All RAM (except for the part occupied by the controller) can be used as cache.

LUN – Logical Unit Number. A logical unit number (LUN) is a unique identifier which enables hosts to differentiate among separate devices (each one is a logical unit).

GUI – Graphic User Interface.

RAID width, RAID copy, RAID row (RAID cell in one row) – RAID width, copy and row are used to describe one RG. E.g., one RAID 10 volume over three 4-disk RAID 1 volumes: RAID width=1; RAID copy=4; RAID row=3.

WT – Write-Through cache-write policy. A caching technique in which the completion of a write request is not signaled until the data is safely stored in non-volatile media. The data is synchronized in both the data cache and the accessed physical disks.

WB – Write-Back cache-write policy. A caching technique in which the completion of a write request is signaled as soon as the data is in cache; actual writing to non-volatile media occurs at a later time. It speeds up system write performance but bears the risk that data may be inconsistent between the data cache and the physical disks for a short time interval.

RO – Read-Only. Set the volume to be Read-Only.

DS – Dedicated Spare disks. The spare disks are only used by one specific RG. Other RGs cannot use these dedicated spare disks for any rebuilding purpose.

GS – Global Spare disks. GS is shared for rebuilding purposes. If some RGs need to use the global spare disks for rebuilding, they can take the spare disks from the common spare disk pool.

DC – Dedicated Cache.

GC – Global Cache.

DG – DeGraded mode. Not all of the array's member disks are functioning, but the array is able to respond to application read and write requests to its virtual disks.

SCSI – Small Computer Systems Interface.

iSCSI – Internet Small Computer Systems Interface.

FC – Fibre Channel.

S.M.A.R.T. – Self-Monitoring, Analysis and Reporting Technology.

WWN – World Wide Name.

HBA – Host Bus Adapter.

SAF-TE – SCSI Accessed Fault-Tolerant Enclosures.

SES – SCSI Enclosure Services.

NIC – Network Interface Card.

MPIO – Multi-Path Input/Output.

MC/S – Multiple Connections per Session.

MTU – Maximum Transmission Unit.

CHAP – Challenge Handshake Authentication Protocol. An optional security mechanism to control access to an iSCSI storage system over the iSCSI data ports.

iSNS – Internet Storage Name Service.
1.4 RAID Concepts
RAID Fundamentals
The basic idea of RAID (Redundant Array of Independent Disks) is to combine multiple
inexpensive disk drives into an array of disk drives to obtain performance, capacity
and reliability that exceeds that of a single large drive. The array of drives appears to
the host computer as a single logical drive.
Five types of array architectures, RAID 1 through RAID 5, were originally defined;
each provides disk fault-tolerance with different compromises in features and
performance. In addition to these five redundant array architectures, it has become
popular to refer to a non-redundant array of disk drives as a RAID 0 array.
Disk Striping
Fundamental to RAID technology is striping. This is a method of combining multiple
drives into one logical storage unit. Striping partitions the storage space of each drive
into stripes, which can be as small as one sector (512 bytes) or as large as several
megabytes. These stripes are then interleaved in a rotating sequence, so that the
combined space is composed alternately of stripes from each drive. The specific type
of operating environment determines whether large or small stripes should be used.
Most operating systems today support concurrent disk I/O operations across multiple
drives. However, in order to maximize throughput for the disk subsystem, the I/O load
must be balanced across all the drives so that each drive can be kept busy as much as
possible. In a multiple drive system without striping, the disk I/O load is never
perfectly balanced. Some drives will contain data files that are frequently accessed
and some drives will rarely be accessed.
By striping the drives in the array with stripes large enough so that each record falls
entirely within one stripe, most records can be evenly distributed across all drives.
This keeps all drives in the array busy during heavy load situations. This situation
allows all drives to work concurrently on different I/O operations, and thus maximize
the number of simultaneous I/O operations that can be performed by the array.
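To make this mapping concrete, here is a minimal sketch of how a striped array can translate a logical block address into a drive number and an offset on that drive. The four-drive count and 64 KB stripe size are illustrative values only, not settings taken from this subsystem.

    # Illustrative RAID 0 address mapping: logical block -> (drive, block offset).
    STRIPE_SIZE = 64 * 1024            # example stripe size: 64 KB
    BLOCK_SIZE = 512                   # one sector
    BLOCKS_PER_STRIPE = STRIPE_SIZE // BLOCK_SIZE
    NUM_DRIVES = 4                     # example four-drive array

    def map_block(lba: int) -> tuple[int, int]:
        """Map a logical block address to (drive index, block offset on that drive)."""
        stripe_number = lba // BLOCKS_PER_STRIPE   # stripe counted across all drives
        offset_in_stripe = lba % BLOCKS_PER_STRIPE
        drive = stripe_number % NUM_DRIVES         # stripes rotate across the drives
        row = stripe_number // NUM_DRIVES          # stripe row on each individual drive
        return drive, row * BLOCKS_PER_STRIPE + offset_in_stripe

    # Consecutive logical blocks move to the next drive once a stripe fills:
    for lba in (0, 127, 128, 256, 384, 512):
        print(lba, "->", map_block(lba))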
Definition of RAID Levels
RAID 0 is typically defined as a group of striped disk drives without parity or data
redundancy. RAID 0 arrays can be configured with large stripes for multi-user
environments or small stripes for single-user systems that access long sequential
records. RAID 0 arrays deliver the best data storage efficiency and performance of any
array type. The disadvantage is that if one drive in a RAID 0 array fails, the entire
array fails.
RAID 1, also known as disk mirroring, is simply a pair of disk drives that store
duplicate data but appear to the computer as a single drive. Although striping is not
used within a single mirrored drive pair, multiple RAID 1 arrays can be striped
together to create a single large array consisting of pairs of mirrored drives. All writes
must go to both drives of a mirrored pair so that the information on the drives is kept
identical. However, each individual drive can perform simultaneous, independent read
operations. Mirroring thus doubles the read performance of a single non-mirrored
drive while the write performance is unchanged. RAID 1 delivers the best
performance of any redundant array type. In addition, there is less performance
degradation during drive failure than in RAID 5 arrays.
RAID 3 sector-stripes data across groups of drives, but one drive in the group is
dedicated to storing parity information. RAID 3 relies on the embedded ECC in each
sector for error detection. In the case of drive failure, data recovery is accomplished
by calculating the exclusive OR (XOR) of the information recorded on the remaining
drives. Records typically span all drives, which optimizes the disk transfer rate.
Because each I/O request accesses every drive in the array, RAID 3 arrays can satisfy
only one I/O request at a time. RAID 3 delivers the best performance for single-user,
single-tasking environments with long records. Synchronized-spindle drives are
required for RAID 3 arrays in order to avoid performance degradation with short
records. RAID 5 arrays with small stripes can yield similar performance to RAID 3
arrays.
Under RAID 5, parity information is distributed across all the drives. Since there is no
dedicated parity drive, all drives contain data and read operations can be overlapped
on every drive in the array. Write operations will typically access one data drive and
one parity drive. However, because different records store their parity on different
drives, write operations can usually be overlapped.
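The parity arithmetic behind this recovery is plain XOR, as the following sketch shows. It illustrates the principle only; it is not the subsystem's firmware logic.

    # RAID 5-style XOR parity: any single missing block can be rebuilt by
    # XOR-ing the surviving data blocks with the parity block.
    def xor_blocks(blocks):
        """Byte-wise XOR of equal-length blocks."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
    parity = xor_blocks(data)            # parity block on a fourth drive

    # Simulate losing drive 1 and rebuilding its block from the survivors:
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]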
Dual-level RAID achieves a balance between the increased data availability inherent
in RAID 1 and RAID 5 and the increased read performance inherent in disk striping
(RAID 0). These arrays are sometimes referred to as RAID 0+1 or RAID 10 and RAID
0+5 or RAID 50.
RAID 6 is similar to RAID 5 in that data protection is achieved by writing parity
information to the physical drives in the array. With RAID 6, however, two sets of
parity data are used. These two sets are different, and each set occupies a capacity
equivalent to that of one of the constituent drives. The main advantage of RAID 6 is
high data availability: any two drives can fail without loss of critical data.
In summary:
RAID 0 is the fastest and most efficient array type but offers no fault-tolerance.
RAID 0 requires a minimum of two drives.
RAID 1 is the best choice for performance-critical, fault-tolerant environments.
RAID 1 is the only choice for fault-tolerance if no more than two drives are used.
RAID 3 can be used to speed up data transfer and provide fault-tolerance in single-
user environments that access long sequential records. However, RAID 3 does not
allow overlapping of multiple I/O operations and requires synchronized-spindle
drives to avoid performance degradation with short records. RAID 5 with a small
stripe size offers similar performance.
RAID 5 combines efficient, fault-tolerant data storage with good performance
characteristics. However, write performance and performance during drive failure
is slower than with RAID 1. Rebuild operations also require more time than with
RAID 1 because parity information is also reconstructed. At least three drives are
required for RAID 5 arrays.
RAID 6 is essentially an extension of RAID level 5 which allows for additional fault
tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped on a block level across a set of drives, just like
in RAID 5, and a second set of parity is calculated and written across all the drives;
RAID 6 provides for an extremely high data fault tolerance and can sustain
multiple simultaneous drive failures. It is a perfect solution for mission critical
applications.
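To illustrate why a second, independent parity set can survive two simultaneous failures, the sketch below computes a RAID 6-style P parity (plain XOR) and Q parity (XOR weighted by distinct GF(2^8) coefficients), then rebuilds two lost data bytes. The field polynomial and coefficients follow one common convention; actual RAID 6 encodings, including this subsystem's, may differ.

    # Sketch of RAID 6 dual parity: P is plain XOR, Q weights each data drive
    # with a distinct GF(2^8) coefficient, giving two independent equations.
    def gf_mul(a: int, b: int) -> int:
        """Multiply in GF(2^8) using the polynomial x^8+x^4+x^3+x^2+1 (0x11D)."""
        product = 0
        for _ in range(8):
            if b & 1:
                product ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= 0x11D
        return product

    def gf_inv(a: int) -> int:
        """Multiplicative inverse in GF(2^8), found by brute force for clarity."""
        return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

    data = [0x41, 0x42, 0x43]            # one byte per data drive
    coeffs = [1]                         # coefficient g^i for drive i (g = 2)
    for _ in range(len(data) - 1):
        coeffs.append(gf_mul(coeffs[-1], 2))

    P, Q = 0, 0
    for c, d in zip(coeffs, data):
        P ^= d                           # first parity set (as in RAID 5)
        Q ^= gf_mul(c, d)                # second, independent parity set

    # Lose drives 0 and 2 at once; strip the surviving term, then solve the
    # resulting 2x2 system over GF(2^8) for the two missing bytes.
    i, j = 0, 2
    p_rem = P ^ data[1]                        # = d_i ^ d_j
    q_rem = Q ^ gf_mul(coeffs[1], data[1])     # = c_i*d_i ^ c_j*d_j
    d_j = gf_mul(q_rem ^ gf_mul(coeffs[i], p_rem), gf_inv(coeffs[i] ^ coeffs[j]))
    d_i = p_rem ^ d_j
    assert (d_i, d_j) == (data[i], data[j])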
RAID Management
The subsystem can implement several different levels of RAID technology. RAID levels
supported by the subsystem are shown below.
RAID 0 – Block striping is provided, which yields higher performance than with individual drives. There is no redundancy. (Minimum drives: 1)

RAID 1 – Drives are paired and mirrored. All data is 100% duplicated on an equivalent drive. Fully redundant. (Minimum drives: 2)

N-way mirror – Extension of RAID 1. It has N copies of the disk. (Minimum drives: N)

RAID 3 – Data is striped across several physical drives. Parity protection is used for data redundancy. (Minimum drives: 3)

RAID 5 – Data is striped across several physical drives. Parity protection is used for data redundancy. (Minimum drives: 3)

RAID 6 – Data is striped across several physical drives. Parity protection is used for data redundancy. Requires N+2 drives to implement because of the two-dimensional parity scheme. (Minimum drives: 4)

RAID 0+1 – Mirroring of two RAID 0 disk arrays. This level provides striping and redundancy through mirroring. (Minimum drives: 4)

RAID 10 – Striping over two RAID 1 disk arrays. This level provides mirroring and redundancy through striping. (Minimum drives: 4)

JBOD – The abbreviation of "Just a Bunch Of Disks". JBOD needs at least one hard drive. (Minimum drives: 1)
1.5 Volume Relationship Diagram
This diagram shows how the volume structure of the iSCSI RAID subsystem is
designed. It describes the relationship of RAID components. One RG (RAID group)
consists of a set of VDs (Virtual disk) and owns one RAID level attribute. Each RG can
be divided into several VDs. The VDs in one RG share the same RAID level, but may
have different volume capacity. Each VD will be associated with one specific CV (Cache
Volume) to execute the data transaction. Each CV can have different cache memory
size by user’s modification/setting. LUN (Logical Unit Number) is a unique identifier, in
which users can access through SCSI commands.
[Figure: volume relationship diagram showing VD 1, VD 2 and VD 3 built from PDs (PD 1, PD 2, PD 3) with a dedicated spare (DS), attached to the Global CV and a Dedicated CV]
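Read as a containment model, the same relationships can be sketched with plain data structures. The class and field names below are illustrative only, not the subsystem's internal objects.

    # Toy model of the volume relationships above: PDs form an RG, the RG is
    # carved into VDs, each VD uses a CV and is exposed to hosts as a LUN.
    from dataclasses import dataclass, field

    @dataclass
    class PhysicalDisk:                  # PD: member disk of one specific RG
        slot: int
        size_gb: int

    @dataclass
    class CacheVolume:                   # CV: global by default, or dedicated
        size_mb: int
        dedicated: bool = False

    @dataclass
    class VirtualDisk:                   # VD: one slice of the RG's capacity
        name: str
        size_gb: int
        cache: CacheVolume
        lun: int                         # the identifier hosts address via SCSI

    @dataclass
    class RaidGroup:                     # RG: owns one RAID level for all VDs
        raid_level: str
        disks: list[PhysicalDisk]
        virtual_disks: list[VirtualDisk] = field(default_factory=list)

    global_cv = CacheVolume(size_mb=512)
    rg = RaidGroup("RAID 5", [PhysicalDisk(slot, 500) for slot in range(3)])
    rg.virtual_disks.append(VirtualDisk("vd1", 200, global_cv, lun=0))
    rg.virtual_disks.append(VirtualDisk("vd2", 300, CacheVolume(128, dedicated=True), lun=1))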
Chapter 2 Getting Started
2.1 Packaging, Shipment and Delivery
Before removing the subsystem from the shipping carton, visually inspect the
physical condition of the shipping carton. Exterior damage to the shipping carton
may indicate that the contents of the carton are damaged.
Unpack the subsystem and verify that the contents of the shipping carton are
all there and in good condition.
If any damage is found, do not remove the components; contact the dealer
where you purchased the subsystem for further instructions.
2.2 Unpacking the Subsystem
The package contains the following items:
• iSCSI RAID subsystem unit
• One power cord
• Three Ethernet LAN cables
• One RS232 null modem cable (phone jack to DB9)
• One UPS cable (phone jack to DB9)
• Installation Reference Guide
• Spare screws, etc.
If any of these items are missing or damaged, please contact your dealer or sales
representative for assistance.
2.3 Identifying Parts of the iSCSI RAID Subsystem
The illustrations below identify the various parts of the subsystem.
2.3.1 Front View
[Figure: front view with numbered callouts 1–7]

1. Carrier Open Button – Use this to open the disk tray. Press the button to open. This button also shows the Lock Indicator. When the Lock Groove is horizontal, the Drive Tray is locked; when the Lock Groove is vertical, the Drive Tray is unlocked. Lock and unlock the Drive Trays by using a flat-head screwdriver.

2. Tray Lever – Use this to pull out the disk tray.

3. HDD Status Indicator – Every Drive Tray contains two LEDs for displaying the HDD status:

HDD Status LED – Green indicates power is on and the hard drive status is good for this slot. Red indicates no hard drive.
HDD Access LED – Blinks blue when the hard drive is being accessed.

4. Activity LED – This LED blinks blue when the controller is busy or data is being accessed.
5. LCD Display Panel

6. LCD Control Module (LCM) – Use the function keys to navigate through the menu options available in the LCM:

Up and Down Arrow buttons – Use the Up or Down arrow keys to go through the information on the LCD screen. These are also used to move between each menu when you configure the subsystem.
Select button – This is used to enter the option you have selected.
Exit button (EXIT) – Press this button to return to the previous menu.

7. Environment Status LEDs:

Power LED – Green LED indicates power is ON.
Power Fail LED – If a redundant power supply unit fails, this LED will turn red and an alarm will sound.
Fan Fail LED – When a fan fails, this LED will turn red and an alarm will sound.
Over Temperature LED – If temperature irregularities occur in the system (HDD slot temperature over 45°C), this LED will turn red and an alarm will sound.
Voltage Warning LED – If a voltage abnormality occurs, this LED will turn red and an alarm will sound.
Access LED – This LED will blink blue when the RAID controller is busy/active.
2.3.2 Rear View
1. Uninterruptible Power Supply (UPS) Port (APC Smart UPS only)
The subsystem may come with an optional UPS port allowing you to connect an APC
Smart UPS device. Connect the cable from the UPS device to the UPS port located at
the rear of the subsystem. This will automatically allow the subsystem to use the
functions and features of the UPS.
2. RS232 Port
The subsystem is equipped with an RS232 serial port allowing you to connect a PC or
terminal. Use the null modem cable to connect this port to the PC or terminal.
3. R-Link Port: Remote Link through RJ-45 Ethernet for remote management
The subsystem is equipped with one 10/100 Ethernet RJ45 LAN port. Use web browser
to manage the RAID subsystem through Ethernet for remote configuration and
monitoring.
4. LAN1 / LAN2 Gigabit Ports
The subsystem is equipped with two Gigabit data ports for connecting to the network.
5. Cooling Fan
One blower fan is located at the rear of the subsystem. It provides sufficient airflow
and heat dispersion inside the chassis. In case a fan fails to function, the Fan Fail
LED will turn red and an alarm will sound.
6. Power Switch
Use this to power on the system.
2.4 Connecting the iSCSI RAID Subsystem to Your Network
To connect the iSCSI unit to the network, insert the cable that came with the unit
into the Gigabit network port (LAN1) on the back of the iSCSI unit. Insert the other
end into a Gigabit Ethernet connection on your network hub or switch. You
may connect the other network port (LAN2) if needed.
For remote management of the iSCSI RAID subsystem, connect the R-Link port to
your network.
2.5 Powering On
1. Plug the power cord into the AC Power Input Socket located at the rear of
the subsystem.
2. Turn on the Power Switch.
3. The Power LED on the front Panel will turn green.
2.6 Installing Hard Drives
The subsystem supports hot-swapping, allowing you to install or replace a
hard drive while the subsystem is running.
Each Drive Carrier has a locking mechanism. When the Lock Groove is horizontal,
the Drive Carrier is locked. When the Lock Groove is vertical, the Drive Carrier is
unlocked. Lock and unlock the Drive Carriers by using a flat-head screwdriver.
The Lock Grooves are located on the carrier open button.
a. Press the Carrier Open button and the Drive Carrier handle will flip open.
b. Pull out an empty disk tray. Pull the lever handle outwards to remove the
carrier from the enclosure.
c. Place the hard drive in the disk tray. Make sure the holes of the disk tray
align with the holes of the hard drive.
d. Install the mounting screws on the bottom part to secure the drive in the
disk tray.
e. Slide the tray into a slot.
f. Close the lever handle until you hear the latch click into place.
2.7 iSCSI Introduction
iSCSI (Internet SCSI) is a protocol which encapsulates SCSI (Small Computer
System Interface) commands and data in TCP/IP packets for linking storage
devices with servers over common IP infrastructures. iSCSI provides high
performance SANs over standard IP networks like LAN, WAN or the Internet.
IP SANs are true SANs (Storage Area Networks) which allow servers to attach to a
virtually unlimited number of storage volumes by using iSCSI over TCP/IP
networks. IP SANs can scale the storage capacity with any type and brand of
storage system. In addition, any type of network (Ethernet, Fast Ethernet,
Gigabit Ethernet) can be used, and operating systems (Microsoft Windows, Linux,
Solaris, etc.) can be combined within the SAN network. IP SANs also include
mechanisms for security, data replication, multi-path and high availability.
A storage protocol such as iSCSI has "two ends" in the connection. These ends are
the initiator and the target. In iSCSI we call them the iSCSI initiator and the iSCSI
target. The iSCSI initiator requests or initiates any iSCSI communication. It requests
all SCSI operations like read or write. An initiator is usually located on the host/server
side (either an iSCSI HBA or an iSCSI software initiator).
The iSCSI target is the storage device itself or an appliance which controls and
serves volumes or virtual volumes. The target is the device which performs SCSI
commands or bridges them to an attached storage device. iSCSI targets can be disks,
tapes, RAID arrays, tape libraries, etc.
The host side needs an iSCSI initiator. The initiator is a driver which handles the
SCSI traffic over iSCSI. The initiator can be software or hardware (HBA). Please
refer to the certification list of iSCSI HBAs in Appendix A. OS-native initiators or
other software initiators use the standard TCP/IP stack and Ethernet hardware,
while iSCSI HBAs use their own iSCSI and TCP/IP stacks on board.
A hardware iSCSI HBA provides its own initiator tool. Please refer to the vendor's
HBA user manual. Microsoft, Linux and Mac provide software iSCSI initiator
drivers. Below are the available links:
[Figure: IP SAN topology – Host 1 and Host 2 (initiators, one using an iSCSI HBA) connect through the IP SAN to iSCSI device 1 and iSCSI device 2 (targets)]
1. Link to download the Microsoft iSCSI software initiator:
Please refer to Appendix D for Microsoft iSCSI initiator installation procedure.
2. A Linux iSCSI initiator is also available. For different kernels, there are different
iSCSI drivers. If you need the latest Linux iSCSI initiator, please visit the Open-iSCSI
project for the most up-to-date information. The Linux-iSCSI (sfnet) and Open-iSCSI
projects merged on April 11, 2005.
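As an example, on a Linux host with the Open-iSCSI tools installed, target discovery and login typically take the following form; the data-port IP and target IQN are placeholders to replace with your own values.

    # Discover the targets the subsystem offers on one of its data ports
    iscsiadm -m discovery -t sendtargets -p <data-port-ip>
    # Log in to a discovered target (the IQN comes from the discovery output)
    iscsiadm -m node -T <target-iqn> -p <data-port-ip> --login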
There are three methods for managing the iSCSI RAID subsystem, described in the
following sections:
2.8.1 Web GUI
The iSCSI RAID subsystem provides a graphical user interface to manage the
system. Be sure to connect the LAN cable to your R-Link port. The default setting of
the management port IP is DHCP, and the DHCP address is displayed on the LCM;
check the LCM for the IP first, then open the browser and type the DHCP address.
(The DHCP address is dynamic and may need to be checked again after each
reboot.) When DHCP service is not available, the subsystem uses zero config
(Zeroconf) to get an IP address.
Example: on the LCM below, the subsystem got the DHCP address 192.168.10.50
from the DHCP server.
192.168.10.50
iSCSI-Model-Name ←
http://192.168.10.50
or
https://192.168.10.50 (https: connection with encrypted Secure Sockets Layer
(SSL). Please be aware that the https function is slower than http.)
Click any function the first time; a dialog will pop up to authenticate the current
user.
Login name: admin
Default password: 00000000
Or login with the read-only account, which only allows viewing the configuration
and cannot change settings:
Login name: user
Default password: 1234
2.8.2 Console Serial Port
Use the null modem cable to connect to the console port.
The console setting is baud rate: 115200, 8 bits, 1 stop bit, and no parity.
Terminal type: vt100
Login name: admin
Default password: 00000000
2.8.3 Remote Control – Secure Shell
SSH (Secure Shell) is required to log in to the iSCSI RAID subsystem remotely. SSH
client software is available at the following web site:
Host name: 192.168.10.50 (Please check your DHCP address for this field.)
Login name: admin
Default password: 00000000
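From a host with a standard OpenSSH client, for example, the login takes the usual form (using the example DHCP address above):

    ssh admin@192.168.10.50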
NOTE: The iSCSI RAID series only supports SSH for remote
control. To use SSH, the IP address and the password are
required for login.
2.9 Enclosure
2.9.1 LCD Control Module (LCM)
There are four buttons to control the subsystem LCM (LCD Control Module): Up,
Down, Enter (ENT) and Escape (ESC).
After booting up the system, the following screen shows the management port IP and
model name:

192.168.10.50
iSCSI-Model-Name ←

Press Enter; the LCM functions "System Info", "Alarm Mute", "Reset/Shutdown",
"Quick Install", "Volume Wizard", "View IP Setting", "Change IP Config" and
"Reset to Default" will rotate by pressing Up and Down.
When there is a WARNING or ERROR level event (LCM default filter), the LCM shows
the event log to give users more detail from the front panel too.
The following table describes each function:

System Info – View system information: firmware version & RAM size.
Alarm Mute – Mute the alarm when an error occurs.
Reset/Shutdown – Reset or shutdown the controller.
Quick Install – Quick three steps to create a volume. Please refer to section 3.3 for operation in the web UI.
Volume Wizard – Smart steps to create a volume. Please refer to the next chapter for operation in the web UI.
View IP Setting – Display the current IP address, subnet mask, and gateway.
Change IP Config – Set IP address, subnet mask, and gateway. There are 2 selections: DHCP (get IP address from DHCP server) or static IP.
Reset to Default – Resets the password to the default (00000000) and the IP address to the default DHCP setting.
Default IP address: 192.168.10.50 (DHCP)
Default subnet mask: 255.255.255.0
Default gateway: 192.168.10.254
The following is the LCM menu hierarchy.

proIPS (Up/Down)
  [System Info.]
    [Firmware Version]
    [RAM Size]
  [Alarm Mute]
    [Yes / No]
  [Reset/Shutdown]
    [Reset] → [Yes / No]
    [Shutdown] → [Yes / No]
  [Quick Install]
    RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 / RAID 0+1
      → xxx GB → [Apply The Config] → [Yes / No]
  [Volume Wizard]
    [Local]
      RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 / RAID 0+1
        → [Use default algorithm] → [Volume Size] xxx GB → [Apply The Config] → [Yes / No]
    [JBOD x]
      RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 / RAID 0+1
        → [new x disk] (Up/Down) → xxx GB (adjust volume size) → [Apply The Config] → [Yes / No]
  [View IP Setting]
    [IP Config] [Static IP]
    [IP Address] [192.168.010.050]
    [IP Subnet Mask] [255.255.255.0]
    [IP Gateway] [192.168.010.254]
  [Change IP Config]
    [DHCP] → [Yes / No]
    [Static IP]
      [IP Address] (adjust IP address)
      [IP Subnet Mask] (adjust subnet mask)
      [IP Gateway] (adjust gateway IP)
      [Apply IP Setting] → [Yes / No]
  [Reset to Default]
    [Yes / No]

WARNING: Before powering off, it is better to execute "Shutdown"
to flush the data from cache to the physical disks.
2.9.2 System Buzzer
The system buzzer features are described as follows:
1. The system buzzer alarms for 1 second when the system boots up successfully.
2. The system buzzer alarms continuously when an error-level event happens. The
alarm stops after it is muted.
3. The alarm is muted automatically when the error situation is resolved. E.g., when
a RAID 5 array is degraded, the alarm sounds immediately; after the user
changes/adds one physical disk for rebuilding and the rebuilding is done, the
alarm is muted automatically.
Chapter 3 Web GUI Guideline
3.1 iSCSI RAID Subsystem GUI Hierarchy
The table below shows the hierarchy of the subsystem GUI.
Quick installation → Step 1 / Step 2 / Confirm
System configuration
  System setting → System name / Date and time
  IP address → MAC address / Address / DNS / port
  Login setting → Login configuration / Admin password / User password
  Mail setting → Mail
  Notification setting → SNMP / Messenger / System log server / Event log filter
iSCSI configuration
  Entity property → Entity name / iSNS IP
  NIC → IP settings for iSCSI ports / Become default gateway / Enable jumbo frame
  Node → Create / Authenticate / Rename / User / Delete
  Session → Session information / Delete
  CHAP account → Create / Delete
Volume configuration
  Volume create wizard → Step 1 / Step 2 / Step 3 / Step 4 / Confirm
  Physical disk → Set Free disk / Set Global spare / Set Dedicated spare / Set property / More information
  RAID group → Create / Migrate / Activate / Deactivate / Scrub / Delete / Set disk property / More information
  Virtual disk → Create / Extend / Scrub / Delete / Set property / Attach LUN / Detach LUN / List LUN / More information
  Logical unit
Enclosure management
  SES configuration
  Hardware monitor
  S.M.A.R.T.
  UPS
Maintenance
  System information
  Upgrade → Browse the firmware to upgrade / Export configuration
  Reset to default → Sure to reset to factory default?
  Import and export → Import/Export / Import file
  Event log → Download / Mute / Clear
  Reboot and shutdown → Reboot / Shutdown
Logout → Sure to logout?
3.2 Login
The iSCSI RAID subsystem provides a graphical user interface (GUI) to operate the
system. Be sure to connect the LAN cable. The default IP setting is DHCP; open a web
browser and enter:
http://192.168.10.50 (Please check the DHCP address first on the LCM.)
Click any function the first time; a dialog will pop up for authentication.
Login name: admin
Default password: 00000000
After login, you can choose the function blocks on the left side of the window to
perform configuration.