Quantum, DLT, DLTtape, the Quantum logo, and the DLTtape logo are all registered trademarks of Quantum Corporation.
SDLT and Super DLTtape are trademarks of Quantum Corporation. Other trademarks may
be mentioned herein which belong to other companies.
StorNext 3.1.4 Release Notes
6-00431-25 Rev A
September 2009
Purpose of This Release
StorNext 3.1.4 includes enhancements that extend the capabilities of StorNext
Storage Manager (SNSM) and StorNext File System (SNFS). This document
describes these enhancements, as well as supported platforms and system
components. This document also lists currently known issues, issues that were
resolved for this release, and known limitations.
Visit www.quantum.com/ServiceandSupport for additional information and updates for StorNext.
New Features and Enhancements
StorNext 3.1.4 adds new support for operating systems, libraries and tape
drives.
Added Operating System Support
Support has been added for the following operating systems:
• Windows Vista Service Pack 2 (SP2) for x86 32-bit and x86 64-bit systems
• Windows Server 2008 Service Pack 2 (SP2) for x86 32-bit and x86 64-bit
systems
• Red Hat Enterprise Linux 4 Update 8 (2.6.9-89 EL) for x86 32-bit and
x86 64-bit systems
• Red Hat Enterprise Linux 5 Update 3 (2.6.18-128 EL) for x86 64-bit systems
Added Library and Drive Support
Support has been added for the following libraries and drives:
• Quantum DXi 7500 library
• Sun/StorageTek T10000 Rev B tape drives for Sun/StorageTek SCSI and Fibre
Channel L700 libraries
• Sun/StorageTek T10000 Rev B tape drives for Sun/StorageTek ACSLS 7.3
SL3000 libraries
New Media Limiting Feature
StorNext users can now limit the number of new tape media used for stores by a single policy class. In most cases only one new piece of media will be used per policy class at a time: a new piece of media must be filled before another can be allocated for the policy class.
To understand when only one piece of media will be used, and in what circumstances several media might be used, it is necessary to understand how Storage Manager allocates media.
Consider the following example. In standard StorNext Storage Manager without
the new feature, suppose we have a group of 1200 files that need to be stored
for policy class pc1. Assume there are four tape drives available, and they are
currently idle. Assume also that there is a supply of blank unassigned media,
and that there are no partially used media currently assigned to policy class pc1.
Storage Manager splits its files to be stored into groups of 300 (the default
value), so in this example the list of 1200 files to be stored will be split into four
groups of 300 files each. Four fs_fmover processes can then copy the files to
four different newly allocated media. This is good because it achieves
parallelization for the copies for class pc1, but a side effect is that four different
media are now assigned to class pc1, and each of the four media might be only
partially used.
The new media limiting feature allows the StorNext administrator to select a
different media allocation strategy. The alternate allocation strategy will allocate
one new media for the policy class, and files will be written only to that media
until it fills up. Then a new media will be allocated and stores are written only to
that media. And so on. This feature maximizes storage media usage per policy
class by eliminating the possible parallelization of copy operations (stores) for
the policy class.
When preparing to store a set of files, Storage Manager looks to see if there are
any media already assigned to the policy class that can hold at least one of the
files in the list. If so, the store operation proceeds, using that piece of media. If
there are no such available media, a blank tape is requested. That blank tape will
be used for the store operation and will then be owned by the policy class.
To limit the media used per policy class, insert this line in the file /usr/adic/TSM/config/fs_sysparm_override:
LIMIT_MEDIA_PER_CLASS=y;
You must stop and restart Storage Manager for this change to take effect. To disable the feature, remove that line from /usr/adic/TSM/config/fs_sysparm_override and then stop and restart Storage Manager.
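The enable/disable procedure above can be sketched as a small shell fragment. The path variable and the idempotency check are illustrative additions, not part of the documented procedure; on a real MDC the file is /usr/adic/TSM/config/fs_sysparm_override.

```shell
# Stand-in path so this sketch can be exercised anywhere; on a real MDC
# it would be /usr/adic/TSM/config/fs_sysparm_override.
SYSPARM_OVERRIDE="${SYSPARM_OVERRIDE:-$(mktemp)}"

# Add the parameter only if it is not already present.
grep -q '^LIMIT_MEDIA_PER_CLASS=' "$SYSPARM_OVERRIDE" \
  || echo 'LIMIT_MEDIA_PER_CLASS=y;' >> "$SYSPARM_OVERRIDE"

# To disable the feature later, delete the line again:
#   sed -i '/^LIMIT_MEDIA_PER_CLASS=/d' "$SYSPARM_OVERRIDE"
# In either case, stop and restart Storage Manager afterward.
```

Either change takes effect only after Storage Manager is restarted, as noted above.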
Enabling the feature prevents Storage Manager from requesting a new blank
tape if there is a piece of media owned by the policy class containing enough
space to hold at least one of the files in the current store list. However, if there
are several media already assigned to the policy class they can still be used to
store files at the same time.
You must be careful if no media are currently assigned to the policy class,
because a new piece of media is not really assigned to a policy class until a copy
operation to that media has successfully completed.
Consider the example above in which we wanted to store 1200 files. Even with the limiting feature enabled, four media would be used, because the four store operations (300 files each) would be launched almost simultaneously. Each of these store operations queries the media database and concludes that there are no writable media assigned to the policy class, so each of the four requests a new blank tape. When the operations have completed, four media are assigned to the policy class.
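The race described above can be illustrated with a toy shell model. This is purely illustrative; it invokes no StorNext command, and the loop stands in for four stores launched at the same time, each of which checks the class's media list before any copy has completed.

```shell
# Toy model: four simultaneous stores each check the media list owned by
# policy class pc1. Media are only assigned after a copy completes, so at
# launch time every store sees an empty list and requests a blank tape.
ASSIGNED_MEDIA=""   # media owned by policy class pc1 (none yet)
BLANKS_REQUESTED=0
for store in 1 2 3 4; do
  if [ -z "$ASSIGNED_MEDIA" ]; then
    BLANKS_REQUESTED=$((BLANKS_REQUESTED + 1))
  fi
done
echo "blank tapes requested: $BLANKS_REQUESTED"
```

Seeding the class with one stored file, as described next, gives the later stores an assigned piece of media to wait for instead.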
This behavior can be avoided by seeding the process with one file. If we stored
one file initially (with the
fsstore command, for example) then there would be
one piece of media assigned to the policy class. Then, if we stored 1200 files,
each of the stores of 300 files would wait for that piece of media. If that piece
of media fills, a new piece will be assigned and stores will be able to continue.

Other Changes
This section describes additional changes in the StorNext 3.1.4 release.
New VTOC Label Format
In StorNext 3.1.3 an alternate VTOC label format was introduced. This new label
format allows for better label compatibility between architectures in future
StorNext releases. The default label format in 3.1.3 was the original VTOC
format. Beginning with release 3.1.4, the default is the newer VTOC format.
In 3.1.x there are compatibility considerations that must be taken into account
when using VTOC labels:
1) The old VTOC format is not compatible with Solaris systems.
2) The new VTOC format is not compatible with IRIX systems on StorNext releases prior to 3.5.0.
See the cvlabel documentation for more on VTOC label formats. The '-i' and '-I'
flags indicate which VTOC format to use.
Quantum recommends against using the old VTOC format. If IRIX platforms are
included in a StorNext environment, contact Quantum regarding upgrading to
StorNext 3.5.0 or higher.
For more information about this VTOC label change, see StorNext Product Bulletin 31 on Quantum.com.
Updated Database
StorNext 3.1.4 incorporates a new version of the database, revision 104, which
corrects an issue in earlier database versions.
This issue caused truncation problems, and files on several tapes were erroneously reported as having no path: running the fsmedinfo command for these files reported PATH UNKNOWN even though the directories existed. The problem occurred because running fsclean -r did not properly clean out the oldmedia table; because of the stale entries in the oldmedia table, database queries did not return the correct number of entries.
Storage Manager Parameter Deprecated
This release includes an optimization for the Storage Manager rebuild policy. As
part of this optimization, the MAPPER_MAX_THREADS system parameter has
been deprecated.
When upgrading, if this parameter exists in /usr/adic/TSM/config/fs_sysparm or /usr/adic/TSM/config/fs_sysparm_override, it will
automatically be removed. If the parameter is added to these files, it will have
no effect.
Discontinued Support on Some Platforms
Some StorNext services that were supported on various platforms in StorNext
3.1.3 and other previous releases are no longer supported in StorNext 3.1.4.
These services will continue to be supported for previous StorNext releases, but beginning with release 3.1.4 they are no longer supported.
Table 1 (Discontinued Platforms) shows the StorNext services for which support is discontinued as of StorNext 3.1.4.
Changes From Previous Releases
The following change was instituted in a previous StorNext release and is listed
here as a reminder that important settings have been changed.
Revised FSBlockSize, Metadata Disk Size, and JournalSize Settings
The FsBlockSize (FSB), metadata disk size, and JournalSize settings all
work together. For example, the FsBlockSize must be set correctly in order for
the metadata sizing to be correct. JournalSize is also dependent on the
FsBlockSize.
For FsBlockSize, the optimal settings for both performance and space utilization are in the range of 16K to 64K.
Settings greater than 64K are not recommended because performance will be
adversely impacted due to inefficient metadata I/O operations. Values less than
16K are not recommended in most scenarios because startup and failover time
may be adversely impacted. Setting FsBlockSize (FSB) to higher values is
important for multi-terabyte file systems for optimal startup and failover time.
Note: This is particularly true for slow CPU clock speed metadata servers such
as Sparc. However, values greater than 16K can severely consume
metadata space in cases where the file-to-directory ratio is low (e.g.,
less than 100 to 1).
For metadata disk size, all new installations must have a minimum of 25 GB, with more space allocated depending on the number of files per directory and the size of your file system.
The following table shows suggested FsBlockSize (FSB) settings and metadata disk space based on the average number of files per directory and file system size. The amount of disk space listed for metadata is in addition to the 25 GB minimum amount. Use this table to determine the setting for your configuration.

Average No. of Files   File System Size:              File System Size:
Per Directory          Less Than 10TB                 10TB or Larger
Less than 10           FSB: 16KB                      FSB: 64KB
                       Metadata: 32 GB per 1M files   Metadata: 128 GB per 1M files
10-100                 FSB: 16KB                      FSB: 64KB
                       Metadata: 8 GB per 1M files    Metadata: 32 GB per 1M files
100-1000               FSB: 64KB                      FSB: 64KB
                       Metadata: 8 GB per 1M files    Metadata: 8 GB per 1M files
1000+                  FSB: 64KB                      FSB: 64KB
                       Metadata: 4 GB per 1M files    Metadata: 4 GB per 1M files

The best rule of thumb is to use a 16K FsBlockSize unless other requirements such as directory ratio dictate otherwise.
This setting is not adjustable after initial file system creation, so it is very important to give it careful consideration during initial configuration.
Example: FsBlockSize 16K
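The sizing rule above can be worked through with simple arithmetic. The sketch below assumes a file system under 10TB with 100-1000 files per directory (8 GB of metadata per 1M files from the table) and a five-million-file count chosen purely for illustration.

```shell
# Worked example of the metadata sizing rule: 25 GB base plus a
# per-1M-files figure taken from the table for the configuration.
BASE_GB=25              # minimum for all new installations
PER_MILLION_GB=8        # table row: <10TB file system, 100-1000 files/dir
FILE_COUNT_MILLIONS=5   # illustrative total file count

TOTAL_GB=$((BASE_GB + PER_MILLION_GB * FILE_COUNT_MILLIONS))
echo "metadata disk space needed: at least ${TOTAL_GB} GB"
```

For this hypothetical configuration the rule yields at least 65 GB of metadata disk space.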
JournalSize Setting
The optimal settings for JournalSize are in the range between 16M and 64M,
depending on the FsBlockSize. Avoid values greater than 64M due to
potentially severe impacts on startup and failover times. Values at the higher
end of the 16M-64M range may improve performance of metadata operations
in some cases, although at the cost of slower startup and failover time.
The following table shows recommended settings. Choose the setting that
corresponds to your configuration.
FsBlockSize    JournalSize
16KB           16MB
64KB           64MB
This setting is adjustable using the cvupdatefs utility. For more information,
see the cvupdatefs man page.
Example: JournalSize 16M
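The mapping in the table above is simple enough to express as a lookup; the fragment below is an illustrative sketch, not StorNext tooling, and only mirrors the two documented pairings.

```shell
# Derive the recommended JournalSize from FsBlockSize per the table.
FS_BLOCK_SIZE="${FS_BLOCK_SIZE:-16K}"   # or 64K
case "$FS_BLOCK_SIZE" in
  16K) JOURNAL_SIZE=16M ;;
  64K) JOURNAL_SIZE=64M ;;
  *)   echo "no recommendation for FsBlockSize $FS_BLOCK_SIZE" >&2; exit 1 ;;
esac
echo "JournalSize $JOURNAL_SIZE"
```

On an existing file system the journal can then be resized with the cvupdatefs utility, as noted above.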
StorNext Upgrade Recommendations
Customers migrating to 3.1.4 should observe the following best practices:
• Whenever possible, StorNext systems should run the latest StorNext-supported operating system service pack or update level.
• When upgrading from StorNext 2.8 to StorNext 3.1.4 on RHEL4, you should
first upgrade the operating system to update 7. For example, if SNMS 2.8 is
installed on RHEL4U3, the upgrade procedure is:
1. Upgrade the operating system from RHEL4U3 to RHEL4U7
2. Upgrade the StorNext version from 2.8 to 3.1.4
Since StorNext 3.1.4 requires Update 5, 6 or 7 of RHEL4, these steps should
be performed one after the other to avoid running StorNext with an
unsupported Red Hat update level longer than necessary.
• StorNext 3.1.4 does not support RHEL5 GA (“update 0”). When upgrading a
system running RHEL5 GA to StorNext 3.1.4, you must first upgrade the
operating system to RHEL5U2.
• StorNext 3.1.4 does not support AIX 5.2. When upgrading clients running
AIX 5.2 to StorNext 3.1.4, perform the following steps:
1 Make a backup copy of /etc/filesystems
2 Uninstall StorNext
3 Upgrade the operating system to AIX 5.3
4 Install StorNext 3.1.4
5 Update /etc/filesystems with previously saved StorNext mount
information
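Steps 1 and 5 of the AIX procedure amount to preserving /etc/filesystems across the upgrade. A minimal sketch follows; the FILESYSTEMS variable and the .stornext.bak suffix are stand-ins so the sketch can run outside AIX, not part of the documented procedure.

```shell
# Stand-in for /etc/filesystems so this sketch can run anywhere.
FILESYSTEMS="${FILESYSTEMS:-$(mktemp)}"

# Step 1: make a backup copy before uninstalling StorNext.
cp "$FILESYSTEMS" "$FILESYSTEMS.stornext.bak"

# Step 5 (after installing StorNext 3.1.4): restore the saved StorNext
# mount information by consulting the backup, e.g.:
#   diff "$FILESYSTEMS.stornext.bak" "$FILESYSTEMS"
```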
Configuration Requirements
Before installing StorNext 3.1.4, note the following configuration requirements:
• In cases where gigabit networking hardware is used and maximum StorNext
performance is required, a separate, dedicated switched Ethernet LAN is
recommended for the StorNext metadata network. If maximum StorNext
performance is not required, shared gigabit networking is acceptable.
• A separate, dedicated switched Ethernet LAN is mandatory for the metadata
network if 100 Mbit/s or slower networking hardware is used.
• StorNext does not support file system metadata on the same network as
iSCSI, NFS, CIFS, or VLAN data when 100 Mbit/s or slower networking
hardware is used.
• The operating system on the metadata controller must always be run in U.S.
English.
• For Windows systems (server and client), the operating system must always
be run in U.S. English.
Caution: If a Library used by StorNext Storage Manager is connected via a
fibre switch, zone the switch to allow only the system(s) running
SNSM to have access to the library. This is necessary to ensure that
a “rogue” system does not impact the library and cause data loss or
corruption. For more information, see StorNext Product Alert 16.
This section contains the following configuration requirement topics:
• Library Requirements
• Disk Requirements
• Disk Naming Requirements
• SAN Disks on Windows Server 2008
Library Requirements
The following libraries require special configurations to run StorNext.
DAS and Scalar DLC Network-Attached Libraries
Prior to launching the StorNext Configuration Wizard, DAS and Scalar DLC network-attached libraries must have the DAS client already installed on the appropriate host control computer.
DAS Attached Libraries
For DAS attached libraries, refer to "Installation and Configuration" and "DAS Configuration File Description" in the DAS Installation and Administration Guide. The client name is either the default StorNext server host name or the name selected by the administrator.
StorNext can support LTO-3 WORM media in DAS connected libraries, but WORM media cannot be mixed with other LTO media types in one logical library.
To use LTO-3 WORM media in a logical library, before configuring the library in
StorNext, set the environmental variable XDI_DAS_MAP_LTO_TO_LTOW in the
/usr/adic/MSM/config/envvar.config file to the name of the library. The
library name must match the name given to the library when configuring it with
StorNext. If defining multiple libraries with this environmental variable, separate
them with a space. After setting the environmental variable, restart StorNext
Storage Manager (SNSM).
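The envvar.config change above can be sketched as follows. The library names lib1 and lib2 are hypothetical, the NAME=value line format is an assumption about envvar.config, and ENVVAR_CONFIG is a stand-in so the sketch can run outside a StorNext system.

```shell
# Stand-in for /usr/adic/MSM/config/envvar.config.
ENVVAR_CONFIG="${ENVVAR_CONFIG:-$(mktemp)}"

# Map LTO media to LTO-W for two hypothetical logical libraries;
# multiple library names are separated by a space.
echo 'XDI_DAS_MAP_LTO_TO_LTOW=lib1 lib2' >> "$ENVVAR_CONFIG"

# Restart StorNext Storage Manager (SNSM) afterward for the change to
# take effect.
```

The names given here must match the names assigned to the libraries when configuring them with StorNext, as noted above.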
Note: SDLC software may not correctly recognize LTO-3 WORM media in the
library and instead set it to “unknown media type.” In this case you
must manually change the media type to “LTO3” using the SDLC GUI.
Scalar DLC Attached Libraries
For Scalar 10K and Scalar 1000 DLC attached libraries, refer to "Installation and Configuration" and "Client Component Installation" in the Scalar Distributed Library Controller Reference Manual (6-00658-04).
The DAS client should be installed during the installation of the Scalar DLC attached libraries. Use this procedure to install the DAS client.
1 Select Clients > Create DAS Client.
The client name is either the default StorNext server host name or the name selected by the administrator.
2 When the DAS client is configured in Scalar DLC, select Aliasing.
3 Select sony_ait as the Media aliasing. (The default value is 8mm.)
4 Verify that Element Type has AIT drive selected.
5 Click Change to execute the changes.
ACSLS Attached Libraries
Due to limitations in the STK ACSLS interface, StorNext supports only single ACS
configurations (ACS 0 only).
Scalar i500 (Firmware Requirements)
For Scalar i500 libraries that do not have a blade installed, the library and drives must meet the following minimum firmware requirements. (These requirements apply only to Scalar i500 libraries that do not have a blade installed.)
• HP LTO-4 Fibre/SAS tape device minimum firmware level: H35Z
Caution: If you do not meet the minimum firmware requirements, you
might be unable to add a library to the Scalar i500 using the
StorNext Configuration Wizard.
Disk Requirements
Disk devices must support, at minimum, the mandatory SCSI commands for
block devices as defined by the SCSI Primary Commands-3 standard (SPC-3) and
the SCSI Block Commands-2 (SBC-2) standard.
To ensure disk reliability, Quantum recommends that disk devices meet the
requirements specified by Windows Hardware Quality Labs (WHQL) testing.
However, there is no need to replace non-WHQL certified devices that have been
used successfully with StorNext.
Disk devices must be configured with 512-byte or 4096-byte sectors, and the
underlying operating system must support the device at the given sector size.
StorNext customers that have arrays configured with 4096-byte sectors can use
only Windows, Linux and IRIX clients. Customers with 512-byte arrays can use
clients for any valid StorNext operating system (i.e., Windows, Linux, or UNIX).
In some cases, non-conforming disk devices can be identified by examining the
output of cvlabel –vvvl. For example:
/dev/rdsk/c1d0p0: Cannot get the disk physical info.
If you receive this message, contact your disk vendors to determine whether the
disk has the proper level of SCSI support.
Disk Naming Requirements
When naming disks, names should be unique across all SANs. If a client connects to more than one SAN, a conflict will arise if the client sees two disks with the same name.
SAN Disks on Windows Server 2008
SAN policy has been introduced in Windows Server 2008 to protect shared disks
accessed by multiple servers. The first time the server sees the disk it will be
offline, so StorNext is prevented from using or labeling the disk.
To bring the disks online, use the POLICY=OnlineAll setting. If this doesn’t
set the disks online after a reboot, you may need to go to Windows Disk
Management and set each disk online.
Follow these steps to set all disks online:
1 From the command prompt, type DISKPART
2 Type SAN to view the current SAN policy of the disks.
3 To set all the disks online, type SAN POLICY=onlineall.
4 After being brought online once, the disks should stay online after
rebooting.
5 If the disks appear as “Not Initialized” in Windows Disk Management after a
reboot, this indicates the disks are ready for use.
If the disks still appear as offline in Disk Management after rebooting, you
must set each disk online by right-clicking the disk and selecting Online.
This should always leave the SAN disks online after reboot.
Note: If the disks are shared among servers, the above steps may lead to data corruption. Use the proper SAN policy to protect data.
EXAMPLE:
C:\ >Diskpart
Microsoft DiskPart version 6.0.6001
Copyright (C) 1999-2007 Microsoft Corporation.
On computer: CALIFORNIA
DISKPART> SAN
SAN Policy : Offline All
DISKPART> san policy=onlineall
DiskPart successfully changed the SAN policy for the current
operating system.
Operating System Requirements
Table 2 shows the operating systems, kernel versions, and hardware platforms
that support StorNext File System, StorNext Storage Manager, and the StorNext
client software.
This table also indicates the platforms that support the following:
• MDC Servers
• Distributed LAN Servers
• File System LAN Clients
Table 2 StorNext Supported OSes and Platforms
Notes: When adding StorNext Storage Manager to a StorNext File System environment,
the metadata controller (MDC) must be moved to a supported platform. If you attempt
to install and run a StorNext 3.1.4 server that is not supported, you do so at your own
risk. Quantum strongly recommends against installing non-supported servers.
*64-bit versions of Windows support up to 128 distributed LAN clients. 32-bit versions of Windows are not recommended for MDC servers or distributed LAN servers due to memory limitations.