The Dell Fluid File System (FluidFS) network attached storage (NAS) solution is a highly available file
storage solution that aggregates multiple NAS controllers into one system and presents them to
UNIX, Linux, and Microsoft Windows clients as a single virtual file server.
How PowerVault FluidFS NAS Works
PowerVault FluidFS NAS leverages PowerVault FluidFS appliances and Dell PowerVault MD
storage to provide scale-out file storage to Microsoft Windows, UNIX, and Linux clients. The FluidFS
system supports SMB (CIFS) and NFS clients running on dedicated servers or on virtual systems that use
VMware virtualization.
The MD storage systems manage the “NAS pool” of storage capacity. The FluidFS system administrator
can create NAS volumes in the NAS pool, and CIFS shares and/or NFS exports to serve NAS clients
working on different platforms.
To the clients, the FluidFS system appears as a single file server, hosting multiple CIFS shares and NFS
exports, with a single IP address and namespace. Clients connect to the FluidFS system using their
respective operating systems’ NAS protocols:
•UNIX and Linux users access files through the NFS protocol
•Windows users access files through the SMB(CIFS) protocol
The FluidFS system serves data to all clients concurrently, with no performance degradation.
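For example, once a CIFS share or NFS export exists, clients attach to it through a client VIP using standard operating system tools. A minimal sketch, using a hypothetical client VIP of 192.168.10.50, an NFS export named /projects, and a CIFS share named projects (substitute the VIP, export, and share names configured on your system):

On a Linux or UNIX client:
  mount -t nfs 192.168.10.50:/projects /mnt/projects

On a Windows client:
  net use P: \\192.168.10.50\projects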
FluidFS Terminology
The following table defines terminology related to FluidFS scale‐out NAS.
Fluid File System (FluidFS): A special-purpose, Dell proprietary operating system providing enterprise-class, high-performance, scalable NAS services using Dell PowerVault, EqualLogic, or Dell Compellent SAN storage systems.

FluidFS Controller (NAS controller): A Dell hardware device capable of running the FluidFS firmware.

FluidFS Appliance (NAS appliance): An enclosure containing two NAS controllers. The controllers in an appliance are called peers, are hot-swappable, and operate in active-active mode.

Backup Power Supply (BPS): A backup power supply that keeps a FluidFS controller running in the event of a power failure and allows it to dump its cache to a nonvolatile storage device.

FluidFS system (cluster): Multiple NAS controllers appropriately connected and configured to form a single functional unit.

PowerVault FluidFS Manager: The WebUI management user interface used for managing PowerVault FluidFS systems.

NAS reserve (pool): The SAN storage system LUNs (and their aggregate size) allocated and provisioned to a FluidFS system.

NAS volume: A file system (a single-rooted directory/folder and file hierarchy) defined using FluidFS management functions over a portion of the NAS reserve.

Client Network (Client LAN): The network through which clients access CIFS shares or NFS exports, and through which the PowerVault FluidFS Manager is accessed.

Client VIP: Virtual IP address(es) that clients use to access CIFS shares and NFS exports hosted by the FluidFS system.

CIFS share: A directory in a NAS volume that is shared on the Client Network using the SMB (CIFS) protocol.

NFS export: A directory in a NAS volume that is shared on the Client Network using the Network File System (NFS) protocol.

Network Data Management Protocol (NDMP): The protocol used for NDMP backup and restore.

Replication partnership: A relation between two FluidFS systems that enables them to replicate NAS volumes between themselves.

Snapshot: A time-specific view of NAS volume data.
Key Features Of PowerVault FluidFS Systems
The following table summarizes key features of PowerVault FluidFS scale‐out NAS.
Shared back-end infrastructure: The MD system SAN and NX36x0 scale-out NAS leverage the same virtualized disk pool.

Unified block and file: Unified block (SAN) and file (NAS) storage.

High performance NAS: Support for a single namespace spanning up to two NAS appliances (four NAS controllers).

Capacity scaling: Ability to scale a single namespace up to 1024 TB capacity.

Connectivity options: 1GbE and 10GbE, copper and optical options for connectivity to the client network.

Highly available and active-active design: Redundant, hot-swappable NAS controllers in each NAS appliance. Both NAS controllers in a NAS appliance process I/O. The BPS allows maintaining data integrity in the event of a power failure by keeping a NAS controller online long enough to write the cache to the internal storage device.

Automatic load balancing: Automatic balancing of client connections across network ports and NAS controllers, as well as back-end I/O across MD array LUNs.

Multi-protocol support: Support for CIFS/SMB (on Windows) and NFS (on UNIX and Linux) protocols, with the ability to share user data across both protocols.

Client authentication: Control access to files using local and remote client authentication, including LDAP, Active Directory, and NIS.

Quota rules: Support for controlling client space usage.

File security style: Choice of file security mode for a NAS volume (UNIX or NTFS).

Cache mirroring: The write cache is mirrored between NAS controllers, which ensures a high performance response to client requests and maintains data integrity in the event of a NAS controller failure.

Journaling mode: In the event of a NAS controller failure, the cache in the remaining NAS controller is written to storage and the NAS controller continues to write directly to storage, which protects against data loss.

NAS volume thin clones: Clone NAS volumes without the need to physically copy the data set.

Deduplication: Policy-driven post-process deduplication technology that eliminates redundant data at rest.

Compression: LZPS (Level Zero Processing System) compression algorithm that intelligently shrinks data at rest.

Metadata protection: Metadata is constantly check-summed and stored in multiple locations for data consistency and protection.

Replication: NAS-volume level, snapshot-based, asynchronous replication to enable disaster recovery.

NDMP backups: Snapshot-based, asynchronous backup (remote NDMP) over Ethernet to certified third-party backup solutions.

Anti-virus scanning: CIFS anti-virus scanning by deploying certified third-party ICAP-enabled anti-virus solutions.

Monitoring: Built-in performance monitoring and capacity planning.
Overview Of PowerVault FluidFS Systems
A PowerVault FluidFS system consists of one or two PowerVault NX36x0 appliances connected and
configured to utilize a PowerVault MD storage array and provide NAS services. PowerVault FluidFS
systems can start with one NX36x0 appliance, and expand with another (identical) appliance as required.
NOTE: To identify the physical hardware displayed in PowerVault FluidFS Manager, match the
Service Tag shown in FluidFS Manager with the Service Tag printed on a sticker on the front right
side of the NAS appliance.
All NAS appliances in a FluidFS system must use the same type of NAS controller; mixing 1 GbE and 10 GbE
appliances or controllers is not supported. The following appliances are supported:
•NX3500 (legacy): 1 Gb Ethernet client connectivity with 1 Gb iSCSI back-end connectivity to the MD
system(s)
•NX3600: 1 Gb Ethernet client connectivity with 1 Gb iSCSI back-end connectivity to the MD
system(s)
•NX3610: 10 Gb Ethernet client connectivity with 10 Gb Ethernet iSCSI back-end connectivity to the
MD system(s)
NAS appliance numbers start at 1 and NAS controller numbers start at 0. So, NAS Appliance 1 contains
NAS Controllers 0 and 1, and NAS Appliance 2 contains NAS Controllers 2 and 3.
Internal Cache
Each NAS controller has an internal cache that provides fast reads and reliable writes.
Internal Backup Power Supply
Each NAS controller is equipped with an internal Backup Power Supply (BPS) that protects data during a
power failure. The BPS units provide continuous power to the NAS controllers for a minimum of 5
minutes and have sufficient battery power to allow the NAS controllers to write all data from the cache to
non‐volatile internal storage before they shut down.
The NAS controllers regularly monitor the BPS battery status for the minimum level of power required for
normal operation. To ensure that the BPS battery status is accurate, the NAS controllers routinely
undergo battery calibration cycles. During a battery calibration cycle, the BPS goes through charge and
discharge cycles; therefore, battery error events during this process are expected. A battery calibration
cycle takes up to seven days to complete. If a NAS controller starts a battery calibration cycle, and the
peer NAS controller BPS has failed, the NAS controllers enter journaling mode, which might impact
performance. Therefore, Dell recommends repairing a failed BPS as soon as possible.
Internal Storage
Each NAS controller has an internal storage device that is used only for the FluidFS images and as a cache
storage offload location in the event of a power failure. The internal hard drive does not provide the NAS
storage capacity.
PowerVault FluidFS Architecture
PowerVault FluidFS scale‐out NAS consists of:
•Hardware:
– FluidFS appliance(s)
– MD system
•NAS appliance network interface connections:
– Client/LAN network
– SAN network
– Internal network
The following figure shows an overview of the PowerVault FluidFS architecture:
The client/LAN network is used for client access to the CIFS shares and NFS exports. It is also used by the
storage administrator to manage the FluidFS system. The FluidFS system is assigned one or more virtual
IP addresses (client VIPs) that allow clients to access the FluidFS system as a single entity. The client VIP
also enables load balancing between NAS controllers, and ensures failover in the event of a NAS
controller failure.
If client access to the FluidFS system is not through a router (in other words, the network has a “flat”
topology), define one client VIP. Otherwise, define a client VIP for each client interface port per NAS
controller. If you deploy FluidFS in an LACP environment, please contact Dell Support to get more
information about the optimal number of VIPs for your system.
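For example, in a routed network, a single-appliance system with two NAS controllers and (hypothetically) two client interface ports per controller would be assigned four client VIPs (2 controllers x 2 ports per controller); a flat network would need only one client VIP.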
MD System
The PowerVault MD array provides the storage capacity for NAS; the NX36x0 cannot be used as a stand‐
alone NAS appliance. The MD array eliminates the need for separate storage capacity for block and file
storage.
SAN Network
The NX36x0 shares a back‐end infrastructure with the MD array. The SAN network connects the NX36x0
to the MD system and carries the block level traffic. The NX36x0 communicates with the MD system
using the iSCSI protocol.
Internal Network
The internal network is used for communication between NAS controllers. Each of the NAS controllers in
the FluidFS system must have access to all other NAS controllers in the FluidFS system to achieve the
following goals:
•Provide connectivity for FluidFS system creation
•Act as a heartbeat mechanism to maintain high availability
•Enable internal data transfer between NAS controllers
•Enable cache mirroring between NAS controllers
•Enable balanced client distribution between NAS controllers
Data Caching And Redundancy
New or modified file blocks are first written to a local cache, and then immediately mirrored to the peer
NAS controller (mirroring mode). Data caching provides high performance, while cache mirroring
between peer NAS controllers ensures data redundancy. Cache data is ultimately (and asynchronously)
transferred to permanent storage using optimized data‐placement schemes.
When cache mirroring is not possible, such as during a single NAS controller failure or when the BPS
battery status is low, NAS controllers write directly to storage (journaling mode).
File Metadata Protection
File metadata includes information such as name, owner, permissions, date created, date modified, and a
soft link to the file’s storage location.
The FluidFS system has several built‐in measures to store and protect file metadata:
•Metadata is managed through a separate caching scheme and replicated on two separate volumes.
•Metadata is check-summed to protect file and directory structure.
•All metadata updates are journaled to storage to avoid potential corruption or data loss in the event of
a power failure.
•There is a background process that continuously checks and fixes incorrect checksums.
High Availability And Load Balancing
To optimize availability and performance, client connections are load balanced across the available NAS
controllers. Both NAS controllers in a NAS appliance operate simultaneously. If one NAS controller fails,
clients are automatically failed over to the remaining controllers. When failover occurs, some CIFS clients
reconnect automatically, while in other cases, a CIFS application might fail, and the user must restart it.
NFS clients experience a temporary pause during failover, but client network traffic resumes
automatically.
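Because client traffic resumes automatically after failover, NFS clients are usually mounted with the standard hard mount behavior so that I/O pauses and retries during the controller transition instead of returning errors. A minimal sketch using the same hypothetical VIP and export names as earlier (these are standard Linux NFS mount options, not FluidFS-specific requirements):

  mount -t nfs -o vers=3,hard,proto=tcp 192.168.10.50:/projects /mnt/projects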
Failure Scenarios
The FluidFS system can tolerate a NAS controller failure without impact to data availability and without
data loss. If one NAS controller becomes unavailable (for example, because the NAS controller failed, is
turned off, or is disconnected from the network), the NAS appliance status is degraded. Although the
FluidFS system is still operational and data is available to clients, the administrator cannot perform most
configuration modifications and performance might decrease because data is no longer cached.
The impact to data availability and data integrity following a multiple NAS controller failure depends on
the circumstances of the failure scenario. Dell recommends detaching a failed NAS controller as soon as
possible, so that it can be safely taken offline for service. Data access remains intact as long as one of the
NAS controllers in each NAS appliance in a FluidFS system is functional.
The following table summarizes the impact to data availability and data integrity of various failure
scenarios.
Single NAS controller failure: The system remains available but degraded; data integrity is unaffected. The peer NAS controller enters journaling mode, and the failed NAS controller can be replaced while keeping the file system online.

Sequential dual-NAS controller failure in a single NAS appliance system: The system is unavailable; data integrity is unaffected. Sequential failure assumes that there is enough time between NAS controller failures to write all data from the cache to disk (MD system or non-volatile internal storage).

Simultaneous dual-NAS controller failure in a single NAS appliance system: The system is unavailable and data in the cache is lost; data that has not been written to disk is lost.

Sequential dual-NAS controller failure in a multiple NAS appliance system, same NAS appliance: The system is unavailable; data integrity is unaffected. Sequential failure assumes that there is enough time between NAS controller failures to write all data from the cache to disk (MD system or non-volatile internal storage).

Simultaneous dual-NAS controller failure in a multiple NAS appliance system, same NAS appliance: The system is unavailable and data in the cache is lost; data that has not been written to disk is lost.

Dual-NAS controller failure in a multiple NAS appliance system, separate NAS appliances: The system remains available but degraded; data integrity is unaffected. The peer NAS controller enters journaling mode, and the failed NAS controller can be replaced while keeping the file system online.
Ports Used by the FluidFS System
The FluidFS system uses the ports listed in the following table. You might need to adjust your firewall
settings to allow the traffic on these ports. Some ports might not be used, depending on which features
are enabled.
Required Ports
The following table summarizes ports that are required for all FluidFS systems.
22 (TCP): SSH
53 (TCP): DNS
80 (TCP): Internal system use
111 (TCP and UDP): portmap
427 (TCP and UDP): SLP
443 (TCP): Internal system use
445 (TCP and UDP): CIFS/SMB
2049–2049+(domain number - 1) (TCP and UDP): NFS
4000–4000+(domain number - 1) (TCP and UDP): statd
4050–4050+(domain number - 1) (TCP and UDP): NLM (lock manager)
5001–5001+(domain number - 1) (TCP and UDP): mount
5051–5051+(domain number - 1) (TCP and UDP): quota
44421 (TCP): FTP
44430–44439 (TCP): FTP (Passive)
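For example, if a Linux-based firewall sits between the client network and the FluidFS system, the relevant ports above must be allowed through it. A minimal sketch using firewalld, assuming only SMB and basic NFS access is needed (adjust the list to the protocols and features you actually use, including the per-domain port ranges):

  firewall-cmd --permanent --add-port=445/tcp --add-port=445/udp
  firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
  firewall-cmd --permanent --add-port=2049/tcp --add-port=2049/udp
  firewall-cmd --reload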
Feature-Specific Ports
The following table summarizes ports that are required, depending on enabled features.
88 (TCP and UDP): Kerberos
123 (UDP): NTP
135 (TCP): AD RPC
138 (UDP): NetBIOS
139 (TCP): NetBIOS
161 (UDP): SNMP agent
162 (TCP): SNMP trap
389 (TCP and UDP): LDAP
464 (TCP and UDP): Kerberos v5
543 (TCP): Kerberos login
544 (TCP): Kerberos remote shell
636 (TCP): LDAP over TLS/SSL
711 (UDP): NIS
714 (TCP): NIS
749 (TCP and UDP): Kerberos administration
1344 (TCP): Anti-virus (ICAP)
3268 (TCP): LDAP global catalog
3269 (TCP): LDAP global catalog over TLS/SSL
8004 (TCP): ScanEngine server WebUI (AV host)
9445 (TCP): Replication trust setup
10000 (TCP): NDMP
10550–10551, 10560–10568 (TCP): Replication
Other Information You May Need
WARNING: See the safety and regulatory information that shipped with your system. Warranty
information may be included within this document or as a separate document.
•The Getting Started Guide provides an overview of setting up your system and technical
specifications.
•The Owner's Manual provides information about solution features and describes how to troubleshoot
the system and install or replace system components.
•The rack documentation included with your rack solution describes how to install your system into a
rack, if required.
•The System Placemat provides information on how to set up the hardware and install the software on
your NAS solution.
•Any media that ships with your system that provides documentation and tools for configuring and
managing your system, including those pertaining to the operating system, system management
software, system updates, and system components that you purchased with your system.
•For the full name of an abbreviation or acronym used in this document, see the Glossary at dell.com/support/manuals.
NOTE: Always check for updates on dell.com/support/manuals and read the updates first because
they often supersede information in other documents.
2
Upgrading to FluidFS Version 3
Supported Upgrade Paths
To upgrade to FluidFS version 3.0, the FluidFS cluster must be at FluidFS version 2.0.7630 or later. If the
FluidFS cluster is at a pre‐2.0.7630 version, upgrade to version 2.0.7680 prior to upgrading to version 3.0.
The following table summarizes the supported upgrade paths.
2.0.7680: Upgrade to version 3.0.x supported
2.0.7630: Upgrade to version 3.0.x supported
2.0.7170: Upgrade not supported
2.0.6940: Upgrade not supported
2.0.6730: Upgrade not supported
2.0.6110: Upgrade not supported
FluidFS V2 and FluidFS V3 Feature and Configuration
Comparison
This section summarizes functionality differences between FluidFS version 2.0 and 3.0. Review the
functionality comparison before upgrading FluidFS to version 3.0.
NOTE: The Version 3.0 entries in the comparison below indicate changes that must be made, in some
cases, to accommodate version 3.0 configuration options.
Management interface
Version 2.0: The NAS Manager.
Version 3.0: The user interface has been updated.

Management connections
Version 2.0: The FluidFS cluster is managed using a dedicated Management VIP.
Version 3.0: Version 3.0 does not use a Management VIP; the FluidFS cluster can be managed using any client VIP. During the upgrade, the Management VIP from version 2.0 is converted to a client VIP.

Default management account
Version 2.0: The default administrator account is named admin.
Version 3.0: The default administrator account is named Administrator. During the upgrade, the admin account from version 2.0 is deleted and the CIFS Administrator account becomes the version 3.0 Administrator account. During the upgrade, you will be prompted to reset the CIFS Administrator password if you have not reset it within the last 24 hours. Make sure to remember this password because it is required to manage the FluidFS cluster in version 3.0.

User-defined management accounts
Version 2.0: Only local administrator accounts can be created.
Version 3.0: You can create local administrator accounts or create administrator accounts for remote users (members of Active Directory, LDAP, or NIS repositories). During the upgrade, any user-defined administrator accounts from version 2.0 are deleted. Workaround: use one of the following options: convert the administrator accounts to local users before upgrading and convert them back to administrator accounts after upgrading, or re-create the administrator accounts after the upgrade.

Command Line Interface (CLI) access and commands
Version 2.0: Administrator accounts log into the CLI directly.
Version 3.0: Version 3.0 introduces a cli account that must be used in conjunction with an administrator account to log into the CLI. In addition, the command set is significantly different in version 3.0.

Dell Technical Support Services remote troubleshooting account
Version 2.0: The remote troubleshooting account is named fse (field service engineer).
Version 3.0: The remote troubleshooting account is named support. During the upgrade, the fse account from version 2.0 is deleted. After the upgrade, the support account is disabled by default.

FluidFS cluster name and NetBIOS name
Version 2.0: The FluidFS cluster name and NetBIOS name do not have to match. The NetBIOS name can begin with a digit.
Version 3.0: The FluidFS cluster name is used as the NetBIOS name. Before upgrading, the FluidFS cluster name and NetBIOS name must be changed to match. Also, the FluidFS cluster name cannot be longer than 15 characters and cannot begin with a digit.

Data reduction overhead
Version 2.0: Version 2.0 does not include a data reduction feature.
Version 3.0: Version 3.0 introduces a data reduction feature. If data reduction is enabled, the system deducts an additional 100GB per NAS appliance from the NAS pool for data reduction processing. This is in addition to the amount of space that the system deducts from the NAS pool for internal use.

Anti-virus scanning
Version 2.0: You can specify which file types to scan.
Version 3.0: You cannot specify which file types to scan; all files smaller than the specified file size threshold are scanned. You can specify whether to allow or deny access to files larger than the file size threshold. As in version 2.0, you can specify file types and directories to exclude from anti-virus scanning.

Supported NFS protocol versions
Version 2.0: Version 2.0 supports NFS protocol version 3.
Version 3.0: Version 3.0 supports NFS protocol versions 3 and 4.

Supported SMB protocol versions
Version 2.0: Version 2.0 supports SMB protocol version 1.0.
Version 3.0: Version 3.0 supports SMB protocol versions 1.0, 2.0, and 2.1.

CIFS home shares
Version 2.0: Clients can access CIFS home shares in two ways: \\<client_VIP_or_name>\<path_prefix>\<username> or \\<client_VIP_or_name>\homes. Both access methods point to the same folder.
Version 3.0: Version 3.0 does not include the "homes" access method. After the upgrade, the "homes" share will not be present, and clients will need to use the "username" access method instead. If you have a policy that mounts the \\<client_VIP_or_name>\homes share when client systems start, you must change the policy to mount the \\<client_VIP_or_name>\<path_prefix>\<username> share.

Local user names
Version 2.0: A period can be used as the last character of a local user name.
Version 3.0: A period cannot be used as the last character of a local user name. Before upgrading, delete local user names that have a period as the last character and re-create the accounts with a different name.

Local users and local groups UID/GID range
Version 2.0: A unique UID (user ID) or GID (group ID) can be configured for local users and local groups.
Version 3.0: The UID/GID range for local users and local groups is 1001 to 100,000. There is no way to configure or determine the UID/GID of local users and local groups; this information is internal to the FluidFS cluster. During the upgrade, any existing local users and local groups from version 2.0 with a UID/GID that is outside the version 3.0 UID/GID range will remain unchanged. Local users and local groups created after the upgrade will use the version 3.0 UID/GID range.

Guest account mapping policy
Version 2.0: By default, unmapped users are mapped to the guest account, which allows a guest account to access a file if the CIFS share allows guest access.
Version 3.0: Unmapped users cannot access any CIFS share, regardless of whether the CIFS share allows guest access. Guest access is enabled automatically after the upgrade only if there are guest users already defined for any CIFS shares in version 2.0.

NDMP client port
Version 2.0: The NDMP client port must be in the range 1–65536.
Version 3.0: The NDMP client port must be in the range 10000–10100. Before upgrading, the NDMP client port must be changed to be in the range 10000–10100. You must also make the reciprocal change on the DMA servers.

Replication ports
Version 2.0: TCP ports 10560–10568 and 26 are used for replication.
Version 3.0: TCP ports 10550–10551 and 10560–10568 are used for replication.

Snapshot schedules
Version 2.0: Snapshot schedules can be disabled.
Version 3.0: Snapshot schedules cannot be disabled. During the upgrade, disabled snapshot schedules from version 2.0 are deleted.

Internal subnet
Version 2.0: The internal (interconnect) subnet can be changed from a Class C subnet during or after deployment.
Version 3.0: The internal subnet must be a Class C subnet. Before upgrading, the internal subnet must be changed to a Class C subnet; otherwise, the service pack installation will fail with the following message: "Please allocate a new C-class subnet for FluidFS Internal Network, run the following command, and then repeat the upgrade: system networking subnets add NEWINTER Primary 255.255.255.0 -PrivateIPs x.y.z.1,x.y.z.2 (where x.y.z.* is the new subnet)". NOTE: If you receive this message while attempting to upgrade, obtain a Class C subnet that is not used in your network, run the command to set the internal subnet (for example: system networking subnets add NEWINTER Primary 255.255.255.0 -PrivateIPs 172.41.64.1,172.41.64.2), and retry the service pack installation.

Port for management and remote KVM
Version 2.0: Only subnet-level isolation of management traffic is supported.
Version 3.0: The following features are available: physical isolation of management traffic, and remote KVM that allows you to view and manage the NAS controller console remotely over a network. These features are implemented using the Ethernet port located on the lower right side of the back panel of a NAS controller.

1GbE to 10GbE client connectivity upgrade
Version 2.0: Version 2.0 does not support upgrading an appliance from 1GbE client connectivity to 10GbE client connectivity.
Version 3.0: Version 3.0 introduces support for upgrading an appliance from 1GbE client connectivity to 10GbE client connectivity. Upgrades must be performed by a Dell installer or certified business partner.
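For example, a Windows logon script that previously mapped the version 2.0 "homes" share would need to be updated to the per-user path after the upgrade. A minimal sketch, using a hypothetical client VIP of 192.168.10.50 and a path prefix of users (substitute your own VIP and configured path prefix; %USERNAME% is the standard Windows environment variable for the logged-on user):

  net use H: /delete
  net use H: \\192.168.10.50\users\%USERNAME%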
Performing Pre-Upgrade Tasks
Complete the following tasks before upgrading.
•The FluidFS cluster must be at FluidFS version 2.0.7630 or later before upgrading to FluidFS version
3.0.
•When upgrading to version 3, the "admin" user is no longer available; the local "Administrator" account
must be used instead. Make sure that you know its password. The Administrator password must be
changed within the 24 hours before the upgrade. To change the Administrator password, log in to the
CLI (using SSH) and run the following command: system authentication local-accounts users change-password Administrator
CAUTION: The password you set will be required to manage the FluidFS cluster after
upgrading to version 3.0. Make sure that you remember this password or record it in a secure
location.
•Change the FluidFS cluster name to match the NetBIOS name, if needed. Also, ensure that the FluidFS
cluster name is no longer than 15 characters and does not begin with a digit.
•Change the internal subnet to a Class C subnet, if needed.
•Convert user‐defined administrator accounts to local users, if needed.
•Change policies that mount the \\<client_VIP_or_name>\homes share to mount the \
\<client_VIP_or_name>\<path_prefix>\<username> share, if needed.
•Delete local user names that have a period as the last character and re‐create the accounts with a
different name, if needed.
•Change the NDMP client port to be in the range 10000–10100, if needed. You must also make the
reciprocal change on the DMA servers.
•Stop all NDMP backup sessions, if needed. If an NDMP backup session is in progress during the
upgrade, the temporary NDMP snapshot is left in place.
•Open additional ports on your firewall to allow replication between replication partners, if needed.
•Remove parentheses characters from the Comment field for CIFS shares and NFS exports.
•Ensure the NAS volumes do not have the following names (these names are reserved for internal
FluidFS cluster functions):
– .
– ..
– .snapshots
– acl_stream
– cifs
– int_mnt
– unified
– Any name starting with locker_
•Ensure that at least one of the defined DNS servers is accessible using ping and dig (DNS lookup
utility); example commands are shown after this list.
•Ensure that the Active Directory domain controller is accessible using ping and that the FluidFS cluster
system time is in sync with the Active Directory time.
•Ensure that the NAS controllers are running, attached, and accessible using ping, SSH, and rsync.
•Although the minimum requirement to upgrade is that at least one NAS controller in each NAS
appliance must be running, Dell recommends ensuring that all NAS controllers are running before
upgrading.
•Ensure that the FluidFS cluster system status shows running.
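The reachability checks listed above can be run from any management station before starting the upgrade. A minimal sketch, using a hypothetical DNS server at 192.168.1.10, an Active Directory domain controller named dc1.example.com, and a NAS controller management address of 192.168.1.20 (substitute your own addresses and names; the SSH login shown uses the version 2.0 default admin account):

  ping -c 3 192.168.1.10
  dig @192.168.1.10 dc1.example.com
  ping -c 3 dc1.example.com
  ssh admin@192.168.1.20

To change the Administrator password within 24 hours of the upgrade, run the FluidFS CLI command noted earlier in this list:

  system authentication local-accounts users change-password Administrator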
Upgrading from FluidFS Version 2.0 to 3.0
Use the following procedure to upgrade a Dell PowerVault NX3500/NX3600/NX3610 FluidFS cluster from
FluidFS version 2.0 to 3.0.
NOTE:
•Perform pre‐upgrade tasks.
•Installing a service pack causes the NAS controllers to reboot during the installation process.
This might cause interruptions in CIFS and NFS client connections. Therefore, Dell recommends
scheduling a maintenance window to perform service pack installations.
•Contact Dell Technical Support Services to obtain the latest FluidFS version 3.0 service pack. Do
not modify the service pack filename.
1.Log in to the FluidFS v2 Manager application using a browser and go to Cluster Management →
Cluster Management → Maintenance → Service Packs.
2.Browse to the ISO location and click Upload.
The system starts uploading the service pack file.
3.When the file is uploaded, click Install.
The upgrade process starts and may take an hour or more. The upgrade progress is displayed on the
screen.
4.During the upgrade, you will be notified that a node has been rebooted. After receiving this message,
wait an additional 15 minutes so that the reboot of both nodes and the Final Sync are completed.
5.Log in again with the Administrator user (the admin user is no longer available). The new FluidFS
version 3 Manager UI is displayed.
6.Make sure that the system is fully operational and all components are in Optimal status before you
start working with it.
3
FluidFS Manager User Interface Overview
FluidFS Manager Layout
The following image and legend describe the layout of the FluidFS Manager.
Figure 2. FluidFS Manager Web User Interface Layout
FluidFS Manager Sections
❶Left-hand tabs, used to select a view topic.
❷Upper tabs, used to select a view subtopic.
❸Main view area, containing one or more panes. Each pane refers to a different FluidFS
element or configuration setting, which can be viewed/modified/deleted.
❹The event log, which shows a sortable table of event messages.
❺The dashboard, which displays various system statistics, statuses and services at a glance.
Navigating Views
A specific FluidFS Manager view is displayed when you select a topic, by clicking the topic tab on the left,
and select a subtopic, by clicking a subtopic tab on top.
For example, to display the System\SNMP view, click the System tab on the left and the SNMP tab on
top.
The FluidFS elements and settings related to the view you selected are displayed in the main view area.
Figure 3. Navigating Views in FluidFS Manager
Working With Panes, Menus, And Dialogs
Showing And Hiding Panes
Panes within the main view area display FluidFS elements and settings. A pane's contents can be hidden
and displayed again by clicking the corresponding hide and show buttons in the pane.
Opening A Pane Menu
To modify a setting or add an element to a pane, click the pane's menu button and select the desired menu option.
Opening A Table Element Menu
Some panes display a table of elements, each of which may be edited independently. To modify or delete
an element in a table, click the menu button in the row of the element you want to change, then select the
desired menu option.