
Red Hat Global File System 5.2
Global_File_System
ISBN: N/A
Publication date: May 2008
This book provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System) for Red Hat Enterprise Linux 5.2.
Global File System: Red Hat Global File System
Copyright © 2008 Red Hat, Inc. This material may only be distributed subject to the terms and conditions set forth in the Open Publication License, V1.0 or later with the restrictions noted below (the latest version of the OPL is presently available at http://www.opencontent.org/openpub/).
Distribution of substantively modified versions of this document is prohibited without the explicit permission of the copyright holder.
Distribution of the work or derivative of the work in any standard (paper) book form for commercial purposes is prohibited unless prior permission is obtained from the copyright holder.
Red Hat and the Red Hat "Shadow Man" logo are registered trademarks of Red Hat, Inc. in the United States and other countries.
All other trademarks referenced herein are the property of their respective owners. The GPG fingerprint of the security@redhat.com key is: CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E
1801 Varsity Drive
Raleigh, NC 27606-2072
USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588
Research Triangle Park, NC 27709
USA
Introduction .............................................................................................................. vii
1. Audience ...................................................................................................... vii
2. Related Documentation ................................................................................. vii
3. Document Conventions ................................................................................ viii
4. Feedback ......................................................................................................ix
1. GFS Overview ....................................................................................................... 1
1. New and Changed Features ........................................................................... 2
2. Performance, Scalability, and Economy ........................................................... 2
2.1. Superior Performance and Scalability ................................................... 2
2.2. Economy and Performance .................................................................. 3
3. GFS Software Components ............................................................................ 5
4. Before Setting Up GFS ................................................................................... 6
2. Getting Started ...................................................................................................... 7
1. Prerequisite Tasks ......................................................................................... 7
2. Initial Setup Tasks .......................................................................................... 7
3. Managing GFS ...................................................................................................... 9
1. Creating a File System ................................................................................... 9
2. Mounting a File System .................................................................................13
3. Unmounting a File System .............................................................................16
4. Displaying GFS Tunable Parameters ..............................................................16
5. GFS Quota Management ...............................................................................18
5.1. Setting Quotas ...................................................................................18
5.2. Displaying Quota Limits and Usage .....................................................19
5.3. Synchronizing Quotas ........................................................................21
5.4. Disabling/Enabling Quota Enforcement ................................................22
5.5. Disabling/Enabling Quota Accounting ..................................................23
6. Growing a File System ..................................................................................25
7. Adding Journals to a File System ...................................................................27
8. Direct I/O ......................................................................................................29
8.1. O_DIRECT ...........................................................................................30
8.2. GFS File Attribute ...............................................................................30
8.3. GFS Directory Attribute .......................................................................31
9. Data Journaling .............................................................................................32
10. Configuring atime Updates ..........................................................................33
10.1. Mount with noatime ..........................................................................34
10.2. Tune GFS atime Quantum ...............................................................35
11. Suspending Activity on a File System ...........................................................35
12. Displaying Extended GFS Information and Statistics ......................................36
12.1. Displaying GFS Space Usage ...........................................................36
12.2. Displaying GFS Counters ..................................................................37
12.3. Displaying Extended Status ..............................................................40
13. Repairing a File System ...............................................................................42
14. Context-Dependent Path Names ..................................................................44
Index .......................................................................................................................47
Introduction
The Global File System Configuration and Administration document provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System). A GFS file system can be implemented in a standalone system or as part of a cluster configuration. For information about Red Hat Cluster Suite refer to Red Hat Cluster Suite Overview and Configuring and Managing a Red Hat Cluster.
HTML and PDF versions of all the official Red Hat Enterprise Linux manuals and release notes are available online at http://www.redhat.com/docs/.
1. Audience
This book is intended primarily for Linux system administrators who are familiar with the following activities:
• Linux system administration procedures, including kernel configuration
• Installation and configuration of shared storage networks, such as Fibre Channel SANs
2. Related Documentation
For more information about using Red Hat Enterprise Linux, refer to the following resources:
Red Hat Enterprise Linux Installation Guide — Provides information regarding installation of Red Hat Enterprise Linux 5.
Red Hat Enterprise Linux Deployment Guide — Provides information regarding the deployment, configuration and administration of Red Hat Enterprise Linux 5.
For more information about Red Hat Cluster Suite for Red Hat Enterprise Linux 5, refer to the following resources:
Red Hat Cluster Suite Overview — Provides a high level overview of the Red Hat Cluster Suite.
Configuring and Managing a Red Hat Cluster — Provides information about installing, configuring and managing Red Hat Cluster components.
LVM Administrator's Guide: Configuration and Administration — Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment.
Using Device-Mapper Multipath — Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 5.
Using GNBD with Global File System — Provides an overview on using Global Network Block Device (GNBD) with Red Hat GFS.
Linux Virtual Server Administration — Provides information on configuring high-performance systems and services with the Linux Virtual Server (LVS).
Red Hat Cluster Suite Release Notes — Provides information about the current release of Red Hat Cluster Suite.
Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML, PDF, and RPM versions on the Red Hat Enterprise Linux Documentation CD and online at
http://www.redhat.com/docs/.
3. Document Conventions
Certain words in this manual are represented in different fonts, styles, and weights. This highlighting indicates that the word is part of a specific category. The categories include the following:
Courier font
Courier font represents commands, file names and paths, and prompts. When shown as below, it indicates computer output:
Desktop about.html logs paulwesterberg.png Mail backupfiles mail reports
bold Courier font
Bold Courier font represents text that you are to type, such as: service jonas start
If you have to run a command as root, the root prompt (#) precedes the command:
# gconftool-2
italic Courier font
Italic Courier font represents a variable, such as an installation directory:
install_dir/bin/
bold font
Bold font represents application programs and text found on a graphical interface. When shown like this: OK, it indicates a button on a graphical application interface.
Additionally, the manual uses different strategies to draw your attention to pieces of information. In order of how critical the information is to you, these items are marked as follows:
Note
A note is typically information that you need to understand the behavior of the system.
Tip
A tip is typically an alternative way of performing a task.
Important
Important information is necessary, but possibly unexpected, such as a configuration change that will not persist after a reboot.
Caution
A caution indicates an act that would violate your support agreement, such as recompiling the kernel.
Warning
A warning indicates potential data loss, as may happen when tuning hardware for maximum performance.
4. Feedback
If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla (http://bugzilla.redhat.com/bugzilla/) against the component Documentation-cluster.
Be sure to mention the manual's identifier:
Bugzilla component: Documentation-cluster
Book identifier: Global_File_System(EN)-5.2 (2008-05-21T15:10)
By mentioning this manual's identifier, we know exactly which version of the guide you have. If you have a suggestion for improving the documentation, try to be as specific as possible. If
you have found an error, please include the section number and some of the surrounding text so we can find it easily.
Chapter 1. GFS Overview
The Red Hat GFS file system is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer). A GFS file system can be implemented in a standalone system or as part of a cluster configuration. When implemented as a cluster file system, GFS employs distributed metadata and multiple journals.
A GFS file system can be created on an LVM logical volume. A logical volume is an aggregation of underlying block devices that appears as a single logical device. For information on the LVM volume manager, see the LVM Administrator's Guide.
GFS is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. However, the current supported maximum size of a GFS file system is 25 TB. If your system requires GFS file systems larger than 25 TB, contact your Red Hat service representative.
When determining the size of your file system, you should consider your recovery needs. Running the fsck command on a very large file system can take a long time and consume a large amount of memory. Additionally, in the event of a disk or disk-subsystem failure, recovery time is limited by the speed of your backup media.
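Note that a GFS file system is checked and repaired with the gfs_fsck command rather than the generic fsck; repairing a file system is described later in this book. For example, a verbose check could be run with the following command (the device name here is a placeholder only):
gfs_fsck -v /dev/vg01/lvol0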
When configured in a Red Hat Cluster Suite, Red Hat GFS nodes can be configured and managed with Red Hat Cluster Suite configuration and management tools. Red Hat GFS then provides data sharing among GFS nodes in a Red Hat cluster, with a single, consistent view of the file system name space across the GFS nodes. This allows processes on different nodes to share GFS files in the same way that processes on the same node can share files on a local file system, with no discernible difference. For information about Red Hat Cluster Suite refer to Configuring and Managing a Red Hat Cluster.
LVM logical volumes in a Red Hat Cluster Suite are managed with CLVM, which is a cluster-wide implementation of LVM, enabled by the CLVM daemon, clvmd, running in a Red Hat Cluster Suite cluster. The daemon makes it possible to use LVM to manage logical volumes across a cluster, allowing all nodes in the cluster to share the logical volumes. For information on the LVM volume manager, see the LVM Administrator's Guide.
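As background, using CLVM means that clustered locking is enabled in the LVM configuration and the clvmd daemon is running on each node before clustered logical volumes are created. The following commands are a minimal sketch of how this is commonly done on a Red Hat Enterprise Linux 5 node; refer to the LVM Administrator's Guide and Configuring and Managing a Red Hat Cluster for the supported procedure:
lvmconf --enable-cluster
service clvmd start
chkconfig clvmd on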
This chapter provides some basic, abbreviated information as background to help you understand GFS. It contains the following sections:
Section 1, “New and Changed Features”
Section 2, “Performance, Scalability, and Economy”
Section 3, “GFS Software Components”
Section 4, “Before Setting Up GFS”
1. New and Changed Features
This section lists new and changed features included with the initial release of Red Hat Enterprise Linux 5.
• GULM (Grand Unified Lock Manager) is not supported in Red Hat Enterprise Linux 5. If your GFS file systems use the GULM lock manager, you must convert the file systems to use the DLM lock manager. This is a two-part process.
• While running Red Hat Enterprise Linux 4, convert your GFS file systems to use the DLM lock manager.
• Upgrade your operating system to Red Hat Enterprise Linux 5, converting the lock manager to DLM when you do.
For information on upgrading to Red Hat Enterprise Linux 5 and converting GFS file systems to use the DLM lock manager, see Configuring and Managing a Red Hat Cluster.
• Documentation for Red Hat Cluster Suite for Red Hat Enterprise Linux 5 has been expanded and reorganized. For information on the available documents, see Section 2, “Related
Documentation”.
2. Performance, Scalability, and Economy
You can deploy GFS in a variety of configurations to suit your needs for performance, scalability, and economy. For superior performance and scalability, you can deploy GFS in a cluster that is connected directly to a SAN. For more economical needs, you can deploy GFS in a cluster that is connected to a LAN with servers that use GNBD (Global Network Block Device).
The following sections provide examples of how GFS can be deployed to suit your needs for performance, scalability, and economy:
Section 2.1, “Superior Performance and Scalability”
Section 2.2, “Economy and Performance”
Note
The deployment examples in this chapter reflect basic configurations; your needs might require a combination of configurations shown in the examples.
2.1. Superior Performance and Scalability
You can obtain the highest shared-file performance when applications access storage directly.
The GFS SAN configuration in Figure 1.1, “GFS with a SAN” provides superior file performance for shared files and file systems. Linux applications run directly on GFS nodes. Without file protocols or storage servers to slow data access, performance is similar to individual Linux servers with directly connected storage; yet, each GFS application node has equal access to all data files. GFS supports up to 125 GFS nodes.
Figure 1.1. GFS with a SAN
2.2. Economy and Performance
Multiple Linux client applications on a LAN can share the same SAN-based data as shown in
Figure 1.2, “GFS and GNBD with a SAN”. SAN block storage is presented to network clients as
block storage devices by GNBD servers. From the perspective of a client application, storage is accessed as if it were directly attached to the server in which the application is running. Stored data is actually on the SAN. Storage devices and data can be equally shared by network client applications. File locking and sharing functions are handled by GFS for each network client.
Note
Clients implementing ext2 and ext3 file systems can be configured to access their own dedicated slice of SAN storage.
Figure 1.2. GFS and GNBD with a SAN
Figure 1.3, “GFS and GNBD with Directly Connected Storage” shows how Linux client
applications can take advantage of an existing Ethernet topology to gain shared access to all block storage devices. Client data files and file systems can be shared with GFS on each client. Application failover can be fully automated with Red Hat Cluster Suite.
Figure 1.3. GFS and GNBD with Directly Connected Storage
3. GFS Software Components
Table 1.1, “GFS Software Subsystem Components” summarizes the GFS software
components.
gfs.ko
Kernel module that implements the GFS file system and is loaded on GFS cluster nodes.

lock_dlm.ko
A lock module that implements DLM locking for GFS. It plugs into the lock harness, lock_harness.ko, and communicates with the DLM lock manager in Red Hat Cluster Suite.

lock_nolock.ko
A lock module for use when GFS is used as a local file system only. It plugs into the lock harness, lock_harness.ko, and provides local locking.

Table 1.1. GFS Software Subsystem Components
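As a quick, optional check, you can see which of these modules are currently loaded on a node with lsmod; the modules are normally loaded automatically when the cluster infrastructure starts or when a GFS file system is mounted:
lsmod | grep gfs
lsmod | grep lock_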
4. Before Setting Up GFS
Before you install and set up GFS, note the following key characteristics of your GFS file systems:
GFS nodes
Determine which nodes in the Red Hat Cluster Suite will mount the GFS file systems.
Number of file systems
Determine how many GFS file systems to create initially. (More file systems can be added later.)
File system name
Determine a unique name for each file system. Each file system name is required in the form of a parameter variable. For example, this book uses file system names mydata1 and
mydata2 in some example procedures.
File system size
GFS is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. However, the current supported maximum size of a GFS file system is 25 TB. If your system requires GFS file systems larger than 25 TB, contact your Red Hat service representative.
When determining the size of your file system, you should consider your recovery needs. Running the fsck command on a very large file system can take a long time and consume a large amount of memory. Additionally, in the event of a disk or disk-subsystem failure, recovery time is limited by the speed of your backup media.
Journals
Determine the number of journals for your GFS file systems. One journal is required for each node that mounts a GFS file system. Make sure to account for additional journals needed for future expansion, as you cannot add journals dynamically to a GFS file system.
GNBD server nodes
If you are using GNBD, determine how many GNBD server nodes are needed. Note the hostname and IP address of each GNBD server node for setting up GNBD clients later. For information on using GNBD with GFS, see the Using GNBD with Global File System document.
Storage devices and partitions
Determine the storage devices and partitions to be used for creating logical volumes (via CLVM) in the file systems.
Chapter 2. Getting Started
This chapter describes procedures for initial setup of GFS and contains the following sections:
Section 1, “Prerequisite Tasks”
Section 2, “Initial Setup Tasks”
1. Prerequisite Tasks
Before setting up Red Hat GFS, make sure that you have noted the key characteristics of the GFS nodes (refer to Section 4, “Before Setting Up GFS”). Also, make sure that the clocks on the GFS nodes are synchronized. It is recommended that you use the Network Time Protocol (NTP) software provided with your Red Hat Enterprise Linux distribution.
Note
The system clocks in GFS nodes must be within a few minutes of each other to prevent unnecessary inode time-stamp updating. Unnecessary inode time-stamp updating severely impacts cluster performance.
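For example, assuming the ntp package provided with Red Hat Enterprise Linux is installed, NTP can be enabled and started on each GFS node as root with the following commands:
chkconfig ntpd on
service ntpd start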
2. Initial Setup Tasks
Initial GFS setup consists of the following tasks:
1. Setting up logical volumes
2. Making a GFS file system
3. Mounting file systems
Follow these steps to set up GFS initially.
1. Using LVM, create a logical volume for each Red Hat GFS file system.
Note
You can use init.d scripts included with Red Hat Cluster Suite to automate activating and deactivating logical volumes. For more information about init.d scripts, refer to Configuring and Managing a Red Hat Cluster.
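For example, the following commands create a volume group and a single logical volume from two physical volumes. The device names, volume group name, logical volume name, and size (/dev/sdb1, /dev/sdc1, vg01, lvol0, 500G) are examples only; in a cluster, create the volume group after clustered locking (CLVM) has been enabled:
pvcreate /dev/sdb1 /dev/sdc1
vgcreate vg01 /dev/sdb1 /dev/sdc1
lvcreate -L 500G -n lvol0 vg01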
2. Create GFS file systems on logical volumes created in Step 1. Choose a unique name for
each file system. For more information about creating a GFS file system, refer to Section 1,
“Creating a File System”.
You can use either of the following formats to create a clustered GFS file system:
gfs_mkfs -p lock_dlm -t ClusterName:FSName -j Number BlockDevice
mkfs -t gfs -p lock_dlm -t LockTableName -j NumberJournals BlockDevice
You can use either of the following formats to create a local GFS file system:
gfs_mkfs -p lock_nolock -j NumberJournals BlockDevice
mkfs -t gfs -p lock_nolock -j NumberJournals BlockDevice
For more information on creating a GFS file system, see Section 1, “Creating a File System”.
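For example, the following command uses the first format to create a clustered GFS file system named mydata1 for a cluster named alpha, with journals for eight nodes, on the logical volume created in Step 1 (the cluster name and block device are examples only):
gfs_mkfs -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0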
3. At each node, mount the GFS file systems. For more information about mounting a GFS file
system, see Section 2, “Mounting a File System”. Command usage:
mount BlockDevice MountPoint
mount -o acl BlockDevice MountPoint
The -o acl mount option allows manipulating file ACLs. If a file system is mounted without the -o acl mount option, users are allowed to view ACLs (with getfacl), but are not allowed to set them (with setfacl).
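For example, assuming the file system created in Step 2 and an existing mount point /mydata1 (both names are examples only), each node could mount the file system with either of the following commands; the second form is needed only on nodes where ACLs will be set with setfacl:
mount /dev/vg01/lvol0 /mydata1
mount -o acl /dev/vg01/lvol0 /mydata1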
Note
You can use init.d scripts included with Red Hat Cluster Suite to automate mounting and unmounting GFS file systems. For more information about init.d scripts, refer to Configuring and Managing a Red Hat Cluster.