Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property
of their respective owners. Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America: 1-866-464-7381)
www.DellEMC.com
Dell EMC ECS provides a complete software-defined cloud storage platform that supports the
storage, manipulation, and analysis of unstructured data on a massive scale on commodity
hardware. You can deploy ECS as a turnkey storage appliance or as a software product that is
installed on a set of qualified commodity servers and disks. ECS offers the cost advantages of a
commodity infrastructure and the enterprise reliability, availability, and serviceability of traditional
arrays.
ECS uses a scalable architecture that includes multiple nodes and attached storage devices. The
nodes and storage devices are commodity components, similar to devices that are generally
available, and are housed in one or more racks.
A rack and its components that are supplied by Dell EMC and that have preinstalled software is
referred to as an ECS appliance. A rack and commodity nodes that are not supplied by Dell EMC
is referred to as a software-only solution. Multiple racks are referred to as a site, and at the ECS
software level as a cluster. A rack, or multiple joined racks, with processing and storage that is
handled as a coherent unit by the ECS infrastructure software is referred to as a Virtual Data
Center (VDC).
Management users can access the ECS UI, which is referred to as the ECS Portal, to perform
administration tasks. Management users include the System Administrator, Namespace
Administrator, and System Monitor roles. Management tasks that can be performed in the ECS
Portal can also be performed by using the ECS Management REST API.
ECS administrators can perform the following tasks in the ECS Portal:
• Configure and manage the object store infrastructure (compute and storage resources) for object users.
• Manage users, roles, and buckets within namespaces. Namespaces are equivalent to tenants.
Object users cannot access the ECS Portal, but can access the object store to read and write
objects and buckets by using clients that support the following data access protocols:
• Amazon Simple Storage Service (Amazon S3)
• EMC Atmos
• OpenStack Swift
• ECS CAS (content-addressable storage)
For more information about object user tasks, see the ECS Data Access Guide, available from the
ECS Product Documentation page.
For more information about System Monitor tasks, see the ECS Monitoring Guide, available from
the ECS Product Documentation page.
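As noted above, management tasks can also be scripted against the ECS Management REST API. A minimal sketch of authenticating against it follows, assuming the default management port 4443; the host name, credentials, and certificate handling are illustrative assumptions, not a verified reference:

```python
# Authenticating to the ECS Management REST API (hedged sketch).
# ECS returns an X-SDS-AUTH-TOKEN header from GET /login (basic auth);
# that token is then passed on subsequent management calls. The host,
# credentials, and verify=False below are illustrative assumptions.
import requests

BASE = "https://ecs-node1.example.com:4443"  # hypothetical management endpoint

resp = requests.get(f"{BASE}/login", auth=("root", "<password>"), verify=False)
resp.raise_for_status()
token = resp.headers["X-SDS-AUTH-TOKEN"]

# Example management call: list namespaces (tenants).
namespaces = requests.get(
    f"{BASE}/object/namespaces",
    headers={"X-SDS-AUTH-TOKEN": token},
    verify=False,
)
print(namespaces.status_code)
```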
ECS platform
The ECS platform is composed of the data services, portal, storage engine, fabric, infrastructure,
and hardware component layers.
Figure 1 ECS component layers: Data Services, Portal, Storage Engine, Fabric, Infrastructure, and
Hardware, which together make up the ECS software stack.
Data services
The data services component layer provides support for access to the ECS object store
through object, HDFS, and NFS v3 protocols. In general, ECS provides multi-protocol access:
data that is ingested through one protocol can be accessed through another. For example,
data that is ingested through S3 can be modified through Swift, NFS v3, or HDFS. This
multi-protocol access has some exceptions due to differences in protocol semantics and in how
each protocol represents data.
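As an illustration of multi-protocol access, the following hedged sketch writes an object through the S3 API; the endpoint (ECS commonly serves S3 over HTTP on port 9020), credentials, and bucket name are assumptions, and the object could then be read back as a file if the bucket is file-system enabled:

```python
# Minimal sketch of multi-protocol ingest via the S3 API, using boto3.
# Endpoint, port, credentials, and bucket name are illustrative assumptions.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://ecs-node1.example.com:9020",  # hypothetical node or load balancer
    aws_access_key_id="object-user1",                  # ECS object user name
    aws_secret_access_key="<secret-key>",              # secret key generated for the user
)

# Object written through S3 ...
s3.put_object(Bucket="mybucket", Key="dir1/report.csv", Body=b"id,value\n1,42\n")

# ... could then be read through NFS or HDFS as the file dir1/report.csv,
# if the bucket was created with file system access enabled.
```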
The following table shows the object APIs and the protocols that are supported and that
interoperate.
Table 1 ECS supported data services

| Protocols | Support | Interoperability |
| Object: S3 | Additional capabilities such as byte range updates and rich ACLs | File systems (HDFS and NFS*), Swift |
| Object: Atmos | Version 2.0 | NFS (only path-based objects, not object ID style objects) |
| Object: Swift | | File systems (HDFS and NFS), S3 |

* When a bucket is enabled for file system access, permissions set using HDFS are in effect when
you access the bucket as an NFS file system, and vice versa.
Portal
The ECS Portal component layer provides a Web-based GUI that allows you to manage,
license, and provision ECS nodes. The portal has the following comprehensive reporting
capabilities:
• Capacity utilization for each site, storage pool, node, and disk
• Performance monitoring on latency, throughput, transactions per second, and replication
progress and rate
• Diagnostic information, such as node and disk recovery status and statistics on hardware
and process health for each node, which helps identify performance and system bottlenecks
Storage engine
The storage engine component layer provides an unstructured storage engine that is
responsible for storing and retrieving data, managing transactions, and protecting and
replicating data. The storage engine provides access to objects ingested using multiple object
storage protocols and the NFS and HDFS file protocols.
Fabric
The fabric component layer provides cluster health management, software management,
configuration management, upgrade capabilities, and alerting. The fabric layer is responsible
for keeping the services running and managing resources such as the disks, containers,
firewall, and network. It tracks and reacts to environment changes such as failure detection
and provides alerts that are related to system health. Ports 9069 and 9099 are public IP ports
protected by the Fabric firewall manager; they are not available outside of the cluster.
Infrastructure
The infrastructure component layer uses SUSE Linux Enterprise Server 12 as the base
operating system for the ECS appliance, or qualified Linux operating systems for commodity
hardware configurations. Docker is installed on the infrastructure to deploy the other ECS
component layers. The Java Virtual Machine (JVM) is installed as part of the infrastructure
because ECS software is written in Java.
Hardware
The hardware component layer is an ECS appliance or qualified industry standard hardware.
For more information about ECS hardware, see the ECS Hardware and Cabling Guide, available
from the ECS Product Documentation page.
ECS data protection
ECS protects data within a site by mirroring the data onto multiple nodes, and by using erasure
coding to break down data chunks into multiple fragments and distribute the fragments across
nodes. Erasure coding (EC) reduces the storage overhead and ensures data durability and
resilience against disk and node failures.
By default, the storage engine implements the Reed-Solomon 12 + 4 erasure coding scheme in
which an object is broken into 12 data fragments and 4 coding fragments. The resulting 16
fragments are dispersed across the nodes in the local site. When an object is erasure-coded, ECS
can read the object directly from the 12 data fragments without any decoding or reconstruction.
The code fragments are used only for object reconstruction when a hardware failure occurs. ECS
also supports a 10 + 2 scheme for use with cold storage archives to store objects that do not
change frequently and do not require the more robust default EC scheme.
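To make the arithmetic of the two schemes concrete, here is a minimal sketch (plain Python, no ECS API involved) that computes the storage overhead and the number of fragment losses each scheme tolerates:

```python
# Storage overhead and fault tolerance for ECS erasure coding schemes.
# Overhead = total fragments / data fragments; a scheme with c coding
# fragments can lose any c fragments and still reconstruct the object.

def ec_profile(data_fragments: int, coding_fragments: int) -> dict:
    total = data_fragments + coding_fragments
    return {
        "scheme": f"{data_fragments} + {coding_fragments}",
        "total_fragments": total,
        "storage_overhead": round(total / data_fragments, 2),
        "fragment_losses_tolerated": coding_fragments,
    }

print(ec_profile(12, 4))   # default scheme: overhead 1.33, tolerates 4 losses
print(ec_profile(10, 2))   # cold archive:   overhead 1.2,  tolerates 2 losses
```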
The following table shows the requirements for the supported erasure coding schemes.
Table 2 Erasure coding requirements for regular and cold archives

| Use case | Minimum required nodes | Minimum required disks | Recommended disks | EC efficiency | EC scheme |
| Regular archive | 4 | 16 | 32 | 1.33 | 12 + 4 |
| Cold archive | 6 | 12 | 24 | 1.2 | 10 + 2 |
Sites can be federated, so that data is replicated to another site to increase availability and data
durability, and to ensure that ECS is resilient against site failure. For three or more sites, in
addition to the erasure coding of chunks at a site, chunks that are replicated to other sites are
combined using a technique called XOR to provide increased storage efficiency.
Storage efficiency depends on the number of sites across which data is replicated. If you have one
site, with erasure coding the object data chunks use more space (1.33 or 1.2 times storage
overhead) than the raw data bytes require. If you have two sites, the storage overhead is doubled
(2.67 or 2.4 times storage overhead) because both sites store a replica of the data, and the data is
erasure coded at both sites. If you have three or more sites, ECS combines the replicated chunks
so that, counterintuitively, the storage overhead is reduced.
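The overhead figures above can be reproduced with a small illustrative model, assuming the XOR scheme reduces the cost of the replicated copy to 1/(N-1) of the data at three or more sites; this sketch is consistent with the numbers quoted in this guide, not an official sizing formula:

```python
# Approximate total storage overhead for the default 12 + 4 scheme as the
# number of federated sites grows. At two sites each site stores a full,
# erasure-coded replica; at three or more sites the replicated chunks are
# XOR'ed together, so the extra copy costs roughly 1/(N-1) of the data.
# Illustrative model only, matching the 1.33 / 2.67 / 2.0 figures in this guide.

EC_OVERHEAD = 16 / 12  # 12 + 4 scheme, about 1.33

def total_overhead(sites: int) -> float:
    if sites == 1:
        return EC_OVERHEAD
    if sites == 2:
        return EC_OVERHEAD * 2                      # full replica at the second site
    return EC_OVERHEAD * (1 + 1 / (sites - 1))      # XOR'ed replication

for n in (1, 2, 3, 4):
    print(n, "site(s):", round(total_overhead(n), 2))
# 1 site(s): 1.33   2 site(s): 2.67   3 site(s): 2.0   4 site(s): 1.78
```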
When one node is down in a four-node system, ECS prioritizes rebuilding the erasure-coded
fragments to avoid data unavailability (DU). While the node is down, its EC segments are
redistributed to the other three nodes, which results in the number of segments on a node being
greater than the number of coding fragments the scheme can afford to lose. If the down node
comes back, the system returns to normal. If the node that holds the largest number of EC
segments then goes down, the DU window is as large as that node's unavailability window, and if
the node does not recover, it causes data loss (DL).
The EC retiring feature converts unsafe EC chunks into three mirrored copies for data safety.
However, EC retiring has some limitations:
• It increases system capacity usage, raising the protection overhead from 1.33 to 3.
• When no node is down, EC retiring introduces unnecessary I/O.
• The feature applies to four-node systems. EC retiring is not triggered automatically; you
trigger it on demand by using an API through the service console.
For a detailed description of the mechanism used by ECS to provide data durability, resilience, and
availability, see the ECS High Availability Design White Paper.
Configurations for availability, durability, and resilience
Depending on the number of sites in the ECS system, different data protection schemes can
increase availability and balance the data protection requirements against performance. ECS uses
the replication group to configure the data protection schemes (see Introduction to storage pools,
VDCs, and replication groups on page 26). The following table shows the data protection schemes
that are available.
Table 4 ECS data protection schemes

| Number of sites | Local Protection | Full Copy Protection* | Active | Passive |
| 1 | Yes | Not applicable | Not applicable | Not applicable |
| 2 | Yes | Always | Not applicable | Not applicable |
| 3 or more | Yes | Optional | Normal | Optional |

* Full Copy Protection can be selected with Active. Full Copy Protection is not available if
Passive is selected.
Local Protection
Data is protected locally by using triple mirroring and erasure coding, which provides resilience
against disk and node failures, but not against site failure.
Full Copy Protection
When the Replicate to All Sites setting is turned on for a replication group, the replication
group makes a full readable copy of all objects to all sites within the replication group. Having
full readable copies of objects on all VDCs in the replication group provides data durability and
improves local performance at all sites at the cost of storage efficiency.
Active
Active is the default ECS configuration. When a replication group is configured as Active, data
is replicated to federated sites and can be accessed from all sites with strong consistency. If
you have two sites, full copies of data chunks are copied to the other site. If you have three or
more sites, the replicated chunks are combined (XOR'ed) to provide increased storage
efficiency.
When data is accessed from a site that is not the owner of the data, until that data is cached
at the non-owner site, the access time increases. Similarly, if the owner site that contains the
primary copy of the data fails, and if you have a global load balancer that directs requests to a
non-owner site, the non-owner site must recreate the data from XOR'ed chunks, and the
access time increases.
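To illustrate the recovery idea behind XOR'ed chunks, here is a minimal sketch; real ECS chunk layout and recovery are considerably more involved, and the chunk contents here are illustrative:

```python
# Recovering a replicated chunk from an XOR'ed combination.
# With three sites, the third site can store xor(chunk_a, chunk_b) instead
# of full copies of both; if site A fails, chunk_a is rebuilt from chunk_b
# and the XOR'ed chunk. Chunks must be equal length for this toy example.

def xor_chunks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

chunk_a = b"data chunk from site A"
chunk_b = b"data chunk from site B"
combined = xor_chunks(chunk_a, chunk_b)      # stored at the third site

recovered_a = xor_chunks(combined, chunk_b)  # site A's chunk, rebuilt
assert recovered_a == chunk_a
```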
Passive
The Passive configuration includes two, three, or four active sites with an additional passive
site that is a replication target (backup site). The minimum number of sites for a Passive
configuration is three (two active, one passive) and the maximum number of sites is five (four
active, one passive). Passive configurations have the same storage efficiency as Active
configurations. For example, the Passive three-site configuration has the same storage
efficiency as the Active three-site configuration (2.0 times storage overhead).
In the Passive configuration, all replication data chunks are sent to the passive site and XOR
operations occur only at the passive site. In the Active configuration, the XOR operations
occur at all sites.
If all sites are on-premise, you can designate any of the sites as the replication target.
If there is a backup site hosted off-premise by a third-party data center, ECS automatically
selects it as the replication target when you create a Passive geo replication group (see
Create a replication group on page 38). If you want to change the replication target from a
hosted site to an on-premise site, you can do so using the ECS Management REST API.
ECS network
ECS network infrastructure consists of top-of-rack switches that allow for the following types of
network connections:
• Public network – connects ECS nodes to your organization's network, providing data access.
• Internal private network – manages nodes and switches within the rack and across racks.
For more information about ECS networking, see the ECS Networking and Best Practices White
Paper.
CAUTION Connections from the customer's network to both front-end switches (rabbit and
hare) are required in order to maintain the high availability architecture of the ECS appliance.
If the customer chooses not to connect to their network in the required HA manner, there is
no guarantee of high data availability for the use of this product.
Load balancing considerations
Using a load balancer in front of ECS is recommended.
In addition to distributing the load across ECS cluster nodes, a load balancer provides High
Availability (HA) for the ECS cluster by routing traffic to healthy nodes. Where network separation
is implemented, and data and management traffic are separated, the load balancer must be
configured so that user requests, using the supported data access protocols, are balanced across
the IP addresses of the data network. ECS Management REST API requests can be made directly
to a node IP on the management network or can be load balanced across the management network
for HA.
The load balancer configuration is dependent on the load balancer type. For information about
tested configurations and best practice, contact your customer support representative.
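As a toy illustration of what the load balancer provides, the sketch below round-robins S3 requests across node data IPs and skips nodes that fail a basic health check; the node IPs and the default ECS S3 HTTP port (9020) are assumptions, and this is not a substitute for a real load balancer:

```python
# Toy client-side round-robin over ECS node data IPs with a health check.
# A production deployment should use a proper load balancer; the node IPs
# and the S3 HTTP port (9020) here are illustrative assumptions.
import itertools
import socket

DATA_IPS = ["10.1.0.11", "10.1.0.12", "10.1.0.13", "10.1.0.14"]  # hypothetical
S3_PORT = 9020

def is_healthy(ip: str, port: int = S3_PORT, timeout: float = 1.0) -> bool:
    """Basic TCP health check against the node's S3 port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def next_endpoint(rotation=itertools.cycle(DATA_IPS)) -> str:
    """Return the next healthy node endpoint, round-robin.

    The default argument is evaluated once, so the rotation persists
    across calls (intentional here).
    """
    for _ in range(len(DATA_IPS)):
        ip = next(rotation)
        if is_healthy(ip):
            return f"http://{ip}:{S3_PORT}"
    raise RuntimeError("no healthy ECS nodes reachable")
```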
Getting Started with ECS
• Log in to the ECS Portal ........................................................ 20
• View the Getting Started Task Checklist ........................................ 21
• View the ECS Portal Dashboard .................................................. 22
Initial configuration
The initial configuration steps that are required to get started with ECS include logging in to the
ECS Portal for the first time, using the ECS Portal Getting Started Task Checklist and Dashboard,
uploading a license, and setting up an ECS virtual data center (VDC).
About this task
To initially configure ECS, the root user or System Administrator must at a minimum:
Procedure
1. Upload an ECS license.
See Licensing on page 167.
2. Select a set of nodes to create at least one storage pool.
See Create a storage pool on page 28.
3. Create a VDC.
See Create a VDC for a single site on page 31.
4. Create at least one replication group.
See Create a replication group on page 38.
a. Optional: Set authentication.
You can add Active Directory (AD), LDAP, or Keystone authentication providers to ECS
to enable users to be authenticated by systems external to ECS. See Introduction to
authentication providers on page 42.
5. Create at least one namespace. A namespace is the equivalent of a tenant.
See Create a namespace on page 55.
a. Optional: Create object and/or management users.
See Working with users in the ECS Portal on page 68.
6. Create at least one bucket.
See Create a bucket on page 79.
After you configure the initial VDC, if you want to create an additional VDC and federate it
with the first VDC, see Add a VDC to a federation on page 31.
Log in to the ECS Portal
Log in to the ECS Portal to set up the initial configuration of a VDC. Log in to the ECS Portal from
the browser by specifying the IP address or fully qualified domain name (FQDN) of any node, or
the load balancer that acts as the front end to ECS. The login procedure is described below.
Before you begin
Logging in to the ECS Portal requires the System Administrator, System Monitor, Lock
Administrator (emcsecurity user), or Namespace Administrator role.
Note: You can log in to the ECS Portal for the first time with any valid login. However, you can
configure the system only with the System Administrator role.
Procedure
1. Type the public IP address of the first node in the system, or the address of the load
balancer that is configured as the front-end, in the address bar of your browser:
https://<node1_public_ip>.
2. Log in with the default root credentials:
• User Name: root
• Password: ChangeMe
You are prompted to change the password for the root user immediately.
3. After you change the password at first login, click Save.
You are logged out and the ECS login screen is displayed.
4. Type the User Name and Password.
5. To log out of the ECS Portal, in the upper-right menu bar, click the arrow beside your user
name, and then click logout.
View the Getting Started Task Checklist
The Getting Started Task Checklist in the ECS Portal guides you through the initial ECS
configuration. The checklist appears when you first log in and when the portal detects that the
initial configuration is not complete. The checklist automatically appears until you dismiss it. On
any ECS Portal page, in the upper-right menu bar, click the Guide icon to open the checklist.
Figure 2 Guide icon
The Getting Started Task Checklist displays in the portal.
Figure 3 Getting Started Task Checklist
1. The current step in the checklist.
2. An optional step. This step does not display a check mark even if you have completed the step.
3. Information about the current step.
4. Available actions.
5. Dismiss the checklist.
A completed step appears in green.
A completed checklist gives you the option to browse the list again or recheck your configuration.
View the ECS Portal Dashboard
The ECS Portal Dashboard provides critical information about the ECS processes on the VDC you
are currently logged in to.
The Dashboard is the first page you see after you log in. The title of each panel (box) links to the
portal monitoring page that shows more detail for the monitoring area.
Upper-right menu bar
The upper-right menu bar appears on each ECS Portal page.
Figure 4 Upper-right menu bar
Menu items include the following icons and menus:
1. The Alert icon displays a number that indicates how many unacknowledged alerts are pending
for the current VDC. The number displays 99+ if there are more than 99 alerts. You can click
the Alert icon to see the Alert menu, which shows the five most recent alerts for the current
VDC.
2. The Help icon brings up the online documentation for the current portal page.
3. The Guide icon brings up the Getting Started Task Checklist.
4. The VDC menu displays the name of the current VDC. If your AD or LDAP credentials allow you
to access more than one VDC, you can switch the portal view to the other VDCs without
entering your credentials.
5. The User menu displays the current user and allows you to log out. The User menu displays the
last login time for the user.
View requests
The Requests panel displays the total requests, successful requests, and failed requests.
Failed requests are organized by system error and user error. User failures are typically HTTP 400
errors. System failures are typically HTTP 500 errors. Click Requests to see more request
metrics.
Request statistics do not include replication traffic.
View capacity utilization
The Capacity Utilization panel displays the total, used, available, reserved, and percent full
capacity.
Capacity amounts are shown in gibibytes (GiB) and tebibytes (TiB). One GiB is approximately equal
to 1.074 gigabytes (GB). One TiB is approximately equal to 1.1 terabytes (TB).
The Used capacity indicates the amount of capacity that is in use. Click Capacity Utilization to
see more capacity metrics.
The capacity metrics are available in the left menu.
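For reference, a quick sketch of the binary-to-decimal conversion behind these figures:

```python
# GiB/TiB are binary units; GB/TB are decimal.
print(2**30 / 10**9)   # 1 GiB in GB -> 1.073741824
print(2**40 / 10**12)  # 1 TiB in TB -> 1.099511627776
```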
View performance
The Performance panel displays how network read and write operations are currently performing,
and the average read/write performance statistics over the last 24 hours for the VDC.
Click Performance to see more comprehensive performance metrics.
View storage efficiency
The Storage Efficiency panel displays the efficiency of the erasure coding (EC) process.
The chart shows the progress of the current EC process, and the other values show the total
amount of data that is subject to EC, the amount of EC data waiting for the EC process, and the
current rate of the EC process. Click Storage Efficiency to see more storage efficiency metrics.
View geo monitoring
The Geo Monitoring panel displays how much data from the local VDC is waiting for geo-replication, and the rate of the replication.
Recovery Point Objective (RPO) refers to the point in time in the past to which you can recover.
The value is the oldest data at risk of being lost if a local VDC fails before replication is complete.
Failover Progress shows the progress of any active failover that is occurring in the federation
involving the local VDC. Bootstrap Progress shows the progress of any active process to add a
new VDC to the federation. Click Geo Monitoring to see more geo-replication metrics.
View node and disk health
The Node & Disks panel displays the health status of disks and nodes.
A green check mark beside the node or disk number indicates the number of nodes or disks in good
health. A red x indicates bad health. Click Node & Disks to see more hardware health metrics. If
the number of bad disks or nodes is a number other than zero, clicking on the count takes you to
the corresponding Hardware Health tab (Offline Disks or Offline Nodes) on the System Health
page.
View alerts
The Alerts panel displays a count of critical alerts and errors.
Click Alerts to see the full list of current alerts. Any Critical or Error alerts are linked to the Alerts
tab on the Events page where only the alerts with a severity of Critical or Error are filtered and
displayed.
CHAPTER 3
Storage Pools, VDCs, and Replication Groups
• Introduction to storage pools, VDCs, and replication groups .................... 26
• Working with storage pools in the ECS Portal ................................... 27
• Working with VDCs in the ECS Portal ............................................ 29
• Working with replication groups in the ECS Portal .............................. 37
Introduction to storage pools, VDCs, and replication groups
This topic provides conceptual information on storage pools, virtual data centers (VDCs), and
replication groups, and the following topics describe the operations required to configure them:
• Working with storage pools in the ECS Portal
• Working with VDCs in the ECS Portal
• Working with replication groups in the ECS Portal
The storage that is associated with a VDC must be assigned to a storage pool and the storage pool
must be assigned to one or more replication groups to allow the creation of buckets and objects.
A storage pool can be associated with more than one replication group. A best practice is to have a
single storage pool for a site. However, you can have as many storage pools as required, with a
minimum of four nodes (and 16 disks) in each pool.
You might need to create more than one storage pool at a site for the following reasons:
• The storage pool is used for Cold Archive. The erasure coding scheme used for cold archive
uses 10 + 2 coding rather than the default ECS 12 + 4 scheme.
• A tenant requires the data to be stored on separate physical media.
A storage pool must have a minimum of four nodes and must have three or more nodes with more
than 10% free capacity in order to allow writes. This reserved space is required to ensure that ECS
does not run out of space while persisting system metadata. If these criteria are not met, the write
fails. The ability of a storage pool to accept writes does not affect the ability of other pools to
accept writes. For example, if you have a load balancer that detects a failed write, the load
balancer can redirect the write to another VDC.
The replication group is used by ECS for replicating data to other sites so that the data is
protected and can be accessed from other, active sites. When you create a bucket, you specify
the replication group it is in. ECS ensures that the bucket and the objects in the bucket are
replicated to all the sites in the replication group.
ECS can be configured to use more than one replication scheme, depending on the requirements
to access and protect the data. The following figure shows a replication group (RG 1) that spans all
three sites. RG 1 takes advantage of the XOR storage efficiency provided by ECS when using
three or more sites. In the figure, the replication group that spans two sites (RG 2), contains full
copies of the object data chunks and does not use XOR'ing to improve storage efficiency.
Figure 5 Replication group spanning three sites and replication group spanning two sites. The
figure shows a federation of VDC A, VDC B, and VDC C (one rack each) with storage pools SP 1,
SP 2, and SP 3; RG 1 spans SP 1, SP 2, and SP 3, while RG 2 spans SP 1 and SP 3.
The physical storage that the replication group uses at each site is determined by the storage pool
that is included in the replication group. The storage pool aggregates the disk storage of each of
the minimum of four nodes to ensure that it can handle the placement of erasure coding
fragments. A node cannot exist in more than one storage pool. The storage pool can span racks,
but it is always within a site.
Working with storage pools in the ECS Portal
You can use storage pools to organize storage resources based on business requirements. For
example, if you require physical separation of data, you can partition the storage into multiple
storage pools.
You can use the Storage Pool Management page available from Manage > Storage Pools to view
the details of existing storage pools, to create storage pools, and to edit existing storage pools.
You cannot delete storage pools in this release.
Table 5 Storage pool properties

Name: The name of the storage pool.
Nodes: The number of nodes that are assigned to the storage pool.
Status: The state of the storage pool and of the nodes.
• Ready: At least four nodes are installed and all nodes are in the ready to use state.
• Not Ready: A node in the storage pool is not in the ready to use state.
• Partially Ready: Less than four nodes, and all nodes are in the ready to use state.
Host: The fully qualified host name that is assigned to the node.
Data IP: The public IP address that is assigned to the node, or the data IP address in a network
separation environment.
Rack ID: The name that is assigned to the rack that contains the nodes.
Cold Storage: A storage pool that is specified as Cold Storage. Cold Storage pools use an erasure
coding (EC) scheme that is more efficient for infrequently accessed objects. Cold Storage is also
known as a Cold Archive. After a storage pool is created, this setting cannot be changed.
Actions: The actions that can be completed for the storage pool.
• Edit: Change the storage pool name or modify the set of nodes that are included in the
storage pool.
• Delete: Used by Customer Support to delete the storage pool. System Administrators or
root users should not attempt to delete the storage pool. If you attempt this operation in
the ECS Portal, you receive an error message that states this operation is not supported.
If you must delete a storage pool, contact your customer support representative.
Create a storage pool
Storage pools must contain a minimum of four nodes. The first storage pool that is created is
known as the system storage pool because it stores system metadata.
Before you begin
This operation requires the System Administrator role in ECS.
Procedure
1. In the ECS Portal, select Manage > Storage Pools.
2. On the Storage Pool Management page, click New Storage Pool.
3. On the New Storage Pool page, in the Name field, type the storage pool name (for
example, StoragePool1).
4. In the Cold Storage field, specify if this storage pool is Cold Storage. Cold storage contains
infrequently accessed data. The ECS data protection scheme for cold storage is optimized
to increase storage efficiency. After a storage pool is created, this setting cannot be
changed.
Note: Cold storage requires a minimum hardware configuration of six nodes. For more
information, see ECS data protection on page 14.
5. From the Available Nodes list, select the nodes to add to the storage pool.
a. To select nodes one-by-one, click the + icon beside each node.
b. To select all available nodes, click the + icon at the top of the Available Nodes list.
c. To narrow the list of available nodes, in the search field, type the public IP address for
the node or the host name.
6. In the Available Capacity Alerting fields, select the applicable available capacity thresholds
that will trigger storage pool capacity alerts:
a. In the Critical field, select 10 %, 15 %, or No Alert. For example, if you select 10 %, that
means a Critical alert will be triggered when the available storage pool capacity is less
than 10 percent.
b. In the Error field, select 20 %, 25 %, 30 %, or No Alert. For example, if you select 25 %,
that means an Error alert will be triggered when the available storage pool capacity is
less than 25 percent.
c. In the Warning field, select 35 %, 40 %, or No Alert. For example, if you select 40 %,
that means a Warning alert will be triggered when the available storage pool capacity is
less than 40 percent.
When a capacity alert is generated, a call home alert is also generated that alerts ECS
customer support that the ECS system is reaching its capacity limit.
7. Click Save.
8. Wait 10 minutes after the storage pool is in the Ready state before you perform other
configuration tasks, to allow the storage pool time to initialize.
If you receive the following error, wait a few more minutes before you attempt any further
configuration: Error 7000 (http: 500): An error occurred in the API
Service. An error occurred in the API service. Cause: error
insertVdcInfo. Virtual Data Center creation failure may occur when
Data Services has not completed initialization.

Edit a storage pool
You can change the name of a storage pool or change the set of nodes included in the storage
pool.
Before you begin
This operation requires the System Administrator role in ECS.
Procedure
1. In the ECS Portal, select Manage > Storage Pools.
2. On the Storage Pool Management page, locate the storage pool you want to edit in the
table. Click Edit in the Actions column beside the storage pool you want to edit.
3. On the Edit Storage Pool page:
• To modify the storage pool name, in the Name field, type the new name.
• To modify the nodes included in the storage pool:
  - In the Selected Nodes list, remove an existing node in the storage pool by clicking
    the - icon beside the node.
  - In the Available Nodes list, add a node to the storage pool by clicking the + icon
    beside the node.
• To modify the available capacity thresholds that will trigger storage pool capacity alerts,
select the applicable alert thresholds in the Available Capacity Alerting fields.
4. Click Save.
Working with VDCs in the ECS Portal
An ECS virtual data center (VDC) is the top-level resource that represents the collection of ECS
infrastructure components to manage as a unit.
You can use the Virtual Data Center Management page available from Manage > Virtual Data
Center to view VDC details, to create a VDC, to edit an existing VDC, to update endpoints in
multiple VDCs, to delete VDCs, and to federate multiple VDCs for a multisite deployment. The
following example shows the Virtual Data Center Management page for a federated deployment.
It is configured with two VDCs named vdc1 and vdc2.
Table 6 VDC properties

Name: The name of the VDC.
Type: The type of VDC is automatically set and can be either Hosted or On-Premise.
• A Hosted VDC is hosted off-premise by a third-party data center (a backup site).
Replication Endpoints: Endpoints for communication of replication data between VDCs when an
ECS federation is configured.
• By default, replication traffic runs between VDCs over the public network, and the public
network IP address for each node is used as the replication endpoint.
• If a separate replication network is configured, the network IP address that is configured
for replication traffic of each node is used as the replication endpoint.
• If a load balancer is configured to distribute the load between the replication IP
addresses of the nodes, the address that is configured on the load balancer is displayed.
Management Endpoints: Endpoints for communication of management commands between VDCs
when an ECS federation is configured.
• By default, management traffic runs between VDCs over the public network, and the public
network IP address for each node is used as the management endpoint.
• If a separate management network is configured, the network IP address that is
configured for management traffic of each node is used as the management endpoint.
Status: The state of the VDC.
• Online
• Permanently Failed: The VDC was deleted.
Actions: The actions that can be completed for the VDC.
• Edit: Change the name of a VDC, the VDC access key, and the VDC replication and
management endpoints.
• Delete: Delete the VDC. The delete operation triggers permanent failover of the VDC.
You cannot add the VDC again by using the same name. You cannot delete a VDC that is
part of a replication group until you first remove it from the replication group. You
cannot delete a VDC when you are logged in to the VDC you are trying to delete.
• Fail this VDC: Available when there is more than one VDC.
  WARNING Failing a VDC is permanent. The site cannot be added back.
  - Ensure that geo-replication is up to date. Stop all writes to the VDC.
  - Ensure that all nodes of the VDC are shut down.
  - Replication to/from the VDC will be disabled for all replication groups.
  - Recovery is initiated only when the VDC is removed from the replication group.
    Proceed to do that next.
  - This VDC will display a status of Permanently Failed in any replication group
    to which it belongs.
  - To reconstruct this VDC, it must be added as a new site. Any previous data will
    be lost, as that data will have failed over to other sites in the federation.