Acronis Storage is a cost-efficient software storage solution with cluster-level erasure coding that
allows optimal use of raw storage capacity at the highest level of resilience.
Acronis Storage is designed to run on commodity hardware, thus eliminating the dependence on expensive, special-purpose hardware. Simplified deployment allows linear growth of system capacity by adding extra drives to existing nodes or by adding new nodes via PXE or a special pre-configured ISO.
Acronis Storage offers simplified management and monitoring via a web console, with REST API for
integration into existing management environments.
This document describes how to deploy Acronis Storage for use with Acronis Backup Cloud.
2 Acronis Storage basics
Acronis Storage breaks the data stream that comes from Acronis Backup Cloud into 320 MB
fragments. To ensure fault tolerance, each fragment is stored on multiple storage servers (nodes) as
a set of chunks with some redundancy.
Acronis Storage leverages the so-called N/K redundancy scheme: each fragment is split into K data chunks, N-K additional (parity) chunks are added for redundancy, and all N chunks are distributed among N servers (one chunk per server). The system can survive the failure of any N-K storage servers without data loss. The numbers N and K are determined by the storage system configuration.
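As a quick illustration of this arithmetic, the following Python sketch computes the failure tolerance and disk-space overhead of an N/K layout. The sample K and N values are assumptions chosen to match the overhead figures quoted in the redundancy modes below; the actual values are set by the Acronis Storage configuration.

def nk_redundancy(k_data, n_total):
    """Return (tolerated node failures, raw GB needed per 1 GB of data) for an N/K scheme."""
    parity = n_total - k_data       # number of additional parity chunks
    overhead = n_total / k_data     # raw space consumed per unit of user data
    return parity, overhead

# Assumed layouts, not taken verbatim from the product documentation:
print(nk_redundancy(5, 7))  # (2, 1.4) -> survives 2 failures, 1.4 GB per 1 GB stored
print(nk_redundancy(5, 6))  # (1, 1.2) -> survives 1 failure, 1.2 GB per 1 GB stored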
3 Acronis Storage redundancy modes
Acronis Storage supports three redundancy modes. You must choose the redundancy mode when
deploying Acronis Storage. For information about changing the redundancy mode at a later time,
refer to "Changing the redundancy mode (p. 16)".
Advanced mode (recommended)
This mode provides a scale-out storage platform with software-level redundancy.
The storage can be based on commodity hardware that does not have built-in data redundancy. For
hardware that has built-in data redundancy, this mode provides an additional software redundancy
layer.
At least eight nodes are required. The system can survive a failure of two nodes without data loss.
The data redundancy overhead is 40 percent, which means that 1.4 GB of disk space is required to
store 1 GB of data.
Express mode
This mode is designed to provide an additional software redundancy layer when deployed on virtual
hosts that share a common storage device that has built-in data redundancy.
At least six nodes are required. The system can survive a failure of one node without data loss. The
data redundancy overhead is 20 percent, which means that 1.2 GB of disk space is required to store 1
GB of data.
This mode assumes a smooth transition to the advanced mode.
Evaluation mode
This mode does not provide software-level redundancy; therefore it creates no data redundancy
overhead.
This mode is designed for product evaluation and assumes a mandatory transition to either the express or the advanced mode. For this purpose, we recommend deploying six virtual nodes, so that the system can later be switched to the express mode. The system cannot survive a node failure.
Do not use the evaluation mode in your production environment!
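To make the overhead figures above concrete, here is a minimal Python sketch that estimates how much raw disk space each mode consumes for a given amount of backup data. The dictionary and function names are illustrative; only the 1.4x, 1.2x, and 1.0x factors come from the mode descriptions above.

# Overhead factors stated above: advanced = 40%, express = 20%, evaluation = none.
OVERHEAD_FACTOR = {"advanced": 1.4, "express": 1.2, "evaluation": 1.0}

def raw_space_gb(data_gb, mode):
    """Estimate raw disk space (GB) needed to store data_gb of backup data."""
    return data_gb * OVERHEAD_FACTOR[mode]

for mode in ("advanced", "express", "evaluation"):
    print(mode, raw_space_gb(1000, mode))  # raw GB required for 1 TB of backup data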
4 Acronis Storage server roles
Acronis Storage is deployed on bare metal. It does not require a general-purpose operating system to
run on any of its nodes.
The Acronis Storage server roles are assigned to hard disk drives rather than to a node. A drive can
be assigned only one role. If a node has more than one disk drive, it can run more than one server
role. When a role is assigned to a disk drive, this disk is initialized and all of its data is deleted.
4.1 Metadata Server (MDS)
A metadata server (MDS) stores information about file fragments and the location of chunks that
make up these fragments. It is the most critical component of the system.
Several MDS nodes are needed to build a high-availability metadata cluster. A standard configuration
includes three metadata nodes: one master and two subordinates. The metadata is continuously
replicated from the master node to the subordinate nodes. If the master node fails, one of the
subordinate nodes is elected as the new master node.
We recommend that you dedicate two hard disk drives to the MDS role. A software RAID1 array is automatically created on these disks to increase MDS reliability.
A management component (MGMT) is installed with each MDS role. This component enables you to
use the Acronis Storage web console for system deployment, monitoring and management.
If the master MDS role fails over to a different node, the management component automatically starts on that node, so the web console remains available at all times.
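The toy Python sketch below mimics the failover behavior described above: three MDS nodes, one master and two subordinates, with the management component (MGMT) following the master. The election rule shown (promote the first surviving subordinate) is an assumption for illustration only, not the algorithm actually used by Acronis Storage.

class MdsCluster:
    """Illustrative model of a three-node MDS cluster with MGMT on the master."""

    def __init__(self, nodes):
        self.alive = list(nodes)      # e.g. ["mds1", "mds2", "mds3"]
        self.master = self.alive[0]   # one master; the others are subordinates

    def fail(self, node):
        """Simulate a node failure and re-elect the master if necessary."""
        self.alive.remove(node)
        if node == self.master and self.alive:
            # Promote a surviving subordinate; MGMT and the web console follow it.
            self.master = self.alive[0]

cluster = MdsCluster(["mds1", "mds2", "mds3"])
cluster.fail("mds1")
print(cluster.master)  # "mds2" -> the web console is now served from the new master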
4.2 Storage Server (STS)
The Storage Server (STS) role is responsible for storing data chunks.
4.3 Front-end Server (FES)
The Front-end Server (FES) role allows Acronis Backup Cloud backup agents to access the Acronis
Storage system for backup data transfer.
5 Recommended hardware configuration
Three nodes for the MDS+STS roles:
RAM: 64 GB DDR3 ECC
CPU: Dual Intel Xeon E5 (for example, Intel Xeon E5-2620 V2)
HDD: Six or more 4TB+ 7200RPM SATA HDDs (for example, Seagate SV35 or Seagate Megalodon).
Smaller disks (500GB+) can be used for the MDS roles.
HBA controller for disks (for example LSI 2308 HBA with IT Mode/Pass-through). Use of a RAID
controller is not recommended.
One network interface with 2-4 bonded (LACP IEEE 802.3ad) 1 Gbps adapters or a 10 Gbps
adapter.
2U-4U chassis (depending on the number of disk drives)
Redundant power supply unit (PSU)
Three nodes for the FES+STS roles:
RAM: 64 GB DDR3 ECC
CPU: Single Intel Xeon E5 (for example, Intel Xeon E5-2620 V2)
HDD: Six or more 4TB+ 7200RPM SATA HDDs (for example, Seagate SV35 or Seagate Megalodon).
Smaller disks (500GB+) can be used for the FES roles.
HBA controller for disks (for example LSI 2308 HBA with IT Mode/Pass-through). Use of a RAID
controller is not recommended.
Two network interfaces. Each interface requires 2-4 bonded (LACP IEEE 802.3ad) 1 Gbps adapters or a 10 Gbps adapter.
2U-4U chassis (depending on the number of disk drives)
Redundant power supply unit (PSU)
Two (or more) nodes for the STS role:
RAM: 32 GB DDR3 ECC
CPU: Single Intel Xeon E5 (for example, Intel Xeon E5-2620 V2)
HDD: Six or more 4TB+ 7200RPM SATA HDDs (for example, Seagate SV35 or Seagate Megalodon)
HBA controller for disks (for example LSI 2308 HBA with IT Mode/Pass-through). Use of a RAID
controller is not recommended.
One network interface with 2-4 bonded (LACP IEEE 802.3ad) 1 Gbps adapters or a 10 Gbps
adapter.
2U-4U chassis (depending on the number of disk drives)
Redundant power supply unit (PSU)
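As a rough sizing aid, the Python sketch below estimates the usable capacity of the example configuration above (three MDS+STS nodes, three FES+STS nodes, and two STS nodes, i.e. eight nodes with six 4 TB data drives each) under the advanced mode's 40 percent overhead. Treating every drive as a data drive is a simplification, since some disks are reserved for the MDS role, so the real figure will be somewhat lower.

def usable_capacity_tb(nodes, drives_per_node, drive_tb, overhead=1.4):
    """Rough usable capacity: total raw space divided by the redundancy overhead."""
    raw_tb = nodes * drives_per_node * drive_tb
    return raw_tb / overhead

# 8 nodes with six 4 TB drives each, advanced mode (1.4 GB of raw space per 1 GB of data)
print(round(usable_capacity_tb(nodes=8, drives_per_node=6, drive_tb=4)))  # ~137 TB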