
Reference guide
SUSE Enterprise Storage on HPE Apollo 4200/4500 System Servers
Choosing HPE density-optimized servers as SUSE Enterprise Storage building blocks
September 1, 2017
Contents
Executive summary
Overview
  Business problem
  Challenges of scale
  Why SUSE Enterprise Storage?
  SUSE Enterprise Storage use cases
Solution introduction
  SUSE Enterprise Storage architecture—powered by Ceph
Solution
  SUSE Enterprise Storage v4
  Hewlett Packard Enterprise value for a Ceph storage environment
  SUSE value
  Server platforms
Configuration guidance
  General configuration recommendations
  SSD journal usage
  Choosing hardware
Bill of materials
  1x HPE Apollo 4510 Gen9
  1x Apollo 4200 Gen9 System as block storage servers
  1x ProLiant DL360 Gen10
Summary
Glossary
For more information
Executive summary
Traditional file and block storage architectures are being challenged by the explosive growth of data, fueled by the expansion of Big Data, unstructured data, and the pervasiveness of mobile devices. Emerging open source storage architectures such as Ceph can help businesses deal with these trends, providing cost-effective storage solutions that keep up with capacity growth while providing service-level agreements (SLAs) to meet business and customer requirements.
Enterprise-class storage subsystems are designed to address the latency requirements of business-critical transactional data. However, they may not be an optimal solution for unstructured data and backup/archival storage. In these cases, enterprise-class reliability is still required, but massive scale-out capacity and lower investment drive solution requirements. Additionally, modern businesses must provide data access from anywhere at any time, through a variety of legacy and modern access protocols.
Ceph software defined storage is designed to run on industry-standard server platforms, offering lower infrastructure costs and scalability beyond the capacity points of typical file server storage subsystems. HPE Apollo 4000 series hardware provides a comprehensive and cost-effective storage capacity building block for Ceph-based solutions.
Ceph has its code roots in the open source community. When considering an open source-based solution, most enterprise environments will require a strong support organization and a vision to match or exceed the capabilities and functionality they currently experience with their traditional storage infrastructure. Using SUSE Enterprise Storage to build enterprise-ready Ceph solutions fills both of these needs with a world-class support organization and a leadership position within the Ceph community. SUSE Enterprise Storage helps ensure customers are able to deploy Ceph software defined storage on industry-standard x86 server systems, to serve their block, file, and object needs.
Hewlett Packard Enterprise hardware combined with SUSE Enterprise Storage delivers an open source unified block, file, and object storage solution that:
• Offers practical scaling from one petabyte to well beyond a hundred petabytes of data
• Lowers upfront solution investment and total cost of ownership (TCO) per gigabyte
• Provides a single software-defined storage (SDS) cluster for both object and low- to mid-range performance block storage
• Uses open source, minimizing concerns about proprietary software vendor lock-in
• Provides a better TCO for operating and maintaining the hardware than “white box” servers
• Can be configured to offer low-cost, low-performance block and file storage in addition to object storage
HPE hardware gives you the flexibility to choose the configuration building blocks that are right for your business needs. The HPE Apollo 4000 Gen9 server systems are most suited for the task and allow you to find the right balance between performance, cost-per-gigabyte, building block size, and failure domain size.
Target audience
This paper is written for administrators and solution architects who deploy software defined storage solutions within their data centers. This paper assumes knowledge of enterprise data center administration challenges and familiarity with data center configuration and deployment best practices, primarily with regard to storage systems. It also assumes the reader appreciates both the challenges and benefits open source solutions can bring.
Overview
Business problem
Businesses are looking for better and more cost-effective ways to manage their exploding data storage requirements. In recent years, the amount of storage required for businesses to meet increased data retention requirements has increased dramatically. Cost-per-gigabyte and ease of retrieval are important factors for choosing a solution that can scale quickly and economically over many years of continually increasing capacities and data retention requirements.
Organizations that have been trying to keep up with data growth using traditional file and block storage solutions are finding that the complexity of managing and operating them has grown significantly—as have the costs of storage infrastructure. Storage hosting on a public cloud may not meet cost or data control requirements in the long term. The performance and control of on-premises equipment still offers real business advantages.
Traditional infrastructure is costly to scale massively and offers extra performance features that are not needed for cold or warm data. Ceph software defined storage on industry-standard infrastructure is optimized for this use case and is an ideal supplement to existing infrastructure by creating a network-based active archive repository. Offloading archive data to Ceph—an open source storage platform that stores data on a single distributed computer cluster—can reduce overall storage costs while freeing existing capacity for applications that require traditional infrastructure capabilities.
Challenges of scale
There are numerous difficulties around storing unstructured data at massive scale:
Cost
• Unstructured and archival data tend to be written only once or become stagnant over time. This stale data takes up valuable space on expensive block and file storage.
• Tape is an excellent choice for achieving the lowest cost per GB but suffers extremely high latencies. Unstructured and archival data can sit dormant for long stretches of time and yet need to be available in seconds.
Scalability
• Unstructured deployments can accumulate billions of objects and petabytes of data. File system limits on the number and size of files and block storage limits on the size of presented blocks become significant deployment challenges.
• Additionally, block and file storage methods suffer from metadata bloat at a massive scale, resulting in a large system that cannot meet SLAs.
Availability and manageability
• Enterprise storage is growing from smaller-scale, single-site deployments to geographically-distributed, scale-out configurations. With this growth, the difficulty of keeping all the data safe and available is also growing.
• Many existing storage solutions are a challenge to manage and control at massive scale. Management silos and user interface limitations make it harder to deploy new storage into business infrastructure.
Why SUSE Enterprise Storage?
• Leveraging industry-standard servers means the lowest possible cost for a disk-based system, with a building block your organization already understands
• SUSE Enterprise Storage provides all the benefits of Ceph with the addition of a world-class support organization
• Designed to scale from one petabyte to well beyond a hundred petabytes of data
• A flat namespace and per-object metadata mean little space is wasted on overhead, and the interface scales efficiently to billions of objects
• A single SUSE Enterprise Storage cluster can be configured to meet the requirements of many different storage needs at once
• It is designed to be deployed, accessed, and managed from any location
SUSE Enterprise Storage use cases
OpenStack® cloud storage
SUSE Enterprise Storage integrates well into an OpenStack cluster. A typical setup uses block storage behind OpenStack Cinder and Ceph object storage in lieu of Swift. Ceph can perform the dual role of ephemeral virtual machine storage for OpenStack Nova and image storage for OpenStack Glance. For security, OpenStack Keystone can be configured to provide authentication to the Ceph cluster. In this setup, Ceph can still be used as block and/or object storage for non-OpenStack applications.
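Under the covers, the Cinder and Glance drivers create and manipulate RBD images in a Ceph pool. The minimal Python sketch below, using the upstream python-rados and python-rbd bindings, shows the kind of block-volume operation those drivers perform; the pool name "volumes", the image name, and the ceph.conf path are illustrative assumptions rather than values from this reference configuration.

import rados
import rbd

# Connect to the cluster with the standard client configuration and keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path is an assumption
cluster.connect()
try:
    ioctx = cluster.open_ioctx('volumes')  # hypothetical pool for block volumes
    try:
        # Create a 10 GiB thin-provisioned RBD image (a block volume).
        rbd.RBD().create(ioctx, 'demo-volume', 10 * 1024**3)
        # Write a little data at offset 0, as an attached guest eventually would.
        with rbd.Image(ioctx, 'demo-volume') as image:
            image.write(b'hello from a Ceph block device', 0)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()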
Content repository
For a company that can’t or does not want to use a publicly hosted content repository like Box, Dropbox, or Google Drive, SUSE Enterprise Storage is a low-cost private option. The Ceph object store can be configured to meet appropriate latency and bandwidth requirements for whatever the business needs. The widespread S3 and Swift REST interfaces can both be used to access data, which means many existing tools can be used and new tools do not require significant development work.
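To illustrate how little client-side work the S3 interface requires, the following sketch stores and retrieves an object through the RADOS Gateway using the stock boto3 SDK. The endpoint URL, bucket name, and credentials are placeholders, and while port 7480 is a common RGW default, the actual endpoint depends on the deployment.

import boto3

# Point a standard S3 client at the RADOS Gateway instead of AWS.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:7480',  # placeholder RGW endpoint
    aws_access_key_id='RGW_ACCESS_KEY',          # placeholder credentials
    aws_secret_access_key='RGW_SECRET_KEY',
)

s3.create_bucket(Bucket='content-repo')
s3.put_object(Bucket='content-repo', Key='docs/whitepaper.pdf',
              Body=b'example payload')
obj = s3.get_object(Bucket='content-repo', Key='docs/whitepaper.pdf')
print(obj['Body'].read())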
Content distribution origin server
Content Distribution Networks (CDNs) come in both private and public flavors. A business hosting their own, private CDN controls both the origin servers and edge servers. A business using a public CDN must use the content provider’s edge servers but may choose to use a private origin server. SUSE Enterprise Storage object interfaces make an excellent origin in both cases. At scale, SUSE Enterprise Storage offers a lower TCO versus closed source object storage solutions or a content provider’s origin servers.
Video archive
As video surveillance use grows in commercial, government, and private use cases, the need for low-cost, multi-protocol storage is growing rapidly. HPE hardware with SUSE Enterprise Storage provides an ideal target platform for these streams, as the various interfaces (iSCSI, S3, and Swift) service a wide array of applications. The added ability to provide a write-back cache tier enables the system to also service high-performance, short-term streams where only a percentage of requests actually end up being served from the long-term archive.
Backup target
Most, if not all, modern backup applications provide multiple disk-based target mechanisms. These applications are able to leverage the distributed storage technology provided by SUSE Enterprise Storage as a disk backup device. The advantages of this architecture include high-performance backups, quick restores without loading tape media, and integration into the multi-tier strategy most customers use today. The economics of HPE servers running SUSE Enterprise Storage provide a superior TCO compared to traditional storage for these environments.
Solution introduction
Ceph supports both native and traditional client access. The native clients are aware of the storage topology and communicate directly with the storage daemons, resulting in horizontally scaling performance. Non-native protocols, such as iSCSI, S3, and NFS, require the use of gateways. These gateways can scale horizontally using load-balancing techniques.
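As a sketch of what native access looks like, the snippet below uses the upstream python-rados binding (librados) to write and read an object directly against the OSDs, with no gateway in the path. The configuration path and pool name are assumptions for illustration.

import rados

# librados learns the cluster map from the monitors and then talks to the
# OSD daemons directly, which is why native clients scale horizontally.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path is an assumption
cluster.connect()
try:
    ioctx = cluster.open_ioctx('demo-pool')  # hypothetical pool name
    try:
        ioctx.write_full('greeting', b'stored via the native RADOS client')
        print(ioctx.read('greeting'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()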
SUSE Enterprise Storage architecture—powered by Ceph
SUSE Enterprise Storage provides unified block, file, and object access based on Ceph. Ceph is a distributed storage solution designed for scalability, reliability, and performance. A critical component of Ceph is RADOS, its distributed object store. RADOS enables a number of storage nodes to function together to store and retrieve data from the cluster using object storage techniques. The result is a storage solution that is abstracted from the hardware.
Figure 1. Ceph architecture diagram
Cluster roles
There are three primary roles in the SUSE Enterprise Storage cluster covered by this sample reference configuration:
• OSD host: A Ceph server storing object data. Each OSD host runs several instances of the Ceph OSD daemon process. Each process interacts with one Object Storage Disk (OSD), and for production clusters there is a 1:1 mapping of OSD daemon to logical volume. The default file system used on each logical volume is XFS, although Btrfs is also supported.
• Monitor (MON): Maintains maps of the cluster state, including the monitor map, the OSD map, the placement group map, and the CRUSH map. Ceph maintains a history (called an “epoch”) of each state change in the Ceph monitors, Ceph OSD daemons, and placement groups (PGs). Monitors are expected to maintain quorum to keep an updated cluster state record.
• Administrator: The Salt master node, which hosts openATTIC, the central management system that supports the cluster.
• RADOS Gateway (RGW): An object storage interface that provides applications with a RESTful gateway to Ceph storage clusters. The RADOS Gateway supports two interfaces, S3 and Swift, which support a large subset of their respective APIs as implemented by Amazon and OpenStack Swift.
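For clients that already speak the OpenStack Swift API, the same gateway can be reached with the python-swiftclient library, as in this illustrative sketch. The authentication URL (RGW's Swift-compatible endpoint), user, and key are placeholders that depend on how the gateway users were created.

import swiftclient

# Authenticate against the RADOS Gateway's Swift-compatible endpoint.
conn = swiftclient.Connection(
    authurl='http://rgw.example.com:7480/auth/v1.0',  # placeholder endpoint
    user='demo:swift',                                # placeholder subuser
    key='SWIFT_SECRET_KEY',                           # placeholder key
)

conn.put_container('archive')
conn.put_object('archive', 'notes.txt', contents=b'object via the Swift API')
headers, body = conn.get_object('archive', 'notes.txt')
print(body)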
A minimum SES v4 cluster should contain:
• One administrator node (typically a ProLiant DL360 server)
• Three or more MON nodes (typically ProLiant DL360 servers)
• Three or more OSD nodes (recommended: Apollo 4000 servers)
• One or more RGW nodes (typically ProLiant DL360 servers)
• Optional: one or more iSCSI gateway nodes (ProLiant DL360 servers)
Density-optimized Apollo 4000 servers are ideal for use as the bulk storage OSD nodes. Ceph supports mixing Apollo 4000 server types and generations, enabling seamless growth with current technologies.
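Once these roles are deployed, cluster state can be checked from any node that holds an admin keyring. The usual tool is the ceph command line, but the same monitor interface is reachable from the python-rados binding, as in this sketch; the configuration path is an assumption.

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path is an assumption
cluster.connect()
try:
    # Ask the monitors for overall health, roughly what `ceph health` reports.
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({'prefix': 'health', 'format': 'json'}), b'')
    print(json.loads(outbuf))

    # Aggregate capacity figures for the whole cluster.
    print(cluster.get_cluster_stats())
finally:
    cluster.shutdown()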
Keeping data safe
SUSE Enterprise Storage brings Ceph’s flexibility to bear by supporting data replication as well as erasure coding. Erasure coding mathematically encodes data into a number of chunks that can be reconstructed from partial data into the original object. This is more space efficient than replication on larger objects, but it adds latency and is more computationally intensive. The overhead of erasure coding makes it space inefficient for smaller objects, and block storage requires a replicated cache tier to utilize it. As such, erasure coding is recommended for capacity efficiency, whereas replication is most appropriate for lower capacity block storage and small objects.
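As a concrete illustration of the space trade-off (the numbers are chosen for the example, not as a sizing recommendation): a common 4+2 erasure-coding profile and 3x replication both survive the loss of two devices, yet their raw-capacity overhead differs considerably.

# Raw capacity consumed per usable byte, for illustration only.
def replication_overhead(copies):
    return float(copies)        # e.g., 3 copies -> 3.0x raw per usable byte

def erasure_overhead(k, m):
    return (k + m) / k          # k data chunks plus m coding chunks

print(replication_overhead(3))  # 3.0x raw capacity, tolerates 2 lost copies
print(erasure_overhead(4, 2))   # 1.5x raw capacity, tolerates 2 lost chunks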
Putting data on hardware
One of the key differentiating factors between different object storage systems is the method used to determine where data is placed on hardware. Ceph calculates data locations using a deterministic algorithm called Controlled Replication Under Scalable Hashing (CRUSH). CRUSH uses a set of configurable rules and placement groups (PGs) in this calculation. Placement groups constrain where data may be stored and are laid out so that data remains resilient to hardware failure.
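The two-step mapping can be sketched as follows; the hash used here is only a stand-in for Ceph's internal stable hash, and the CRUSH step is summarized rather than implemented, so treat this purely as a conceptual illustration.

import hashlib

def object_to_pg(object_name, pg_num):
    # Ceph hashes the object name (with its own stable hash, not MD5) and
    # reduces it modulo the pool's placement group count.
    digest = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return digest % pg_num

pg_id = object_to_pg('my-object', pg_num=1024)
# CRUSH then maps the placement group to an ordered set of OSDs according to
# the configured rules (failure domains, replica counts, bucket weights).
print(f'my-object -> PG {pg_id} -> OSDs chosen by CRUSH')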