
SQL Server 2019 Containers on Linux
Software Development Use Cases Using Dell EMC Infrastructure
White Paper

Abstract
This white paper demonstrates the advantages of using Microsoft SQL Server 2019 containers for an application development and testing environment that is hosted on a Dell EMC platform.

Dell Technologies Solutions
Copyright
The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software that is described in this publication requires an applicable software license. Copyright © 2019–2020 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Intel, the Intel logo, the Intel Inside logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries. Other trademarks may be trademarks of their respective owners. Published in the USA 07/20 White Paper H17857.1.
Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change without notice.

Contents

Executive summary
Use case overview
Supporting software technology
Dell EMC servers and storage
Use Case 1: Manual provisioning of a containerized dev/test environment
Use Case 2: Automated provisioning of a containerized dev/test environment
Conclusion
The road ahead
Appendix A: Solution architecture and component specifications
Appendix B: Container resource configuration

Executive summary

Business challenge

Implementing reliable transaction processing for large-scale systems is beyond the capability of many software developers. However, commercial relational database management system (RDBMS) products enable developers to create many applications that they otherwise could not. Although using an RDBMS solves many software development problems, one longstanding issue persists—how to ensure code and data consistency between the RDBMS and the application.
This challenge of managing the state of code and data is a particularly thorny one during the software development and testing (dev/test) life cycle. As code is added and changed in the application, testers must have a known state for the code and data in the database. For example, if a test is designed to add 10 customer accounts to an existing database and then test for the total number of customers, the team must ensure that they are starting with the same set of base customers every time. For larger applications with hundreds or thousands of tests, this activity becomes a challenge even for experienced teams.
Container technology enables development teams to quickly provision isolated applications without the traditional complexities. For many companies, to boost productivity and time to value, the use of containers starts with the departments that are focused on software development. The journey typically starts with installing, implementing, and using containers for applications that are based on the microservice architecture. In the past, integration between containerized applications and database services such as Microsoft SQL Server was clumsy at best and often introduced delays in the agile development process.

Solution overview

This solution shows how the use of SQL Server containers, Kubernetes, and the Dell EMC XtremIO X2 Container Storage Interface (CSI) plug-in transforms the development process. Using orchestration and automation, developers can self-provision a SQL Server database, increasing productivity and saving substantial time.

Document purpose

We chose to focus on the software dev/test use case because many analysts agree that this market represents the most immediate opportunity to solve significant business challenges using SQL Server on containers. The current method for developing SQL Server-powered applications consists of a hodge-podge of platforms and tools. The process is overly complex and prone to creating schedule delays and cost overruns. Any path forward that has advantages for IT professionals and provides a more heterogeneous and familiar environment for software developers will likely gain significant adoption with minimal friction or risk.
In this paper, we expand on information that is available from Microsoft and the SQL Server ecosystem, providing two use cases that highlight the dev/test benefits that SQL Server containers enable. In addition, we explore the intersection of SQL Server 2019 Docker containers, the Kubernetes implementation of the CSI specification, and products and services from Dell Technologies. The use cases that we present are designed to show how developers and others can easily use SQL Server containers with the XtremIO X2 storage array. Using the XtremIO X2 CSI plug-in enables comprehensive automation and orchestration from server through storage.

Audience

This white paper is for IT professionals who are interested in learning about the benefits of implementing SQL Server containers in a dev/test environment.

Terminology

The following table defines some of the terms that are used in this white paper:
Table 1. Terminology

Container: An isolated object that includes an application and its dependencies. Programs running on Docker are packaged as Linux containers. Because containers are a widely accepted standard, many prebuilt container images are available for deployment on Docker.

Cluster: A Kubernetes cluster is a set of machines that are known as nodes. One node controls the cluster and is designated as the master node; the remaining nodes are worker nodes. The Kubernetes master is responsible for distributing work among the workers and for monitoring the health of the cluster.

Node: A node runs containerized applications. It can be either a physical machine or a virtual machine. A Kubernetes cluster can contain a mixture of physical machine and virtual machine nodes.

Pod: A pod is the minimum deployment unit of Kubernetes. It is a logical group of one or more containers and associated resources that are needed to run an application. Each pod runs on a node, which can run one or more pods. The Kubernetes master automatically assigns pods to nodes in the cluster.

We value your feedback

Dell Technologies and the authors of this document welcome your feedback on the solution and the solution documentation. Contact the Dell Technologies Solutions team by email or provide your comments by completing our documentation survey.

Author: Sam Lucido

Contributors: Phil Hummel, Anil Papisetty, Sanjeev Ranjan, Mahesh Reddy, Abhishek Sharma, Karen Johnson

Use case overview

Our use cases demonstrate the advantages of using Microsoft SQL Server 2019 containers for an application dev/test environment that is hosted on a Dell EMC infrastructure platform. The test environment for both use cases consisted of three Dell EMC PowerEdge R740 servers and an XtremIO X2 all-flash storage array that were hosted in our labs. For an architecture diagram and details about the solution configuration, see Appendix A: Solution architecture and component specifications.
The use cases demonstrate how Docker, Kubernetes, and the XtremIO X2 CSI plug-in accelerate the SQL Server development life cycle. With this solution, developers can easily provision SQL Server container databases without the complexities that are associated with installing the database and provisioning storage.

Use Case 1 overview

In the first use case, we start the way many companies begin to work with containers—by installing Docker and establishing a functioning development environment. Our goal is to quickly provision a SQL Server container and then attach a copy of a sample database—the popular AdventureWorks database from Microsoft—using a Dell EMC XtremIO X2 storage array. With the SQL AdventureWorks container running, we show how to access the database using a web browser to simulate a typical enterprise web application. Then we remove the container and clean up the environment to free resources for the next sprint.
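
The general shape of this workflow can be sketched with a few Docker commands. The following is a minimal illustration rather than the exact script from our lab: the image tag, password, host mount path, and the logical file names in the RESTORE statement are placeholders.

# Pull a SQL Server 2019 container image for Linux (tag is illustrative).
docker pull mcr.microsoft.com/mssql/server:2019-latest

# Start the container, publishing port 1433 and mounting a host path that is
# backed by an XtremIO X2 volume so that database files persist outside the container.
docker run -d --name sql2019-dev \
  -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<StrongPassword>" \
  -p 1433:1433 \
  -v /mnt/xtremio/sqldata:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2019-latest

# Restore the AdventureWorks sample database from a backup copied into the mounted volume.
# The logical file names in the MOVE clauses depend on the backup; list them first with
# RESTORE FILELISTONLY if they differ.
docker exec -it sql2019-dev /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<StrongPassword>' \
  -Q "RESTORE DATABASE AdventureWorks FROM DISK = '/var/opt/mssql/backup/AdventureWorks.bak' \
      WITH MOVE 'AdventureWorks' TO '/var/opt/mssql/data/AdventureWorks.mdf', \
           MOVE 'AdventureWorks_log' TO '/var/opt/mssql/data/AdventureWorks_log.ldf'"

At the end of the sprint, docker rm -f sql2019-dev removes the container; the data on the mounted volume can be retained or cleaned up separately.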

Use Case 2 overview

The second use case continues the containerized application journey by using the XtremIO X2 CSI plug-in for Kubernetes to achieve a greater level of automation and ease of management for dev/test environments. Here we move beyond manually provisioning storage to automated provisioning. Using Kubernetes, our developer controls the provisioning of the SQL container from a local private registry and the database storage from the XtremIO X2 array. After working on the AdventureWorks database application, the developer protects the updated state of the database code and data by using Kubernetes to take an XtremIO Virtual Copies snapshot of the database. After a round of destructive testing, the developer then restores the database to the preserved state by using Kubernetes and XtremIO Virtual Copies. A technical writer provisions the modified database to document the code changes, and the developer removes the containers and cleans up the environment.
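
To give a sense of how the snapshot and restore steps look from the developer's side, the following sketch uses the standard Kubernetes volume snapshot objects. The API version, snapshot class, claim names, and sizes are illustrative assumptions; the exact objects depend on the Kubernetes release and on how the CSI plug-in is configured in your environment.

# Take a point-in-time copy of the database volume; with the XtremIO X2 CSI
# plug-in, this corresponds to an XtremIO Virtual Copies snapshot on the array.
cat <<'EOF' | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: adventureworks-snap
spec:
  volumeSnapshotClassName: xtremio-snapclass
  source:
    persistentVolumeClaimName: mssql-data-claim
EOF

# After destructive testing, restore by creating a new claim whose data source
# is the snapshot, then point the SQL Server pod at the new claim.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data-restored
spec:
  storageClassName: medium
  accessModes: [ "ReadWriteOnce" ]
  dataSource:
    name: adventureworks-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  resources:
    requests:
      storage: 1000Gi
EOF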

Use case comparison summary

The following table provides a high-level comparison of the two use cases:

Table 2. Use-case comparison

Provisioning a SQL Server container
  Use Case 1 (Docker only): Manual, using script
  Use Case 2 (Kubernetes and XtremIO X2 CSI plug-in): Self-service (full automation)

Provisioning an AdventureWorks database
  Use Case 1 (Docker only): Storage and operating system administrator tasks
  Use Case 2 (Kubernetes and XtremIO X2 CSI plug-in): Self-service (full automation)

Removing the container and persistent storage
  Use Case 1 (Docker only): Manual, using script
  Use Case 2 (Kubernetes and XtremIO X2 CSI plug-in): Self-service (full automation)

Supporting software technology

This section summarizes the important technology components of this solution.

Container-based virtualization

Two primary methods of enabling software applications to run on virtual hardware are through the use of virtual machines (VMs) and a hypervisor, and through container-based virtualization—also known as operating system virtualization or containerization.
The older and more pervasive virtualization method, which was first developed by Burroughs Corporation in the 1950s, is through the use of VMs and a hypervisor. That method was replicated with the commercialization of IBM mainframes in the early 1960s. The primary virtualization method that is used by platforms such as IBM VM/CMS, VMware ESXi, and Microsoft Hyper-V starts with a hypervisor layer that abstracts the physical components of the computer. The abstraction enables sharing of the components by multiple VMs, each running a guest operating system. A more recent development is container-based virtualization, where a single host operating system supports multiple processes that are running as virtual applications.
The following figure contrasts VM-based virtualization with container-based virtualization. In container-based virtualization, the combination of the guest operating system components and any isolated software applications constitutes a container running on the host server, as indicated by the App 1, App 2, and App 3 boxes.
Figure 1. Primary virtualization methods
Both types of virtualization were developed to increase the efficiency of computer hardware investments by supporting multiple users and applications in parallel. Containerization further improves IT operations productivity by simplifying application portability. Application developers most often work outside the server environments that their programs will run in. To minimize conflicts in library versions, dependencies, and configuration settings, developers must re-create the production environment multiple times for development, testing, and preproduction integration. IT professionals have found containers easier to deploy consistently across multiple environments because the core operating system can be configured independently of the application container.

Docker containers

Concepts that led to the development of container-based virtualization began to emerge when the UNIX operating system became publicly available in the early 1970s. Container technology development expanded on many fronts until 2013, when Solomon Hykes released the Docker code base to the open-source community. The Docker ecosystem is made up of the container runtime environment along with tools to define and build application containers and to manage the interactions between the runtime environment and the host operating system.
Two Docker runtime environments—the Community Edition and the Enterprise Edition—are available. The Community Edition is free and comes with best-effort community support. For our use-case testing, we used the Enterprise Edition, which is fitting for most organizations that are using Docker in production or business-critical situations. The Enterprise Edition requires purchasing a license that is based on the number of cores in the environment. Organizations likely will have licensed and nonlicensed Docker runtimes and should implement safeguards to ensure that the correct version is deployed in environments where support is critical.
A Docker registry is supporting technology that is used for storing and delivering Docker images from a central repository. Registries can be public, such as Docker Hub, or private. Docker users install a local registry by downloading from Docker Hub a compressed image that contains all the necessary container components that are specific to the guest operating system and application. Depending on Internet connection speed and availability, a local registry can mitigate many of the challenges that are associated with using a public registry, including high latency during image downloading. Docker Hub does provide the option for users to upload private images. However, a local private registry might offer both better security and less latency for deployment.
Private registries can reside in the cloud or in the local data center. Provisioning speed and provisioning frequency are two factors to consider when determining where to locate a private registry. Private registries that are hosted in the data center where they will be used benefit from the speed and reliability of the LAN, which means images can be provisioned quickly in most cas es. For our use cases, we implemented a local private registry to enable fast provisioning without the complexities and cost of hosting in the cloud.
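
As a rough sketch of that approach (the registry port and image tags are placeholders, and a production registry would also need TLS and authentication configured):

# Run the standard Docker registry image as a local private registry.
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Tag the SQL Server 2019 image against the local registry and push it, so that
# later pulls come over the LAN instead of from a public registry.
docker pull mcr.microsoft.com/mssql/server:2019-latest
docker tag mcr.microsoft.com/mssql/server:2019-latest localhost:5000/mssql/server:2019-latest
docker push localhost:5000/mssql/server:2019-latest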

Kubernetes

Modern applications—primarily microservices that are packaged with their dependencies and configurations—are increasingly being built using container technology. Kubernetes, also known as K8s, is an open-source platform for deploying and managing containerized applications at scale. The Kubernetes container orchestration system was open-sourced by Google in 2014.
The following figure shows the Kubernetes architecture:
Figure 2. Kubernetes architecture
Kubernetes features for container orchestration at scale include:
Auto-scaling, replication, and recovery of containers
Intra-container communication, such as IP sharing
A single entity—a pod—for creating and managing multiple containers
A container resource usage and performance analysis agent, cAdvisor
Network pluggable architecture
Load balancing
Health check service
In a simulated dev/test scenario in Use Case 2, we used the Kubernetes container orchestration system to deploy two Docker containers in a pod.
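
A pod manifest for that scenario has roughly the following shape. This is a simplified illustration: the image references assume the local private registry described earlier, the web front end is a placeholder for the AdventureWorks application container, and the claim name must match a persistent volume claim that already exists in the cluster.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sql-dev-pod
  labels:
    app: sql-dev
spec:
  containers:
  - name: mssql
    image: registry.local:5000/mssql/server:2019-latest    # SQL Server 2019 image from the local registry
    ports:
    - containerPort: 1433
    env:
    - name: ACCEPT_EULA
      value: "Y"
    - name: MSSQL_SA_PASSWORD
      value: "<StrongPassword>"
    volumeMounts:
    - name: mssql-data
      mountPath: /var/opt/mssql
  - name: adventureworks-web
    image: registry.local:5000/adventureworks-web:latest   # placeholder for the web application container
    ports:
    - containerPort: 80
  volumes:
  - name: mssql-data
    persistentVolumeClaim:
      claimName: mssql-data-claim
EOF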

Kubernetes Container Storage Interface specification

The Kubernetes CSI specification was developed as a standard for exposing arbitrary block and file storage systems to containerized workloads through an orchestration layer. Kubernetes previously provided a powerful volume plug-in mechanism that was part of the core Kubernetes code and shipped with the core Kubernetes binaries. Before the adoption of CSI, however, adding support for new volume plug-ins to Kubernetes when the code was “in-tree” was challenging. Vendors wanting to add support for their storage system to Kubernetes, or even fix a bug in an existing volume plug-in, were forced to align with the Kubernetes release process. In addition, third-party storage code could cause reliability and security issues in core Kubernetes binaries. The code was often difficult—or sometimes impossible—for Kubernetes maintainers to test and maintain.
The adoption of the CSI specification makes the Kubernetes volume layer truly extensible. Using CSI, third-party storage providers can write and deploy plug-ins to expose new storage systems in Kubernetes without ever having to touch the core Kubernetes code. This capability gives Kubernetes users more storage options and makes the system more secure and reliable. Our Use Case 2 highlights these advantages by using the Dell EMC XtremIO X2 CSI plug-in to show the benefits of Kubernetes storage automation.

Kubernetes storage classes

We do not directly use Kubernetes storage classes in either of the use cases that we describe in this paper; however, Kubernetes storage classes are closely related to CSI and the XtremIO X2 CSI plug-in. Kubernetes provides administrators an option to describe various levels of storage features and differentiate them by quality-of-service (QoS) levels, backup policies, or other storage-specific services. Kubernetes itself is unopinionated about what these classes represent. In other management systems, this concept is sometimes referred to as storage profiles.
The XtremIO X2 CSI plug-in creates three storage classes in Kubernetes during installation. The XtremIO X2 storage classes, which can be viewed from the Kubernetes dashboard, are predefined. These storage classes enable users to specify the amount of bandwidth to be made available to persistent storage that is created on the array. The following table shows the predefined storage classes:
Table 3. XtremIO X2 CSI predefined storage classes
High: 15 MB/s per GB
Medium: 5 MB/s per GB
Low: 1 MB/s per GB
The size of the requested storage volume and the storage class together define the amount of bandwidth that is made available. For example, bandwidth for a 1,000 Gi storage volume configured with the medium storage class is computed as follows:

Storage size (1,000 Gi) x storage class (medium at 5 MB/s per GB) = Total bandwidth (5,000 MB/s)

Note: Gi indicates power-of-two equivalents—1024³ in this case.
Using the XtremIO X2 predefined storage classes helps to efficiently scale an environment by defining performance limits. For example, a storage class of low for a pool of 100 containers limits containerized applications so that they consume no more than their allocated bandwidth. Such limitations help to maintain more reliable storage performance across the entire environment.
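
To make the relationship concrete, a persistent volume claim might select one of the predefined classes as follows. This sketch assumes the plug-in registers the class under the name medium; verify the actual storage class names in your environment with kubectl get storageclass.

# Request a 1,000 Gi volume in the medium class; per the formula above, the array
# limits its bandwidth to roughly 1,000 x 5 MB/s = 5,000 MB/s.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data-claim
spec:
  storageClassName: medium
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1000Gi
EOF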
Using QoS-based storage classes helps balance the resources that are consumed by containerized applications and the total amount of storage bandwidth. For scenarios that require a more customized set of storage classes than the one that is created by the XtremIO X2 CSI plug-in, you can configure XtremIO X2 QoS in Kubernetes. In creating a custom QoS policy, you can define maximum bandwidth per gigabyte or, alternatively, maximum IOPS. You could also define a burst percentage, which is the amount of bandwidth or IOPS above the maximum limit that the container can use for temporary performance.
The benefits of using predefined storage classes and customized QoS policies include:
Guaranteed service for critical applications
Eliminating “noisy neighbor” problems by placing performance limits on nonproduction containers

SQL Server and Docker containers on Linux

In recent years, Microsoft has been expanding its portfolio of offerings that are either compatible with or ported to the Linux operating system. For example, Microsoft released the first version of its SQL Server RDBMS that was commercially available on Linux in November 2016. More recently, with its SQL Server 2017 release, Microsoft delivered SQL Server on Docker containers. The next generation of SQL Server for Linux containers is in development, as part of SQL Server 2019, with release scheduled for the fall of 2019.
Microsoft is currently developing SQL Server implementations of Linux containers for both Linux and Windows hosts as well as Windows containers for Windows. The supported features and road maps for these implementations vary, so carefully verify whether a product will meet your requirements. For this white paper, we worked exclusively with SQL Server containers for Linux. We recommend that you check with Dell Technologies to ensure that the latest certified CSI plug-ins are used in your Kubernetes environment.
Microsoft first introduced support for containerized Linux images in SQL Server 2017. According to Microsoft, one of the primary use cases for customers who are adopting SQL Server containers is for local dev/test in DevOps pipelines, with deployment handled by Kubernetes. SQL Server in containers offers many advantages for DevOps because of its consistent, isolated, and reliable behavior across environments, ease of use, and ease of starting and stopping. Applications can be built on top of SQL Server containers and run without being affected by the rest of the environment. This isolation makes SQL Server in containers ideal for test deployment scenarios as well as DevOps processes.

Dell EMC servers and storage

PowerEdge servers

Dell EMC PowerEdge servers provide a scalable business architecture, intelligent automation, and integrated security for high-value data-management and analytics workloads. The PowerEdge portfolio of rack, tower, and modular server infrastructure, based on open-standard x86 technology, can help you quickly scale from the data center to the cloud. PowerEdge servers deliver the same user experience and the same integrated management experience across all our product options; thus, you have one set of runbooks to patch, manage, update, refresh, and retire all your assets.
For our use cases, we chose the PowerEdge R740 server. The R740 is a 2U form factor that houses up to two Intel Xeon Scalable processors, each with up to 28 compute cores. It has support for the most popular enterprise-deployed versions of Linux—Canonical Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server. The R740 supports a range of memory configurations to satisfy the most demanding database and analytic workloads. It includes 24 slots for registered ECC DDR4 load-reduced DIMMs (LRDIMMs) with speeds up to 2,933 MT/s and has expandable memory up to 3 TB. On-board storage can be configured with front drive bays holding up to 16 x 2.5 in. SAS/SATA SSDs, for a maximum of 122.88 TB, or up to 8 x 3.5 in. SAS/SATA drives, for a maximum of 112 TB.
For details about the PowerEdge server configuration that we used for our use cases, see Appendix A: Solution architecture and component specifications.

XtremIO X2 storage

The Dell EMC XtremIO X2 all-flash array is an ideal storage platform for running online transaction processing (OLTP), online analytical processing (OLAP), or mixed workloads. It delivers high IOPS, ultrawide bandwidth, and consistent submillisecond latency for databases of all sizes.
Note: For details about designing a SQL Server solution using XtremIO X2 all-flash storage with PowerEdge servers, see Dell EMC Ready Solutions for Microsoft SQL: Design for Dell EMC XtremIO. The guide provides recommended design principles, configuration best practices, and validation with both Windows Server 2016 and Red Hat Enterprise Linux 7.6 running instances of SQL Server 2017. In the solution testing, the XtremIO X2 array delivered sub-500-microsecond latencies while supporting 275,000-plus IOPS with 72 flash drives, compared to a rated 220,000 achievable IOPS per the XtremIO X2 specification sheet. The test engineers found no noticeable increase in latency even when the XtremIO X2 array exceeded the total expected IOPS.