
SQL Server 2019 Containers on Linux

Software Development Use Cases Using Dell EMC Infrastructure

July 2020

H17857.1

White Paper

Abstract

This white paper demonstrates the advantages of using Microsoft SQL Server 2019 containers for an application development and testing environment that is hosted on a Dell EMC platform.

Dell Technologies Solutions

Copyright

The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software that is described in this publication requires an applicable software license.

Copyright © 2019–2020 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Intel, the Intel logo, the Intel Inside logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries. Other trademarks may be trademarks of their respective owners.

Published in the USA 07/20 White Paper H17857.1.

Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change without notice.


Contents


Executive summary .......................................................... 4
Use case overview ........................................................... 5
Supporting software technology .............................................. 6
Dell EMC servers and storage ............................................... 11
Use Case 1: Manual provisioning of a containerized dev/test environment ... 12
Use Case 2: Automated provisioning of a containerized dev/test environment  19
Conclusion ................................................................. 28
The road ahead ............................................................. 29
Appendix A: Solution architecture and component specifications ............ 30
Appendix B: Container resource configuration .............................. 34


Executive summary

Business challenge

Implementing reliable transaction processing for large-scale systems is beyond the capability of many software developers. However, commercial relational database management system (RDBMS) products enable developers to create many applications that they otherwise could not. Although using an RDBMS solves many software development problems, one longstanding issue persists—how to ensure code and data consistency between the RDBMS and the application.

This challenge of managing the state of code and data is a particularly thorny one during the software development and testing (dev/test) life cycle. As code is added and changed in the application, testers must have a known state for the code and data in the database. For example, if a test is designed to add 10 customer accounts to an existing database and then test for the total number of customers, the team must ensure that they are starting with the same set of base customers every time. For larger applications with hundreds or thousands of tests, this activity becomes a challenge even for experienced teams.

Container technology enables development teams to quickly provision isolated applications without the traditional complexities. For many companies, to boost productivity and time to value, the use of containers starts with the departments that are focused on software development. The journey typically starts with installing, implementing, and using containers for applications that are based on the microservice architecture. In the past, integration between containerized applications and database services like Microsoft SQL Server was clumsy at best and often introduced delays in the agile development process.

Solution overview

This solution shows how the use of SQL Server containers, Kubernetes, and the Dell EMC XtremIO X2 Container Storage Interface (CSI) plug-in transforms the development process. Using orchestration and automation, developers can self-provision a SQL Server database, increasing productivity and saving substantial time.

We chose to focus on the software dev/test use case because many analysts agree that this market represents the most immediate opportunity to solve significant business challenges using SQL Server on containers. The current method for developing SQL Server-powered applications consists of a hodge-podge of platforms and tools. The process is overly complex and prone to creating schedule delays and cost overruns. Any path forward that has advantages for IT professionals and provides a more familiar environment for software developers will likely gain significant adoption with minimal friction or risk.

Document purpose

In this paper, we expand on information that is available from Microsoft and the SQL Server ecosystem, providing two use cases that highlight the dev/test benefits that SQL Server containers enable. In addition, we explore the intersection of SQL Server 2019 Docker containers, the Kubernetes implementation of the CSI specification, and products and services from Dell Technologies. The use cases that we present are designed to show how developers and others can easily use SQL Server containers with the XtremIO X2 storage array. Using the XtremIO X2 CSI plug-in enables comprehensive automation and orchestration from server through storage.

Audience

This white paper is for IT professionals who are interested in learning about the benefits of implementing SQL Server containers in a dev/test environment.

Terminology

The following table defines some of the terms that are used in this white paper:

Table 1. Terminology

Container: An isolated object that includes an application and its dependencies. Programs running on Docker are packaged as Linux containers. Because containers are a widely accepted standard, many prebuilt container images are available for deployment on Docker.

Cluster: A Kubernetes cluster is a set of machines that are known as nodes. One node controls the cluster and is designated as the master node; the remaining nodes are worker nodes. The Kubernetes master is responsible for distributing work among the workers and for monitoring the health of the cluster.

Node: A node runs containerized applications. It can be either a physical machine or a virtual machine. A Kubernetes cluster can contain a mixture of physical machine and virtual machine nodes.

Pod: A pod is the minimum deployment unit of Kubernetes. It is a logical group of one or more containers and associated resources that are needed to run an application. Each pod runs on a node, which can run one or more pods. The Kubernetes master automatically assigns pods to nodes in the cluster.

We value your feedback

Dell Technologies and the authors of this document welcome your feedback on the solution and the solution documentation. Contact the Dell Technologies Solutions team by email or provide your comments by completing our documentation survey.

Author: Sam Lucido

Contributors: Phil Hummel, Anil Papisetty, Sanjeev Ranjan, Mahesh Reddy, Abhishek Sharma, Karen Johnson

Use case overview

Our use cases demonstrate the advantages of using Microsoft SQL Server 2019 containers for an application dev/test environment that is hosted on a Dell EMC infrastructure platform. The test environment for both use cases consisted of three Dell EMC PowerEdge R740 servers and an XtremIO X2 all-flash storage array that were hosted in our labs. For an architecture diagram and details about the solution configuration, see Appendix A: Solution architecture and component specifications.

The use cases demonstrate how Docker, Kubernetes, and the XtremIO X2 CSI plug-in accelerate the SQL Server development life cycle. With this solution, developers can easily provision SQL Server container databases without the complexities that are associated with installing the database and provisioning storage.


Use Case 1 overview

In the first use case, we start the way many companies begin to work with containers—by installing Docker and establishing a functioning development environment. Our goal is to quickly provision a SQL Server container and then attach a copy of a sample database—the popular AdventureWorks database from Microsoft—using a Dell EMC XtremIO X2 storage array. With the SQL AdventureWorks container running, we show how to access the database using a web browser to simulate a typical enterprise web application. Then we remove the container and clean up the environment to free resources for the next sprint.
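The commands below are a minimal sketch of this manual flow, assuming the public Microsoft image and an XtremIO X2 volume already presented to the Docker host at /mnt/xtremio/sqldata. The container name, SA password, backup file name, and paths are illustrative placeholders, not the scripts used in our testing.

# Minimal sketch of the manual Docker flow (names, password, and paths are placeholders).
docker pull mcr.microsoft.com/mssql/server:2019-latest

# Start SQL Server 2019 with its data directory on the XtremIO X2 volume.
docker run -d --name sql-dev \
  -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=P@ssw0rd1!" \
  -p 1433:1433 \
  -v /mnt/xtremio/sqldata:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2019-latest

# Restore the AdventureWorks sample database from a backup copied to the volume
# (a WITH MOVE clause may be needed if the file paths inside the backup differ).
docker exec sql-dev /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P@ssw0rd1!' \
  -Q "RESTORE DATABASE AdventureWorks FROM DISK = '/var/opt/mssql/AdventureWorks2017.bak'"

# Remove the container when the sprint is finished to free resources.
docker stop sql-dev && docker rm sql-dev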

Use Case 2 overview

The second use case continues the containerized application journey by using the XtremIO X2 CSI plug-in for Kubernetes to achieve a greater level of automation and ease of management for dev/test environments. Here we move beyond manually provisioning storage to automated provisioning. Using Kubernetes, our developer controls the provisioning of the SQL container from a local private registry and the database storage from the XtremIO X2 array. After working on the AdventureWorks database application, the developer protects the updated state of the database code and data by using Kubernetes to take an XtremIO Virtual Copies snapshot of the database. After a round of destructive testing, the developer then restores the database to the preserved state by using Kubernetes and XtremIO Virtual Copies. A technical writer provisions the modified database to document the code changes, and the developer removes the containers and cleans up the environment.
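As a rough illustration of the protect-and-restore step in this workflow, the following Kubernetes objects sketch how a snapshot-backed copy might be requested through a CSI plug-in. The object names, snapshot class, storage class, and API version are assumptions for illustration, not the manifests used in this paper.

# Protect the current state of the database volume with a snapshot
# (backed by XtremIO Virtual Copies through the CSI plug-in).
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1beta1   # use the version your cluster supports
kind: VolumeSnapshot
metadata:
  name: sqldata-snap
spec:
  volumeSnapshotClassName: xtremio-snapclass  # assumed snapshot class name
  source:
    persistentVolumeClaimName: sqldata-pvc
EOF

# After destructive testing, create a new claim from the snapshot and point the
# SQL Server pod at it to return to the preserved state.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqldata-restored
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: medium
  dataSource:
    name: sqldata-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  resources:
    requests:
      storage: 100Gi
EOF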

Use case comparison summary

The following table provides a high-level comparison of the two use cases:

Table 2. Use-case comparison

Action | Use Case 1: Docker only | Use Case 2: Kubernetes and XtremIO X2 CSI plug-in
Provisioning a SQL Server container | Manual, using script | Self-service (full automation)
Provisioning an AdventureWorks database | Storage and operating system administrator tasks | Self-service (full automation)
Removing the container and persistent storage | Manual, using script | Self-service (full automation)

Supporting software technology

This section summarizes the important technology components of this solution.

Container-based virtualization

Two primary methods of enabling software applications to run on virtual hardware are through the use of virtual machines (VMs) and a hypervisor, and through container-based virtualization—also known as operating system virtualization or containerization.

The older and more pervasive virtualization method, which was first developed by Burroughs Corporation in the 1950s, is through the use of VMs and a hypervisor. That method was replicated with the commercialization of IBM mainframes in the early 1960s. The primary virtualization method that is used by platforms such as IBM VM/CMS, VMware ESXi, and Microsoft Hyper-V starts with a hypervisor layer that abstracts the physical components of the computer. The abstraction enables sharing of the components by multiple VMs, each running a guest operating system. A more recent development is container-based virtualization, where a single host operating system supports multiple processes that are running as virtual applications.

The following figure contrasts VM-based virtualization with container-based virtualization. In container-based virtualization, the combination of the guest operating system components and any isolated software applications constitutes a container running on the host server, as indicated by the App 1, App 2, and App 3 boxes.

 

Figure 1. Primary virtualization methods

Both types of virtualization were developed to increase the efficiency of computer hardware investments by supporting multiple users and applications in parallel. Containerization further improves IT operations productivity by simplifying application portability. Application developers most often work outside the server environments that their programs will run in. To minimize conflicts in library versions, dependencies, and configuration settings, developers must re-create the production environment multiple times for development, testing, and preproduction integration. IT professionals have found containers easier to deploy consistently across multiple environments because the core operating system can be configured independently of the application container.

Docker containers

Concepts that led to the development of container-based virtualization began to emerge when the UNIX operating system became publicly available in the early 1970s. Container technology development expanded on many fronts until 2013, when Solomon Hykes released the Docker code base to the open-source community. The Docker ecosystem is made up of the container runtime environment along with tools to define and build application containers and to manage the interactions between the runtime environment and the host operating system.

Two Docker runtime environments—the Community Edition and the Enterprise Edition—are available. The Community Edition is free and comes with best-effort community support. For our use-case testing, we used the Enterprise Edition, which is fitting for most organizations that are using Docker in production or business-critical situations. The Enterprise Edition requires purchasing a license that is based on the number of cores in the environment. Organizations likely will have licensed and nonlicensed Docker runtimes and should implement safeguards to ensure that the correct version is deployed in environments where support is critical.

A Docker registry is supporting technology that is used for storing and delivering Docker images from a central repository. Registries can be public, such as Docker Hub, or private. Docker users install a local registry by downloading from Docker Hub a compressed image that contains all the necessary container components that are specific to the guest operating system and application. Depending on Internet connection speed and availability, a local registry can mitigate many of the challenges that are associated with using a public registry, including high latency during image downloading. Docker Hub does provide the option for users to upload private images. However, a local private registry might offer both better security and less latency for deployment.

Private registries can reside in the cloud or in the local data center. Provisioning speed and provisioning frequency are two factors to consider when determining where to locate a private registry. Private registries that are hosted in the data center where they will be used benefit from the speed and reliability of the LAN, which means images can be provisioned quickly in most cases. For our use cases, we implemented a local private registry to enable fast provisioning without the complexities and cost of hosting in the cloud.
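As an example of what standing up such a registry can look like, the following commands are a sketch that uses the open-source registry image; the host name, port, and image tags are placeholders rather than our lab configuration.

# Run the open-source registry image on a host in the data center.
docker run -d --name registry -p 5000:5000 --restart=always registry:2

# Pull the SQL Server image once from the public registry, retag it, and push it
# to the local registry so that later pulls stay on the LAN.
docker pull mcr.microsoft.com/mssql/server:2019-latest
docker tag mcr.microsoft.com/mssql/server:2019-latest localhost:5000/mssql/server:2019-latest
docker push localhost:5000/mssql/server:2019-latest

# Hosts on the LAN can now provision the image from the private registry.
docker pull localhost:5000/mssql/server:2019-latest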

Kubernetes

Modern applications—primarily microservices that are packaged with their dependencies and configurations—are increasingly being built using container technology. Kubernetes, also known as K8s, is an open-source platform for deploying and managing containerized applications at scale. The Kubernetes container orchestration system was open-sourced by Google in 2014.

The following figure shows the Kubernetes architecture:

Figure 2. Kubernetes architecture


Kubernetes features for container orchestration at scale include:

• Auto-scaling, replication, and recovery of containers
• Inter-container communication, such as IP sharing
• A single entity—a pod—for creating and managing multiple containers
• A container resource usage and performance analysis agent, cAdvisor
• Pluggable network architecture
• Load balancing
• Health check service

In a simulated dev/test scenario in Use Case 2, we used the Kubernetes container orchestration system to deploy two Docker containers in a pod.
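A minimal sketch of what such a two-container pod can look like follows; the registry address, image names, and password are illustrative assumptions rather than the manifest from our testing.

# Illustrative two-container pod (registry address, image names, and password are assumptions).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sql-dev-pod
spec:
  containers:
  - name: mssql                      # SQL Server 2019 on Linux
    image: registry.local:5000/mssql/server:2019-latest
    env:
    - name: ACCEPT_EULA
      value: "Y"
    - name: MSSQL_SA_PASSWORD
      value: "P@ssw0rd1!"
    ports:
    - containerPort: 1433
  - name: webapp                     # companion web front end for the database
    image: registry.local:5000/adventureworks-web:latest
    ports:
    - containerPort: 80
EOF

# Both containers run together and are scheduled as a single pod on one node.
kubectl get pod sql-dev-pod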

Kubernetes Container Storage Interface specification

The Kubernetes CSI specification was developed as a standard for exposing arbitrary block and file storage systems to containerized workloads through an orchestration layer. Kubernetes previously provided a powerful volume plug-in that was part of the core Kubernetes code and shipped with the core Kubernetes binaries. Before the adoption of CSI, however, adding support for new volume plug-ins to Kubernetes when the code was “in-tree” was challenging. Vendors wanting to add support for their storage system to Kubernetes, or even fix a bug in an existing volume plug-in, were forced to align with the Kubernetes release process. In addition, third-party storage code could cause reliability and security issues in core Kubernetes binaries. The code was often difficult—or sometimes impossible—for Kubernetes maintainers to test and maintain.

The adoption of the CSI specification makes the Kubernetes volume layer truly extensible. Using CSI, third-party storage providers can write and deploy plug-ins to expose new storage systems in Kubernetes without ever having to touch the core Kubernetes code. This capability gives Kubernetes users more storage options and makes the system more secure and reliable. Our Use Case 2 highlights these advantages by using the Dell EMC XtremIO X2 CSI plug-in to show the benefits of Kubernetes storage automation.

Kubernetes storage classes

We do not directly use Kubernetes storage classes in either of the use cases that we describe in this paper; however, the Kubernetes storage classes are closely related to CSI and the XtremIO X2 CSI plug-in. Kubernetes provides administrators an option to describe various levels of storage features and differentiate them by quality-of-service (QoS) levels, backup policies, or other storage-specific services. Kubernetes itself is unopinionated about what these classes represent. In other management systems, this concept is sometimes referred to as storage profiles.

The XtremIO X2 CSI plug-in creates three storage classes in Kubernetes during installation. The XtremIO X2 storage classes, which can be viewed from the Kubernetes dashboard, are predefined. These storage classes enable users to specify the amount of bandwidth to be made available to persistent storage that is created on the array. The following table shows the predefined storage classes:


 

Table 3. XtremIO X2 CSI predefined storage classes

Storage class | MB/s per GB
High   | 15
Medium | 5
Low    | 1

The size of the requested storage volume and the storage class define the amount of bandwidth to be specified. For example, bandwidth for a 1,000 gibibyte (Gi) storage volume configured with the medium storage class is computed as follows:

Storage size (1,000 Gi) x storage class (medium at 5 MB/s per GB) = Total bandwidth (5,000 MB/s)

Note: Gi indicates power-of-two equivalents—1,024³ bytes in this case.
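In practice, a developer selects one of these classes when requesting a volume. The following claim is a sketch that assumes the plug-in registers the class names shown in Table 3; a 1,000 Gi request against the medium class corresponds to the 5,000 MB/s limit computed above.

# List the storage classes that the XtremIO X2 CSI plug-in created.
kubectl get storageclass

# Sketch of a claim that selects the predefined medium class
# (class name assumed to match Table 3).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqldata-medium
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: medium           # 5 MB/s per GB
  resources:
    requests:
      storage: 1000Gi                # 1,000 Gi x 5 MB/s per GB = 5,000 MB/s bandwidth cap
EOF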

 

 

Using the XtremIO X2 predefined storage classes helps to efficiently scale an environment by defining performance limits. For example, a storage class of low for a pool of 100 containers limits containerized applications so that they consume no more than their allocated bandwidth. Such limitations help to maintain more reliable storage performance across the entire environment.

Using QoS-based storage classes helps balance the resources that are consumed by containerized applications and the total amount of storage bandwidth. For scenarios that require a more customized set of storage classes than the one that is created by the XtremIO X2 CSI plug-in, you can configure XtremIO X2 QoS in Kubernetes. In creating a custom QoS policy, you can define maximum bandwidth per gigabyte or, alternatively, maximum IOPS. You can also define a burst percentage, which is the amount of bandwidth or IOPS above the maximum limit that the container can use for temporary performance bursts.

 

 

 

 

The benefits of using predefined storage classes and customized QoS policies include:

• Guaranteed service for critical applications
• Eliminating "noisy neighbor" problems by placing performance limits on nonproduction containers

SQL Server and Docker containers on Linux

In recent years, Microsoft has been expanding its portfolio of offerings that are either compatible with or ported to the Linux operating system. For example, Microsoft released the first version of its SQL Server RDBMS that was commercially available on Linux in November 2016. More recently, with its SQL Server 2017 release, Microsoft delivered SQL Server on Docker containers. The next generation of SQL Server containers for Linux arrived with SQL Server 2019, which was released in the fall of 2019.

Microsoft is currently developing SQL Server implementations of Linux containers for both Linux and Windows hosts as well as Windows containers for Windows. The supported features and road maps for these implementations vary, so carefully verify whether a product will meet your requirements. For this white paper, we worked exclusively with SQL Server containers for Linux. We recommend that you check with Dell Technologies to ensure that the latest certified CSI plug-ins are used in your Kubernetes environment.

Microsoft first introduced support for containerized Linux images in SQL Server 2017. According to Microsoft, one of the primary use cases for customers who are adopting SQL Server containers is for local dev/test in DevOps pipelines, with deployment handled by Kubernetes. SQL Server in containers offers many advantages for DevOps because of its consistent, isolated, and reliable behavior across environments, ease of use, and ease of starting and stopping. Applications can be built on top of SQL Server containers and run without being affected by the rest of the environment. This isolation makes SQL Server in containers ideal for test deployment scenarios as well as DevOps processes.

Dell EMC servers and storage

PowerEdge servers

Dell EMC PowerEdge servers provide a scalable business architecture, intelligent automation, and integrated security for high-value data-management and analytics workloads. The PowerEdge portfolio of rack, tower, and modular server infrastructure, based on open-standard x86 technology, can help you quickly scale from the data center to the cloud. PowerEdge servers deliver the same user experience and the same integrated management experience across all our product options; thus, you have one set of runbooks to patch, manage, update, refresh, and retire all your assets.

For our use cases, we chose the PowerEdge R740 server. The R740 has a 2U form factor and houses up to two Intel Xeon Scalable processors, each with up to 28 compute cores. It supports the most popular enterprise-deployed versions of Linux—Canonical Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server. The R740 supports a range of memory configurations to satisfy the most demanding database and analytic workloads. It includes 24 slots for registered ECC DDR4 load-reduced DIMMs (LRDIMMs) with speeds up to 2,933 MT/s and has expandable memory up to 3 TB. On-board storage can be configured with front drive bays holding up to 16 x 2.5 in. SAS/SATA SSDs, for a maximum of 122.88 TB, or up to 8 x 3.5 in. SAS/SATA drives, for a maximum of 112 TB.

For details about the PowerEdge server configuration that we used for our use cases, see Appendix A: Solution architecture and component specifications.

XtremIO X2 storage

The Dell EMC XtremIO X2 all-flash array is an ideal storage platform for running online transaction processing (OLTP), online analytical processing (OLAP), or mixed workloads. It delivers high IOPS, ultrawide bandwidth, and consistent submillisecond latency for databases of all sizes.

Note: For details about designing a SQL Server solution using XtremIO X2 all-flash storage with PowerEdge servers, see Dell EMC Ready Solutions for Microsoft SQL: Design for Dell EMC XtremIO. The guide provides recommended design principles, configuration best practices, and validation with both Windows Server 2016 and Red Hat Enterprise Linux 7.6 running instances of SQL Server 2017. In the solution testing, the XtremIO X2 array delivered sub-500-microsecond latencies while supporting 275,000-plus IOPS with 72 flash drives, compared to a rated 220,000 achievable IOPS per the XtremIO X2 specification sheet. The test engineers found no noticeable increase in latency even when the XtremIO X2 array exceeded the total expected IOPS.

 

 

 

