2.1.2. Zones and software installation
2.1.3. Zones and security
2.1.4. Zones and privileges
2.1.5. Zones and resource management
2.1.5.1. CPU resources
2.2.3. Containers (Solaris zones) in an OS
2.2.4. Consolidation in one computer
2.2.5. Summary of virtualization technologies
3. Use Cases
3.1. Grid computing with isolation
3.2. Small web servers
3.7. Consolidation of test systems
3.8. Training systems
3.9. Server consolidation
3.10. Confidentiality of data and processes
3.11. Test systems for developers
3.12. Solaris 8 and Solaris 9 containers for development
3.13. Solaris 8 and Solaris 9 containers as revision systems
3.14. Hosting for several companies on one computer
3.15. SAP portals in Solaris containers
3.16. Upgrade- and Patch-management in a virtual environment
3.17. "Flying zones" – Service-oriented Solaris server infrastructure
3.18. Solaris Container Cluster (aka "zone cluster")
4. Best Practices
4.1.6.1. Storage for the root file system of the local zones
4.1.6.2. Data storage
4.1.6.4. Root disk layout
4.1.6.5. ZFS within a zone
4.1.6.6. Options for using ZFS in local zones
4.1.6.7. NFS and local zones
4.1.6.8. Volume manager in local zones
4.1.7.1. Introduction into networks and zones
4.1.7.2. Network address management for zones
4.1.7.3. Shared IP instance and routing between zones
4.1.7.4. Exclusive IP instance
4.1.7.5. Firewalls between zones (IP filter)
4.1.7.6. Zones and limitations in the network
4.1.8. Additional devices in zones
4.1.8.1. Configuration of devices
4.1.8.2. Static configuration of devices
4.1.8.3. Dynamic configuration of devices
4.1.9. Separate name services in zones
4.6.1. Types of resource management
4.6.2. CPU resources
4.6.2.1. Capping of CPU time for a zone
4.6.2.2. General resource pools
4.6.2.4. Fair share scheduler in a zone
5.1.11.1. Starting zones automatically
5.1.11.2. Changing the set of privileges of a zone
5.1.12. Storage within a zone
5.1.12.1. Using a device in a local zone
5.1.12.2. The global zone supplies a file system per lofs to the local zone
5.1.12.3. The global zone mounts a file system when the local zone is booted
5.1.12.4. The local zone mounts a UFS file system from a device
5.1.12.5. User level NFS server in a local zone
5.1.12.6. Using a DVD drive in the local zone
5.1.12.7. Dynamic configuration of devices
5.1.12.8. Several zones share a file system
5.1.12.9. ZFS in a zone
5.1.12.10. User attributes for ZFS within a zone
5.1.13. Configuring a zone by command file or template
5.1.14. Automatic quick installation of zones
5.1.15. Accelerated automatic creation of zones on a ZFS file system
5.1.16. Zones hardening
5.2.1. Change network configuration for shared IP instances
5.2.2. Set default router for shared IP instance
5.2.3. Network interfaces for exclusive IP instances
5.2.4. Change network configuration from shared IP instance to exclusive IP instance
5.2.5. IP filter between shared IP zones on a system
5.2.6. IP filter between exclusive IP zones on a system
5.2.7. Zones, networks and routing
5.2.7.1. Global and local zone with shared network
5.2.7.2. Zones in separate network segments using the shared IP instance
5.2.7.3. Zones in separate network segments using exclusive IP instances
5.2.7.4. Zones in separate networks using the shared IP instance
5.2.7.5. Zones in separate networks using exclusive IP instances
5.2.7.6. Zones connected to independent customer networks using the shared IP instance
5.2.7.7. Zones connected to independent customer networks using exclusive IP instances
5.2.7.8. Connection of zones via external routers using the shared IP instance
5.2.7.9. Connection of zones through an external load balancing router using exclusive IP instances
A.1. Cookbook: Configuring an ipkg zone
A.2. Cookbook: Installing an ipkg zone
B. References
Disclaimer
Sun Microsystems GmbH does not offer any guarantee regarding the completeness and accuracy of the
information and examples contained in this document.
Revision control
3.1, 30/11/2009 (Detlef Drewanz, Ulrich Gräf)
• Adjustment with content of "Solaris Container Leitfaden 3.1"
• Table of contents with HTML links for better navigating through the document
• Correction "Patching of systems with local zones"
• Correction "Patching with zoneadm attach -u"
• Correction Solaris Container Navigator

3.0-en, 27/11/2009 (Detlef Drewanz, Ulrich Gräf)
• Review and corrections after translation
• Name doc to "Solaris 10 Container Guide - 3.0"

3.0-en Draft 1, 27/07/2009
• Original English translation received

3.0, 19/06/2009 (Detlef Drewanz, Uwe Furchheim, Ulrich Gräf, Franz Haberhauer, Joachim Knoke, Hartmut Streppel, Thomas Wagner)
• General corrections
• Additions in the General part, Resource Management
• Additions in patch management of zones
• Formatting: URLs and hyperlinks for better legibility
• Formatting: Numbering repaired
• Insertion: Possibilities for using ZFS in local zones

3.0 Draft 1, 10/06/2009
• More hyperlinks in the document
• General corrections
• Insertion: Solaris Container Navigator
• Incorporation: Functionalities Solaris 10 5/08 + 10/08 + 5/09
• Insertion: Firewall between zones
• Revision: Zones and ZFS
• Revision: Storage concepts
• Revision: Network concepts
• Insertion: Dynamic configuration of devices in zones
• Revision: Resource management
• Addition: Patching zones
• Revision: Zones and high availability
• Insertion: Solaris Container Cluster
• Insertion: Solaris Container in OpenSolaris
• Insertion: Upgrade and patch management in a virtual operating environment

2.1, 13/02/2008 (Detlef Drewanz, Ulrich Gräf)
• Incorporation of corrections (tagged VLAN ID)

2.0, 21/01/2008
• Incorporation of comments on 2.0-Draftv27
• Insertion: Consolidation of log information
• Insertion: Cookbooks for resource management

2.0-Draftv27, 11/01/2008
• Incorporation of comments on 2.0-Draftv22
• Incorporation of Live Upgrade and zones
• Insertion of several hyperlinks
• Accelerated automatic installation of zones on a ZFS file system

2.0-Draftv22, 20/12/2007
• Incorporation of Resource Management
• Incorporation of Solaris 10 08/07
• Incorporation of Solaris 10 11/06
• Incorporation of comments
• Complete revision of the manual
1. Introduction
[dd/ug] This guide is about Solaris Containers, how they work and how to use them. Although the
original guide was developed in German [25], starting with version 3.1 we also deliver a version in
English.
With the release of Solaris 10 on 31 January 2005, Sun Microsystems provided an operating system
with groundbreaking innovations. Among these innovations are Solaris Containers, which can be used
– among other things – to consolidate and virtualize OS environments, to isolate applications, and for
resource management. Solaris Containers can contribute considerably to the reorganization of IT
processes and environments, as well as to cost savings in IT operations.
Using these new possibilities requires know-how, decision guidance and examples which we have
summarized in this guide. It is directed at decision makers, data center managers, IT groups and
system administrators. The document is subdivided into the following chapters: Introduction,
Functionality, Use Cases, Best Practices, Cookbooks, and a list of references.
A brief introduction is followed by the functionality part, which contains a description of today's typical
data center requirements in terms of virtualization and consolidation, as well as a description and
comparison of Solaris Container technology. This is followed by a discussion of the fields of
application for Solaris Containers in a variety of use cases. Their conceptual implementation is
demonstrated by means of Best Practices. In the chapter on Cookbooks, the commands used to
implement Best Practices are demonstrated using concrete examples. All cookbooks were tested and
verified by the authors themselves. The supplement discusses the specifics of Solaris Containers in
OpenSolaris.
The document itself is designed to be a reference. Although it is possible to read the manual from
beginning to end, this is not mandatory. A data center manager can get an overview of Solaris
Container technology or have a look at the use cases. An IT architect can go over the Best Practices
in order to build solutions. A system administrator can test the commands listed in the cookbooks in
order to gain experience. That is why the document offers something for everyone and, in addition,
provides references for looking into other areas.
Many thanks to all who have contributed to this document through comments, examples and
additions. Special thanks go to the following colleagues (in alphabetical order): Dirk Augustin[da], Bernd
Finger[bf], Constantin Gonzalez, Uwe Furchheim, Thorsten Früauf[tf], Franz Haberhauer, Claudia
Hildebrandt, Kristan Klett, Joachim Knoke, Matthias Pfützner, Roland Rambau, Oliver Schlicker[os],
Franz Stadler, Heiko Stein[hes], Hartmut Streppel[hs], Detlef Ulherr[du], Thomas Wagner and Holger
Weihe.
Please do not hesitate to contact the authors with feedback and suggestions.
2. Functionality
2.1. Solaris Containers and Solaris Zones
2.1.1. Overview
[ug] Solaris Zones is the term for a virtualized execution environment – a virtualization at the operating
system level (in contrast to HW virtualization).
Solaris Containers are Solaris Zones with Resource Management. The term is frequently used
(in this document as well) as a synonym for Solaris Zones.
Resource Management has already been introduced with Solaris 9 and allows the definition of CPU,
main memory and network resources.
Solaris Zones represent a virtualization at the interface between the operating system and the
application.
• There is a global zone, which is essentially the same as a Solaris operating system in earlier
versions.
• In addition, local zones, also called nonglobal zones, can be defined as virtual execution
environments.
• All local zones use the kernel of the global zone and are thus part of a single physical operating
system installation – unlike HW virtualization, where several operating systems are started on
virtualized hardware instances.
• All shared objects (programs, libraries, the kernel) are loaded only once; therefore, unlike for
HW virtualization, additional consumption of main memory is very low.
• The file system of a local zone is separated from the global zone. It uses a subdirectory of the
global zone's file system as its root directory (as in chroot environments).
• A zone can have one or several network addresses and network interfaces of its own.
• Physical devices are not visible in local zones (standard) but can optionally be configured.
• Local zones have their own OS settings, e.g. for name service.
• Local zones are separated from each other and from the global zone with respect to processes,
that is, a local zone cannot see the processes of a different zone.
• The separation extends also to the shared memory segments and logical or physical network
interfaces.
• Access to another local zone on the same computer is therefore possible through the network
only.
• The global zone, however, can see all processes in the local zones for the purpose of control
and monitoring (accounting).
Figure 1: [dd] Schematic representation of zones
Thus, a local zone is a Solaris environment that is separated from other zones and can be used
independently. At the same time, many hardware and operating system resources are shared with
other local zones, which causes little additional runtime expenditure.
Local zones execute the same Solaris version as the global zone. Alternatively, virtual execution
environments for older Solaris versions (SPARC: Solaris 8 and 9) or other operating systems (x86:
Linux) can also be installed in so-called Branded Zones. In this case, the original environment is
executed on the Solaris 10 kernel; differences in the system calls are emulated.
Additional details are summarized in the following table:
Shared kernel: The kernel is shared by the global zone and the local zones. The resources needed by the OS are needed only once. Costs for a local zone are therefore low, as measured by main memory, CPU consumption and disk space.
Shared objects: In Unix, all objects such as programs, files and shared libraries are loaded only once as a shared memory segment, which improves overall performance. For Solaris 10, this also includes zones; that is, no matter how frequently e.g. a program or a shared library is used in zones, it will occupy space in main memory only once (unlike in virtual machines).
File system: The visible portion of the file system of the local zone can be limited to one or several subtrees of the global zone. The files in the local zone can be configured on the basis of directories shared with the global zone or as copies.
Patches: For packages (Solaris packages) installed as copies in the local zone, patches can be installed separately as well. The patch level regarding non-application patches should be the same, because all zones share the same kernel.
Network: Zones have their own IP addresses on one or more virtual or physical interfaces. Network communication between zones takes place, if possible, via the shared network layers or, when using exclusive IP instances, via external network connections.
Process: Each local zone can see its own processes only. The global zone sees all processes of the local zones.
Separation: Access to the resources of the global zone or of other local zones is prevented unless explicitly configured (devices, memory). Any software errors that may occur are limited to their respective local zone by means of error isolation.
Assigned devices: No physical devices are contained in the standard configuration of a local zone. It is, however, possible to assign devices (e.g. disks, volumes, DVD drives, etc.) to one or more local zones. Special drivers can be used this way as well.
Shared disk space: In addition, further parts of the file tree (file systems or directories) can be assigned from the global zone to one or more local zones.
Physical devices: Physical devices are administered from the global zone. Local zones do not have any access to the assignment of these devices.
Root delegation: A local zone has an individual root account (zone administrator). Therefore, the administration of applications and services in a local zone can be delegated completely to other persons – including the root portion. Operating safety in the global zone or in other local zones is not affected by this. The global zone root has general access to all local zones.
Naming environment: Local zones have an independent naming environment with host names, network services, users, roles and process environments. The name service of one zone can be configured from local files, while another zone on the same computer can use e.g. LDAP or NIS.
System settings: Settings in /etc/system apply to the kernel used by all zones. However, the most important settings of earlier Solaris versions (shared memory, semaphores and message queues) can be modified from Solaris 10 onwards by the Solaris resource manager for each zone independently.
Table 1: [ug] Characteristics of Solaris 10 Zones
2.1.2. Zones and software installation
[dd] The respective requirements on local zones determine the manner in which software is installed
in zones.
There are two ways of supplying software in zones:
1. Software is usually supplied in pkg format. If this software is installed in the global zone with
pkgadd, it will automatically be available to all other local zones as well. This considerably
simplifies the installation and maintenance of software since – even if many zones are
installed – software maintenance can be performed centrally from the global zone.
2. Software can be installed exclusively for a local zone or for the global zone in order, for
example, to be able to make software changes in one zone independently of other zones.
This can be achieved by installation using special pkgadd options or by special types of
software installation.
In any case the Solaris kernel and the drivers are shared by all zones but can be directly installed and
modified in the global zone only.
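The two variants can be sketched as follows; the package name and spool directory are placeholder examples, and the detailed behavior depends on the package parameters (SUNW_PKG_ALLZONES etc.):

   # Install a package in the global zone; by default it is also installed
   # into all local zones
   pkgadd -d /var/spool/pkg SUNWexamplepkg

   # Install a package into the current zone only, without propagating it
   # to other zones
   pkgadd -G -d /var/spool/pkg SUNWexamplepkg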
2.1.3. Zones and security
[dd] By providing separate root directories for each zone, separate stipulations regarding security
settings can be made by the local name service environments in the zones (RBAC – Role Based
Access Control, passwd database). Furthermore, a separate passwd database with its own user
accounts is provided in each zone. This makes it possible to build separate user environments for
each zone as well as introducing separate administrator accounts for each zone.
Solaris 10 5/08, like earlier Solaris versions, is certified according to Common Criteria EAL4+. This
certification was performed by the Canadian CCS. The Canadian CCS is a member of the group of
certification authorities of Western states of which the Federal Office for Information Security (BSI,
Bundesamt für Sicherheit in der Informationstechnik) is also a member. This certification is also
recognized by BSI. A constituent component of the certification is protection against break-ins,
separation and – new in Solaris 10 – zone differentiation. Details on this are available at:
Solaris Trusted Extensions allow customers who are subject to specific laws or data protection
requirements to use labeling features that have thus far only been contained in highly specialized
operating systems and appliances. To implement labeled security, so-called compartments are used.
For Solaris Trusted Extensions, these compartments are put into practice by Solaris zones.
2.1.4. Zones and privileges
[dd] Local zones have fewer process privileges than the global zone, so some commands cannot be
executed within a local zone. For standard zone configurations, these operations are permitted only
in the global zone. The restrictions include, among other things:
• Configuration of swap space and processor sets
• Modifications to the process scheduler and the shared memory
• Setting up device files
• Downloading and uploading kernel modules
• For shared IP instances:
− Access to the physical network interface
− Setting up IP addresses
Since Solaris 10 11/06, additional process privileges can be assigned to local zones when the zones
are configured, allowing extended possibilities for local zones (although not all privileges can be assigned).
Potential combinations and usable privileges in zones are shown here:
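As an illustration (not a replacement for the referenced overview), additional privileges can be assigned with the limitpriv property of the zone configuration; the zone name and the chosen privileges are assumed examples:

   # Extend the default privilege set of the zone, e.g. to allow DTrace use inside it
   zonecfg -z zone1 'set limitpriv="default,dtrace_proc,dtrace_user"'
   # The new privilege set takes effect with the next boot of the zone
   zoneadm -z zone1 reboot
   # Inside the zone, the effective privileges can be checked with: ppriv $$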
2.1.5. Zones and resource management
[ug] In Solaris 9, resource management was introduced on the basis of projects, tasks and resource
pools. In Solaris 10, resource management can be applied to zones as well. The following resources
can be managed:
• CPU resources (processor sets, CPU capping and fair share scheduler)
• Memory use (real memory, virtual memory, shared segments)
• Monitoring network traffic (IPQoS = IP Quality of Service)
• Zone-specific settings for shared memory, semaphore, swap
(System V IPC Resource Controls)
2.1.5.1. CPU resources
[ug] Three stages of resource management can be used for zones:
• Partitioning of CPUs in processor sets that can be assigned to resource pools.
Resource pools are then assigned to local zones, thus defining the usable CPU quantity.
• Using the fair share scheduler (FSS) in a resource pool that is used by one or more local zones.
This allows fine granular allocation of CPU resources to zones in a defined ratio as soon as
zones compete for CPU time. This is the case if system capacity is at 100%. Thus, the FSS
ensures the response time for zones, if configured accordingly.
• Using the FSS in a local zone. This allows fine granular allocation of CPU resources to projects
(groups of processes) in a defined ratio if projects compete for CPU. This takes place, when the
capacity of the CPU time available for this zone is at 100%. Thus, the FSS ensures the process
response time.
Processor sets in a resource pool
Just like a project, a local zone can have a resource pool assigned to it in which all zone processes
run (zonecfg: set pool=). CPUs can be assigned to a resource pool. Zone processes
will then run only on the CPUs assigned to the resource pool. Several zones (or even projects) can
also be assigned to a resource pool; they will then share the CPU resources of that pool.
The most frequent case is to create a separate resource pool with CPUs per zone. To simplify
matters, the number of CPUs for a zone can then be configured in the zone configuration
(zonecfg: add dedicated-cpu). When starting up a zone, a temporary resource pool is
then generated automatically that contains the configured number of CPUs. When the zone is shut
down, the resource pool and its CPUs are released again (since Solaris 10 8/07).
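A configuration sketch for this case; the zone name, CPU counts and pool name are assumed examples:

   # Let the system create a temporary pool with 2-4 CPUs when the zone boots
   # (Solaris 10 8/07 and later)
   zonecfg -z zone1
   zonecfg:zone1> add dedicated-cpu
   zonecfg:zone1:dedicated-cpu> set ncpus=2-4
   zonecfg:zone1:dedicated-cpu> set importance=10
   zonecfg:zone1:dedicated-cpu> end
   zonecfg:zone1> exit

   # Alternatively, bind the zone to a manually created, persistent resource pool
   zonecfg -z zone1 set pool=pool_web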
Fair share scheduler in a resource pool
In the event that several zones run together in a resource pool, the fair share scheduler (FSS) allows
the allocation of CPU resources within a resource pool to be managed. To this end, each zone or
each project can have a share assigned to it. The settings for zones and projects in a resource pool
are used to manage the CPU resources in the event that the local zones or projects compete for
CPU time:
• If the workload of the processor set is less than 100%, no management is done since free CPU
capacity is still available.
• If the workload is at 100%, the fair share scheduler is activated and modifies the priority of the
participating processes such that the assigned CPU capacity of a zone or a project corresponds
to the defined share.
• The defined share of an active zone/project is calculated as its share value divided by the
sum of the shares of all active zones/projects.
The allocation can be changed dynamically while running.
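A sketch of how shares could be assigned to two zones running in the same resource pool; the scheduler activation, zone names and share values are assumed examples:

   # Make the fair share scheduler the default scheduling class
   # (fully effective after a reboot; priocntl can move running processes immediately)
   dispadmin -d FSS
   # Assign CPU shares to the zones in a ratio of 3:1
   # (cpu-shares property since Solaris 10 8/07; older releases set the
   # zone.cpu-shares rctl in the zone configuration instead)
   zonecfg -z zone1 set cpu-shares=30
   zonecfg -z zone2 set cpu-shares=10
   # Shares can also be changed dynamically for a running zone
   prctl -n zone.cpu-shares -r -v 20 -i zone zone1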
CPU resource management within a zone
In a local zone it is furthermore possible to define projects and resource pools and to apply CPU
resources via FSS to projects running in the zone (see previous paragraph).
CPU capping
The maximum CPU usage of zones can be set (cpu-caps). This setting is an absolute limit with
regard to the CPU capacity used and can be adjusted to 1/100 of a CPU (starting with Solaris 10
5/08). With this configuration option, the allocation can be adjusted much more finely than with
processor sets (1/100 CPU instead of 1 CPU).
Furthermore, CPU capping offers another control option if several zones run within one resource pool
(with or without FSS) in order to limit all users to the capacity that will be available later on.
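A capping sketch; the zone name and the limit are assumed examples:

   # Limit the zone to at most 1.5 CPUs worth of CPU time (Solaris 10 5/08 and later)
   zonecfg -z zone1
   zonecfg:zone1> add capped-cpu
   zonecfg:zone1:capped-cpu> set ncpus=1.5
   zonecfg:zone1:capped-cpu> end
   zonecfg:zone1> exit
   # The cap can also be adjusted for the running zone
   # (zone.cpu-cap is expressed in percent of one CPU)
   prctl -n zone.cpu-cap -r -v 150 -i zone zone1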
2.1.5.2. Memory resource management
[ug] In Solaris 10 (and in an update of Solaris 9 as well), main memory consumption can be limited at the
level of zones, projects and processes. This is implemented with the so-called resource capping
daemon (rcapd).
A limit for physical memory consumption is defined for the respective objects. If the consumption of one
of these objects exceeds the defined limit, rcapd causes little-used main memory pages of its
processes to be paged out. The sum of the memory consumption of the processes is used as the
measurement parameter.
In this manner, the defined main memory limit is enforced. The performance of processes in the
corresponding object drops, since pages may have to be paged back in when the affected memory
areas are used again. Continuous paging of main memory pages is an indication that the
available main memory is too low, that the settings are too tight, or that the application currently needs
more than the negotiated amount of memory.
For simplification, starting with Solaris 10 8/07, memory limits can be set for each zone. The amount
of physical memory used (physical), the virtual memory (swap) and locked segments (main
memory pages that cannot be swapped, shared memory segments) can be limited. The settings for
virtual and locked memory are hard limits, that is, once the corresponding value has been reached, a
request of an application for more memory is denied. The limit for physical memory, however, is
monitored by the rcapd, which successively swaps main memory pages if the limit is exceeded.
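A sketch of the zone-level memory limits; the zone name and sizes are assumed examples:

   # Limit physical memory, swap and locked memory of the zone (Solaris 10 8/07 and later)
   zonecfg -z zone1
   zonecfg:zone1> add capped-memory
   zonecfg:zone1:capped-memory> set physical=2g
   zonecfg:zone1:capped-memory> set swap=4g
   zonecfg:zone1:capped-memory> set locked=512m
   zonecfg:zone1:capped-memory> end
   zonecfg:zone1> exit
   # Enable the resource capping daemon, which enforces the physical memory limit
   rcapadm -E
   # The physical memory limit can also be changed for the running zone
   rcapadm -z zone1 -m 2g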
2.1.5.3. Network resource management (IPQoS = IP Quality of Service)
[ug] In Solaris 10 (also Solaris 9) it is possible to classify network traffic and to manage the data rate
of the classes. One example is giving preference to a web server's network traffic over network
backup. Service to the customer should not suffer while network backup is running.
Configuration is done using rules in a file that is activated with the command ipqosconf. The rules
consist of a part that allows the user to classify network traffic, and of actions to manage the data
rate/burst rate. Classification can take place, among other things, according to some or all of the
following parameters:
following parameters:
• Address of sender or recipient
• Port number
• Data traffic type (UDP, TCP)
• Userid of the local process
• Project of the local process (/etc/project)
• IP traffic TOS field (change priority in an ongoing connection)
2.1.6. User interfaces for zones
[dd] A variety of tools are available for working with zones and containers. Solaris itself provides a
series of command line interface (CLI) tools such as zoneadm, zonecfg and zlogin that allow
you to configure and install zones on the command line or in scripts.
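A minimal life cycle with these tools could look like this; the zone name, zonepath, network interface and address are assumed examples:

   # Configure a zone with one network address (shared IP instance)
   zonecfg -z zone1 'create; set zonepath=/zones/zone1; set autoboot=true'
   zonecfg -z zone1 'add net; set physical=e1000g0; set address=192.168.1.21/24; end'
   # Install and boot the zone, then attach to its console for the first-boot questions
   zoneadm -z zone1 install
   zoneadm -z zone1 boot
   zlogin -C zone1
   # List all configured, installed and running zones
   zoneadm list -vc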
The Solaris Container Manager is available as a graphical user interface (GUI)
(http://www.sun.com/software/products/container_mgr/). This is a separate product operated together
with the Sun Management Center (SunMC). The Container Manager allows simple and rapid
creation, reconfiguration or migration of zones through its user interface, and the effective use of
resource management.
With its zone module, Webmin supplies a browser user interface (BUI) for the installation and
management of zones. The module can be downloaded from http://www.webmin.com/standard.html
as zones.wbm.gz.
2.1.7. Zones and high availability
[tf/du/hs] Despite all RAS capabilities, a zone has only the availability of the computer it runs on, and
this availability decreases with the number of components of the machine (MTBF).
If this availability is not sufficient, so-called failover zones can be implemented using the HA Solaris
Container Agent, allowing zones to be switched between cluster nodes (from Sun Cluster 3.1 08/05).
This increases the availability of the total system considerably. In addition, a container here becomes
a flexible container. That is to say, it is completely irrelevant which of the computers participating in
the cluster the container is running on. Relocating the container can be done manually by
administrative actions or automatically in the event of a computer malfunction.
Alternatively, it is also possible using the HA Solaris Container Agent to start and to stop local zones
including their services. That is, identical zones with identical services exist on several computers that
have no(!) shared storage. A failover of a zone is not possible in such a configuration, because the
zone root path cannot be moved between the nodes. Instead, one of the zones can be stopped and an
identical zone can then be started on another system.
Container clusters (since Sun Cluster 3.2 1/09) offer a third option. A Sun Cluster, where virtual
clusters can be configured, is installed in the global zone. The virtual clusters consist of virtual
computer nodes that are local zones. Administration can be transferred to the zone operators. (see
2.1.9 Solaris container cluster (aka "zone cluster") ).
2.1.8. Branded zones (Linux and Solaris 8/Solaris 9 compatibility)
[dd/ug] Branded zones allow you to run an OS environment which is different from the one installed
in the global zone (BrandZ, since Solaris 10 8/07).
Branded zones extend zone configuration as follows:
• A brand is a zone attribute.
• Each brand contains mechanisms for the installation of a branded zone.
• Each brand can use its own pre-/post-boot procedures.
At runtime, system calls are intercepted and either forwarded to Solaris or emulated. Emulation is
done by modules in libraries contained in the brand. This design has the advantage that additional
brands can easily be introduced without changing the Solaris kernel.
The following brands are currently available:
• native: brand for zones with the Solaris 10 version from the global zone
• lx: Solaris Container for Linux Applications (abbreviated: SCLA)
With this, unmodified 32-bit Linux programs can be used under Solaris on x86 systems. To do
so, a 32-bit Linux distribution that can use a Linux 2.4 kernel must be installed in the zone. The
required Linux programs can then be used and/or installed in the zone.
The Linux distribution and license are themselves not contained in Solaris and must be installed
in each zone (new installations or copies of an existing installation). At least the following Linux
distributions work:
• solaris8/solaris9: Solaris 8 or Solaris 9 containers (see the configuration sketch after this list)
This allows you to operate a Solaris 8 or Solaris 9 environment in a zone (SPARC only). Such
containers can be made highly available with the HA Solaris Container Agent. Such types of
zones cannot be used as virtual nodes in a virtual cluster.
• solaris10: Solaris 10 branded zones for OpenSolaris are in the planning stage
(see http://opensolaris.org/os/project/s10brand/)
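Configuration sketches for two of these brands; zone names, interfaces and archive paths are assumed examples, and the available install options depend on the brand:

   # Solaris 8 container (SPARC), installed from a flash archive of the source system
   zonecfg -z sol8zone 'create -t SUNWsolaris8; set zonepath=/zones/sol8zone'
   zonecfg -z sol8zone 'add net; set physical=ce0; set address=192.168.1.31/24; end'
   zoneadm -z sol8zone install -u -a /export/archives/sol8-system.flar

   # Linux branded zone (lx, x86), installed from an archive of a supported distribution
   zonecfg -z lxzone 'create -t SUNWlx; set zonepath=/zones/lxzone'
   zonecfg -z lxzone 'add net; set physical=e1000g0; set address=192.168.1.32/24; end'
   zoneadm -z lxzone install -d /export/linux/centos_fs_image.tar.bz2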
2.1.9. Solaris container cluster (aka "zone cluster")
[hs] In autumn 2008, zone clusters were announced within the scope of the Open HA Cluster project.
They have also been available since Sun Cluster 3.2 1/09 in the commercial product, as
Solaris Container Cluster. A Solaris Container Cluster is the further development of Solaris zone
technology into a virtual cluster, also called a "zone cluster". The installation and configuration
of a container cluster is described in the Sun Cluster Installation Guide
[http://docs.sun.com/app/docs/doc/820-4677/ggzen?a=view].
The Open HA Cluster provides a complete, virtual cluster environment. Zones, as virtualized Solaris
instances, are used as its elements. The administrator of such an environment sees and notices
almost no difference to the global cluster that is virtualized here.
Two principal reasons have advanced the development of virtual cluster technology:
• the desire to round off container technology
• the customer requirement to be able to run Oracle RAC (Real Application Cluster) within a
container environment.
Solaris Containers operated by Sun Cluster offer an excellent way to run applications safely and with
high availability. Two options, Flying Containers and Flying Services, are available for implementation.
A clean separation in system administration allows container administration to be delegated to the
application operators who install, configure and operate their application within a container. It was
indeed unsatisfactory for an application operator to have administrator privileges in his containers but,
on the other hand, to be limited in handling the cluster resources belonging to his zone.
In the cluster implementation prior to Sun Cluster 3.2 1/09, the cluster consisted mainly of the
components installed, configured and also running in the global zone. Only a very small number of
cluster components were actually active within a container, and even this allowed very limited
administrative intervention only.
Now, virtual clusters make the zone administrator feel that he has almost complete control of his
cluster. Restrictions apply to services that continue to exist only once in the cluster, such as e.g.
quorum devices or even heartbeats.
Oracle RAC users could not understand that this product could not simply be installed and operated
within a container. One has to know, however, that Oracle CRS, the so-called Oracle Clusterware –
an operating-system-independent cluster layer – requires rights that the original security concept of
Solaris containers did not delegate to a non-global zone. Since Solaris 10 5/08 it is, however, possible
to administer network interfaces even within a zone such that Oracle CRS can be operated there.
The goal of providing a cluster environment that does not require any adjustments of applications
whatsoever has been achieved. Even Oracle RAC can be installed and configured just like in a
normal Solaris instance.
Certification of Oracle RAC in a Solaris Container Cluster is currently (June 2009) not yet finalized.
However, Sun Microsystems offers support for such an architecture.
It is also possible to install Oracle RAC with CRS in a Solaris container without a zone cluster but it
will not yet be certified. The disadvantage of such a configuration consists in the fact that solely
exclusive-IP configurations can be used, which unnecessarily increases the number of required
network interfaces (if not using VLAN interfaces).
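A zone cluster is administered from the global zone of the cluster with the clzonecluster command; a rough sketch of the life cycle (the cluster name is an assumed example, the interactive configuration dialog is omitted):

   # Create or modify the zone cluster configuration (opens a zonecfg-like shell)
   clzonecluster configure zc-oracle
   # Install the zone cluster on all configured cluster nodes and boot it
   clzonecluster install zc-oracle
   clzonecluster boot zc-oracle
   # Show the status of the virtual nodes
   clzonecluster status zc-oracle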
2.2. Virtualization technologies compared
[ug] Conventional data center technologies include
• Applications on separate computers
This also includes multi-tier architectures with firewall, load balancing, web and application
servers and databases.
• Applications on a network of computers
This includes distributed applications and job systems.
• Many applications on a large computer
The separation of applications on computers simplifies the installation of the applications but
increases administrative costs since the operating systems must be installed several times and
maintained separately. Furthermore, computers are usually underutilized (< 30%).
Distributed applications are applications running simultaneously on several computers that
communicate (MPP computers, MPI software, etc.) via a network (TCP/IP, Infiniband, Myrinet, etc.).
For job systems, the computation is broken up into self-contained sub-jobs with dependencies and is
carried out on several computers by a job scheduler, which also undertakes data transport (grid
system).
Both alternatives require high network capacity and appropriate applications, and they make sense only in
areas where the applications are already adapted to this type of computing. Modifying an
application is a major step and is rarely performed for today's standard applications. However, this
technology can become more interesting in the future with new applications.
Mainframes and larger Unix systems are operated today by running many applications in one
computer. The advantages are better system utilization (several applications) and a smaller number
of operating system installations to be serviced. It is therefore exactly this variant that is of interest
for consolidation in the data center.
The challenges consist in creating an environment for the applications where the latter can run
independently (separation) while still sharing resources with each other to save costs. Particularly
interesting areas are:
• Separation. How far separated are the environments of the applications?
• Application. How does the application fit into the environment?
• Effects on software maintenance
• Effects on hardware maintenance
• Delegation: Can administrative tasks be delegated to the environment?
• Scaling of the environment?
• Overhead of the virtualization technology?
• Can different OS versions be used in the environments?
A variety of virtualization techniques were developed for this purpose and are presented below.
For comparison, see also: http://en.wikipedia.org/wiki/Virtualization
2.2.1. Domains/physical partitions
[ug] A computer can be partitioned by configuration into sub-computers (domain, partition). Domains
are almost completely physically separated since electrical connections are turned off. Shared parts
are either very failsafe (cabinet) or redundantly structured (service processor, power supplies).
Advantages:
• Separation: Applications are well separated from each other; mutual influence via the OS or
failed shared hardware is not possible.
• Application: All applications are executable as long as they are executable in the basic operating
system.
• Scalability: The capacity of a virtualization instance (here: a domain) can be changed for some
implementations while running (dynamic reconfiguration) by relocating hardware resources
between domains.
• HW maintenance: If one component fails and the domain is constructed appropriately, the
application can still run. Dynamic reconfiguration allows repairs to be performed while running
(in redundant setups). A cluster must be set up only to intercept total failures (power supply,
building on fire, data center failure, software errors).
• OS versions: The partitions are able to run different operating systems/versions.
Disadvantages:
• OS maintenance: Each machine has to be administered separately. OS installation, patches
and the implementation of in-house standards must be done separately for each machine.
• Delegation: The department responsible for the application/service requires root privileges, or
must communicate with computer operations regarding modifications. All aspects of the
operating system can be administered in the physical partition. This can affect security and can
become costly/time-consuming.
• Overhead: Each machine has a separate operating system overhead.
Sun offers domains in the high-end servers SunFire E20K, E25K, the mid-range servers SunFire
E2900, E4900, E6900 and the Sun SPARC Enterprise M4000, M5000, M8000 and M9000.
Figure 2: [dd] Domains/Physical domains
This virtualization technology is provided by several manufacturers (Sun Dynamic System Domains,
Fujitsu-Siemens Partitions, HP nPars). HW support and (a little) OS support are required.
2.2.2. Logical partitions
[ug] A minimal operating system called the hypervisor, which virtualizes the interface between the
hardware and the OS of a computer, runs on the computer's hardware. Separate operating systems
(guest operating systems) can be installed on the resulting so-called virtual machines.
In some implementations, the hypervisor runs as a normal application program; this involves
increased overhead.
Virtual devices are usually created from real devices by emulation; real and virtual devices are
assigned to the logical partitions by configuration.
Advantages:
• Application: All applications of the guest operating system are executable.
• Scalability: The capacity of a logical partition can be modified in some cases while running,
when the OS and the hypervisor support this.
• Separation: Applications are separated from each other; direct mutual influence via the OS is
not possible.
• OS versions: The partitions are able to run different operating systems/versions.
Disadvantages:
• HW maintenance: If a shared component fails, many or all logical partitions may be affected. An
attempt is made, however, to recognize symptoms of future failure by preventive analysis, in
order to segregate errors in advance.
• Separation: The applications can influence each other via shared hardware. One example of
this is the virtual network, since the hypervisor has to emulate a switch. Another example is
virtual disks that are located together on a real disk and "pull" the disk head away from each
other. To prevent this from happening, real network interfaces or dedicated disks can be used,
which, however, increases the cost of using logical partitions.
• OS maintenance: Each partition has to be administered separately. OS installation, patches and
the implementation of in-house standards must be done separately for each partition.
• Delegation: The department responsible for the application/service requires root privileges, or
must communicate with computer operations regarding modifications. All aspects of the
operating system can be administered in the logical partition. This can affect security and can
become costly/time-consuming.
• Overhead: Each logical partition has its own operating system overhead; in particular, the main
memory requirements of each individual system remain.
Figure 3: [dd] Logical partitions
Logical partitioning systems include the IBM VM operating system, IBM LPARs on z/OS and AIX, HP
vPars, as well as VMware and XEN. Sun offers Logical Domains (SPARC: since Solaris 10 11/06) as
well as Sun xVM VirtualBox (x86 and x64 architectures).
The Sun xVM Server for x64 architectures is being developed in collaboration with the XEN community.
The virtualization component (xVM hypervisor) can already be used since OpenSolaris 2009.06.
2.2.3. Containers (Solaris zones) in an OS
[ug] In an operating system installation, execution environments for applications and services are
created that are independent of each other. The kernel becomes multitenant enabled: it exists only
once but appears in each zone as though it was assigned exclusively.
Separation is implemented by restricting access to resources, such as e.g. the visibility of processes
(modified procfs), the usability of the devices (modified devfs) and the visibility of the file tree (as with
chroot).
Advantages:
• Application: All applications are executable unless they use their own drivers or other system-
oriented features. Separate drivers can, however, be used via installations in the global zone.
• Scalability: Container capacity can be configured (through resource management, processor
sets and CPU caps).
• Separation: Applications are separated from each other; direct mutual influence via the OS is
not possible.
• OS maintenance: OS installation, patches and implementation of in-house standards must take
place in a central location (in the global zone) only.
• Delegation: The department responsible for the application/ service requires root privileges for
part of the administration. Here, it can obtain the root privileges within the zone without being in
a position to affect other local zones or the global zone. The right to allocate resources is
reserved to the global zone only.
• Overhead: All local zone processes are merely normal application processes from the point of
view of the global zone. The OS overhead (memory management, scheduling, kernel) and
memory requirements for shared objects (files, programs, libraries) are created only once. Each
zone has only a small additional number of system processes. For that reason, it is possible to
have hundreds of zones on a single-processor system.
Disadvantages:
• HW maintenance: If a shared component fails, many or all zones may be affected. Solaris 10
recognizes symptoms of a future failure through FMA (Fault Management Architecture) and can
deactivate the affected components (CPU, memory, bus systems) while running, or instead use
alternative components that are available. Through the use of cluster software (Sun Cluster),
the availability of the application in the zone can be improved (Solaris Container Cluster/ Solaris
Container Agent).
• Separation: The applications can influence each other through shared hardware. That influence
can be minimized in Solaris with resource management and network bandwidth management.
• OS versions: Different operating systems/versions are possible with branded zones only. Here,
a virtual process environment for another operating system is created in one zone but the kernel
of the global zone is used by the branded zones as well.
Figure 4: [dd] Container (Solaris zones) in an OS
Implementations include jails in the BSD operating system, zones in Solaris, and the vserver project
in Linux. No special HW support is required.
2.2.4. Consolidation in one computer
[ug] The applications are installed on one computer and run under different userids. This is the type of
consolidation feasible with modern operating systems.
Advantages:
• Application: All applications are executable as long as they are executable in the basic operating
system and do not use their own OS drivers. However, there are restrictions if different versions
of the application with a defined install directory are required, or if two instances require the
same userid (e.g. Oracle instances), or if the configuration files are located in the same
locations in the file system.
• Scalability: The capacity of an application can be modified online.
• OS maintenance: OS installation, patches and implementation of in-house standards must take
place for one OS only. That is to say, many applications can be run with the administrative effort
for one machine only.
• Overhead: Overhead is low since only the application processes must run for each application.
Disadvantages:
• HW maintenance: If a shared component fails, many or all applications may be affected.
• OS maintenance: Administration becomes complicated as soon as applications are based on
different versions of a software (e.g. versions for Oracle, Weblogic, Java, etc.). Such a system
becomes difficult to control without accurate documentation and change management. Any error
in the documentation that is not noticed immediately can have fatal consequences in an
upgrade (HW or OS) later on.
• Separation: The applications can influence each other through shared hardware and the OS. In
Solaris, the influence can be reduced by using resource management and network bandwidth
management.
• Delegation: The department responsible for the application/service requires root privileges for a
portion of the job control or must communicate with computer operations regarding
modifications. This can therefore affect security or become more costly/time-consuming.
• OS versions: Different operating systems/versions are not possible.
Figure 5: [dd] Consolidation in one computer
Many modern operating systems facilitate this type of consolidation, which allows several software
packages and, if necessary, even different versions of the same software to be installed.
2.2.5. Summary of virtualization technologies
[ug] The virtualization technologies discussed above can be summarized in the following table –
compared to installation on a separate computer.
• Separation: separate computer: +; domains/physical partitions: O; logical partitions: + (software), - (hardware); containers (Solaris zones) in an OS: + (software), - (hardware); consolidation in one computer: -
• Application: separate computer: +; domains: +; logical partitions: +; containers: +; consolidation: -
• SW maintenance: separate computer: -; domains: -; logical partitions: -; containers: +; consolidation: O
• HW maintenance: separate computer: +; domains: +; logical partitions: O; containers: O; consolidation: O
• Delegation: separate computer: -; domains: -; logical partitions: -; containers: +; consolidation: -
• Scalability: separate computer: O; domains: +; logical partitions: O; containers: +; consolidation: +
• Overhead: separate computer: -; domains: -; logical partitions: -; containers: +; consolidation: +
• OS versions: separate computer: one each; domains: several; logical partitions: several; containers: one; consolidation: one
Table 2: [ug] Summary of virtualization technologies
Key for the meaning of the symbols:
• + good
• O is neither good nor bad
• - means: has disadvantages
Separation is of course best in a stand-alone computer. Physical partitions share a cabinet and power
supplies (a fire can harm every partition), although the partitions themselves are independent. LPARs,
LDoms and containers are separated in the OS environment only. With consolidation in one operating
system, everything that is visible to the applications is visible to every application; in all other cases,
applications are separated from each other.
Unified SW maintenance can only be performed with zones and consolidation in one computer. The
other technologies require multiple maintenance.
HW maintenance on the machine of the application is practicable only for domains or separate
computers without affecting other applications, unless mobile Lpars/LDom or cluster technologies with
flying zones are used.
The delegation of portions of administrative tasks is possible for containers only. All other
technologies require that the tasks be defined exactly and that dedicated roles are assigned. This is
costly and time-consuming.
Scalability for separate computers and Lpars is limited by the hardware in use (defined performance
of the shared interconnect), while domain capacity can be extended by adding hardware.
Containers and consolidation on one computer run on bigger computers without adaptation of the
application. Additional reserves can be used by relocating further containers to this computer.
Overhead is higher for separate computers and physical and logical partitioning because one
operating system with CPU and memory requirements runs per application. Containers and
consolidation on one computer share one operating system and are therefore considerably more
economical with regard to their consumption of resources. The lower the resource requirements of an
application, the more pronounced the effect.
The OS version of each operating system installation must be maintained separately. This means
overhead. Therefore, for separate computers as well as physical and logical virtualization, more effort
must be expended than for containers or consolidation on one computer. However, multiple OS
versions enable the use of different versions of the OS if required by the applications. The
assessment in this regard depends on data center policies and operation purposes.
The overhead of several OS instances can be reduced by using management software such as
Sun xVM OpsCenter, which requires a certain investment.
Figure 6: [dd] Comparison of virtualization technologies
(Diagram contrasting physical virtualisation (domains/physical partitions, HW error containment),
logical virtualisation (logical partitions, separate OS / self-contained memory), OS virtualisation
(containers, isolation of the OS environment) and resource management (consolidation in one
computer, OS and memory sharing) with respect to the sharing of OS environments.)
3. Use Cases
This chapter discusses a variety of use cases for Solaris Containers and evaluates them.
3.1. Grid computing with isolation
Requirement
[ug] There is a need within a company to use free cycles on existing computers to perform a certain
amount of computing work in the background.
Grid software such as Sun GridEngine is suitable for this. However, the processes then running in the
grid should be blocked from seeing other processes in the system.
Solution
[ug] Sun GridEngine is installed in one zone on computers that have spare capacity:
• Sparse-root zones are used (4.1.1 Sparse-root zones).
• Software and data are installed in the global zone per NFS.
• Sun GridEngine is installed in the local zone.
• Automatic creation of the zone (5.1.14 Automatic quick installation of zones).
• Software installation per lofs from a global zone file system (see the sketch after this list).
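A minimal sketch of such a grid zone, assuming the hypothetical zone name gridzone1, the interface bge0 and a software directory /export/gridsoft in the global zone (all names, addresses and paths are examples only):

    # Sparse-root zone; /export/gridsoft from the global zone is made available
    # read-only inside the zone as /gridsoft via lofs
    zonecfg -z gridzone1 "create; set zonepath=/zones/gridzone1; \
      add net; set physical=bge0; set address=192.168.1.101; end; \
      add fs; set dir=/gridsoft; set special=/export/gridsoft; \
      set type=lofs; add options ro; end; commit"

    # Install and boot the zone; Sun GridEngine is then installed inside it
    zoneadm -z gridzone1 install
    zoneadm -z gridzone1 boot

Grid processes running in gridzone1 see only the processes of that zone; the applications in the global zone (or in other zones) remain invisible to them.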
Assessment
[ug] This use case has the following characteristics:
• Computers can use spare capacities at defined times.
• Other application data remain protected from inspection. This frequently increases the
willingness of the persons responsible for the application to make the computers available for
this purpose.
• The applications department can "sell" the spare capacity to grid users.
Above all, this economizes on computing capacity.
Figure 7: [dd] Use case: Grid computing with isolation
3.2. Small web servers
Requirement
[ug] One of the following situations exists:
• An Internet Service Provider (ISP) would like to have the option to set up web servers
automatically, without additional costs. Based on this technology, the ISP wants to create an
attractive offer for web servers with root access.
• The data center of a major company has many requests from departments for internal web
servers for the purpose of information exchange between the departments.
It is in the customers' interest to be able to publish web content as simply as possible, with few
rules and little coordination (low effort). The requesting department therefore wants a web server
that nobody else works with. The data center is interested in keeping administrative costs low; it
should therefore not be a separate computer. The data traffic will most likely be small and does
not justify a dedicated computer.
Solution
[ug] The web servers are implemented by automatically installed zones. The following details are
used in particular:
• Sparse-root zones, that is, zones inherit everything, if possible, from the global zone (4.1.1
Sparse-root zones).
• The software directory of the web server, e.g. /opt/webserver, is not inherited
(inherit-pkg-dir) in order to facilitate different versions of the web server.
• Automatic creation of a zone per script.
• Automatic system configuration in the zone with sysidcfg (see the sketch after this list).
• Automatic IP address administration.
• Option: Automatic quick installation of a zone.
• Option: Application administrator with root access.
• Option: Software installation per mount.
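A sketch of such an automated setup, assuming the hypothetical zone web1, the interface bge0 and web server software under /opt/webserver (all names are examples; the encrypted root password is abbreviated):

    # Sparse-root zone; if the configuration inherits /opt, remove that
    # inherit-pkg-dir so that the zone can hold its own web server version
    zonecfg -z web1 "create; set zonepath=/zones/web1; \
      remove inherit-pkg-dir dir=/opt; \
      add net; set physical=bge0; set address=192.168.2.10; end; commit"

    zoneadm -z web1 install

    # --- /zones/web1/root/etc/sysidcfg, created before the first boot so ---
    # --- that the zone comes up without interactive questions            ---
    system_locale=C
    terminal=vt100
    timezone=Europe/Berlin
    network_interface=PRIMARY {hostname=web1}
    name_service=NONE
    security_policy=NONE
    nfs4_domain=dynamic
    root_password=<encrypted password>
    # -----------------------------------------------------------------------

    zoneadm -z web1 boot

A small script can generate the zonecfg command string and the sysidcfg file from a template, so that a new web server zone is created with a single call.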
Assessment
[ug] This use case has the following characteristics:
• The operating division's expenses for creating the zones are low.
• Since the expected load is very small, the consolidation effect is very large since only active
processes in the zone require resources.
• The users of the automatically generated web servers in the zones are free to use a variety of
different versions without having to re-educate themselves. That is to say, their costs are low
and there is hardly any need for training.
• The web servers can work with the standard ports and the preset container IP addresses – no
special configurations are required as when several web servers are to be operated on one
system.
Figure 8: [dd] Use case: Small web server
3.3. Multi-network consolidation
Requirement
[dd] A company uses several different networks that are separated either by firewalls or by routers.
Applications are run in the individual networks. The company would like to run applications from
different networks or security areas together on one physical system, since a single application
does not require the capacity of an entire system.
Solution
[dd] The individual applications are installed in one zone each. Zones are clustered together on
physical servers according to certain criteria (redundancy, similar application, load behavior, etc.).
Routing between the zones is switched off to separate the networks. The following details are used in
particular:
• Creation of zones.
• Zones as runtime environments for one application each.
• Routing of the global zone on the interfaces is switched off so that zones cannot reach each
other. That is, the zones can only reach addresses in their respective network.
• Use of exclusive-IP instances (see the sketch after this list).
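A sketch of both variants mentioned above, assuming a hypothetical zone appA with its own interface e1000g1 (exclusive-IP requires a dedicated physical or VLAN interface per zone and Solaris 10 8/07 or later):

    # Exclusive-IP zone: the zone gets its own IP instance on e1000g1;
    # address, routing and filtering are configured inside the zone
    zonecfg -z appA "create; set zonepath=/zones/appA; set ip-type=exclusive; \
      add net; set physical=e1000g1; end; commit"

    # For shared-IP zones: switch off IP forwarding in the global zone so
    # that zones in different networks cannot reach each other through it
    routeadm -d ipv4-forwarding
    routeadm -u    # apply to the running system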
Assessment
[dd] This use case has the following characteristics:
• The network structure is simplified by economizing routes and routers.
• The number of required systems is reduced.
• Applications can be organized according to new aspects, e.g. all web servers on a physical
server, or e.g. T2000 are used for web servers, T1000 are used for proxy servers, UltraSPARC
IV+ systems for all databases, etc.
• The global zone can be used as the central administrative authority for all zones in a system. A
separate administrative network can be placed on the global zones.
• Application administration is located within the zone. If the same applications are clustered
together on systems, an application administrator can administer all applications in the zones
out of the global zone more easily, or can simplify administration by the use of sparse root
zones.
Figure 9: [dd] Use case: Multi-network consolidation
3.4. Multi-network monitoring
Requirement
[dd] A company has several different networks that are separated into several levels either by
firewalls or by routers. A variety of computers are installed in the individual networks. Administration
is to be simplified, and the company would like to be able to "look into" all the networks directly from a
central location and administer without having to connect the networks by routing.
Solution
[dd] A central monitoring and administration server is installed. On this server, several zones are
created, each of which has a connection to one of the networks. Monitoring and administration of
the computers in the individual networks is done from these zones. The following details are used in particular:
• Sparse-root zones, that is, the zones inherit everything, if possible, from the global zone.
• All zones use the same monitoring and administration tools.
• Monitoring data are stored in file systems that are shared between zones (see the sketch after this list).
• Data can be evaluated from a local zone or centrally from the global zone.
• From a central location (the global zone), central configuration files can be distributed directly to
all zones or to all systems in the networks. Circuitous paths via routers and firewalls are omitted.
• Routing between zones must be turned off.
• Option: Use exclusive-IP instances.
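A sketch of the shared file system for monitoring data, assuming a hypothetical zone monA and a global-zone directory /export/monitoring with one subdirectory per zone (an example layout):

    # Make /export/monitoring/monA from the global zone available inside
    # the zone monA as /var/monitoring; the data written there can then be
    # evaluated centrally from the global zone
    zonecfg -z monA "add fs; set dir=/var/monitoring; \
      set special=/export/monitoring/monA; set type=lofs; end; commit"

    # The additional file system is mounted at the next boot of the zone
    zoneadm -z monA reboot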
Assessment
[dd] This use case has the following characteristics:
• The operating division's expenses for creating the zones are low.
• The administrative overhead decreases for systems in the networks since no multiple login via
routers or firewalls must be performed.
• A single point of administration can be created.
• Relief of the strain on routers and firewalls stemming from network load and additional
configurations.
• Use of uniform monitoring tools.
• Use of uniform configurations is simplified.
Figure 10: [dd] Use case: Multi-network monitoring
3.5. Multi-network backup
Requirement
[dd] A company has several different networks that are separated in different stages either by
firewalls or by routers. Different computers are installed in the individual networks. The backup is to
be simplified by allowing direct system backups in the individual networks to be performed from one
location without having to connect the networks by routing.
Solution
[dd] A server with backup server software is installed. Several zones are created on this server,
each with a network connection to one of the networks. Backup clients are started in the zones;
they either communicate with the backup server in the global zone via the internal network, or they
act as backup servers themselves and write directly to a backup device made available to the zone.
The following details are used in particular:
• Sparse root zones, that is, the zones inherit everything possible from the global zone.
• The backup client or backup server software is installed separately for each zone.
• A device is provided from the global zone to a local zone (see the sketch after this list).
• Network setup to connect from the global to the local zone.
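A sketch of passing a backup device into a zone, assuming a hypothetical zone backupB and the tape drive /dev/rmt/0 (device names are examples; only devices that are really needed should be made visible in a zone):

    # Make the tape drive visible inside the zone so that the backup
    # software running there can write to it directly
    zonecfg -z backupB "add device; set match=/dev/rmt/0*; end; commit"

    # The device nodes appear in the zone at the next boot
    zoneadm -z backupB reboot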
Assessment
[dd] This use case has the following characteristics:
• The operating division's expenses for creating the zones are low.
• Backup and restore can be organized and carried out from one central location.
• Higher backup speeds can be achieved by direct backup. Routers or firewalls are not burdened
by backup data.
• Possibly, licensing costs are saved for backup software.
• Backup hardware is shared among departments and networks.
Unfortunately, it has turned out so far that some software manufacturers have not yet released their
backup server software for use in zones. This use case is therefore applicable with some
restrictions only. Backup clients, in contrast, have in several cases already been released for use in zones.
Figure 11: [dd] Use case: Multi-network backup
3.6. Consolidation development/test/integration/production
Requirement
[ug] Usually, further systems supporting the same application exist while an application is in
production:
• Development systems
• Test systems
• Integration systems, with simulation of the application environment where applicable
• Disaster recovery systems
Solution
[ug] The different systems can be implemented on a computer in zones. The details:
• Sparse-root zones, that is, the zones inherit everything possible from the global zone.
• Option: Resource pools with their own processor set for production (see the sketch after this list).
• Option: Application administrator with root access.
• Option: Software installation per mount.
• Option: OS release upgrade per flash image.
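A sketch of the optional resource pool for production, assuming a hypothetical pool prod-pool with a processor set of 4 to 8 CPUs and a zone named production:

    # Enable the resource pools facility and save a default configuration
    pooladm -e
    pooladm -s

    # Processor set with 4-8 CPUs and a pool that uses it
    poolcfg -c 'create pset prod-pset (uint pset.min = 4; uint pset.max = 8)'
    poolcfg -c 'create pool prod-pool'
    poolcfg -c 'associate pool prod-pool (pset prod-pset)'
    pooladm -c    # activate the configuration

    # Bind the production zone to the pool
    zonecfg -z production "set pool=prod-pool; commit"
    zoneadm -z production reboot

The production zone then has its own CPUs, while the development, test and integration zones share the remaining processors.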
Assessment
[ug] This use case has the following characteristics:
• Fewer computers need to be operated overall. Therefore, savings can be made on space,
power consumption and air conditioning.
• The applications can be tested on exactly the same environment that they are to be run on later.
• A switch to a new software version in production can be achieved simply by rerouting the load to
the zone with the new version. Installation after the test is not required.
Figure 12: [dd] Use case: Consolidation development/test/integration/production
3.7. Consolidation of test systems
Requirement
[ug] To test software and applications, there are many test systems in the data center environment
that are only ever used for tests. They are mostly used for qualitative tests, in which the systems are
not stressed as they are in performance tests. However, switching between test environments by
re-installation or restore of a shared test computer is out of the question: the testers want to access
their environment without waiting periods or disputes over access rights. Test systems are therefore
underutilized.
Solution
[ug] Test systems are implemented by zones on a larger server. The details:
• Sparse root zones / whole root zones, as needed.
• File system decisions analogous to the production system.
• Option: Resource management with processor sets for simultaneous tests.
• Option: Automatic zone creation.
• Option: Software installation per mount.
• Option: Moving the zone among computers (see the sketch after this list).
• Option: Automatic quick installation of a zone.
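A sketch of moving a test zone between computers, assuming a hypothetical zone testC whose zonepath /zones/testC lies on storage that both hosts can access (otherwise the directory is copied, e.g. with tar or cpio):

    # On the source host: stop and detach the zone
    zoneadm -z testC halt
    zoneadm -z testC detach

    # On the target host: take over the configuration stored in the zonepath
    zonecfg -z testC create -a /zones/testC
    zoneadm -z testC attach    # "attach -u" also updates packages/patches
    zoneadm -z testC boot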
Assessment
[ug] This use case has the following characteristics:
• The operating division requires far fewer computers serving as test systems. Far fewer
installations need to be performed.
• Expensive systems for performance tests only need to be purchased once, and not for every
application. Performance tests on the machines shared by the use of zones must, however, be
coordinated (operation).
• Testers can access the test installation at any time, at least for qualitative work, though possibly
not at full performance.
Figure 13: [dd] Use case: Consolidation of test systems
3.8. Training systems
Requirements
[ug] In training departments, computers that are provided for training participants (including
pupils/students) must frequently be reset.
Solution
[ug] The training systems are implemented by automatically installed zones:
• Sparse-root zones, that is, the zones inherit everything possible from the global zone.
• Automatic zone creation per script.
• Automatic system configuration in the zone with sysidcfg.
• Automatic IP address administration.
• Option: /opt is not inherited (inherit-pkg-dir), for installation training.
• Option: Automatic quick installation of a zone (see the sketch after this list).
• Option: Application administrator with root access.
• Option: Installation of shared software per mount.
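A sketch of resetting the training zones for the next course, assuming a prepared, halted master zone student-master and a student zone studentA (cloning is considerably faster than a full zone installation):

    # Discard the zone used in the previous course
    zoneadm -z studentA halt
    zoneadm -z studentA uninstall -F

    # Re-create it as a copy of the prepared master zone
    # (studentA keeps its zonecfg configuration; student-master stays halted)
    zoneadm -z studentA clone student-master
    zoneadm -z studentA boot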
Assessment
[ug] This use case has the following characteristics:
• Operating the training computers is extremely simple since zones can easily be created again
for the next course.
• Training participants themselves are root in the zone, which allows them to perform all essential
administrative functions without posing a hazard to the system as a whole. Re-installation of the
computer is thus not necessary.
• The trainer can see the work of the training participants from the global zone.
• The costs for the training system therefore decrease drastically.
Figure 14: [dd] Use case: Training systems