1.1    EJR    4/19/07    Updates based on comments from Stephan Mueller and Klaus Weidner
1.2    GCW    4/26/07    Incorporated Stephan's comment to remove racoon
1.2.1  GCW    10/27/08   Added legal matter missing from final draft.
Novell, the Novell logo, the N logo, and SUSE are registered trademarks of Novell, Inc. in the United States and other
countries.
IBM, IBM logo, BladeCenter, eServer, iSeries, i5/OS, OS/400, PowerPC, POWER3, POWER4, POWER4+,
POWER5+, pSeries, S390, System p, System z, xSeries, zSeries, zArchitecture, and z/VM are trademarks or registered
trademarks of International Business Machines Corporation in the United States, other countries, or both.
Linux is a registered trademark of Linus Torvalds.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Intel and Pentium are trademarks of Intel Corporation in the United States, other countries, or both. Other
company, product, and service names may be trademarks or service marks of others.
This document is provided “AS IS” with no express or implied warranties. Use the information in this document at your
own risk.
This document may be reproduced or distributed in any form without prior permission provided the copyright notice is
retained on all copies. Modified versions of this document may be freely distributed provided that they are clearly
identified as such, and this copyright is included intact.
This document describes the High Level Design (HLD) for the SUSE® Linux® Enterprise Server 10 Service
Pack 1 operating system. For ease of reading, this document uses the phrase SUSE Linux Enterprise Server and
the abbreviation SLES as a synonym for SUSE Linux Enterprise Server 10 SP1.
This document summarizes the design and Target of Evaluation Security Functions (TSF) of the SUSE Linux
Enterprise Server (SLES) operating system. Used within the Common Criteria evaluation of SUSE Linux
Enterprise Server at Evaluation Assurance Level (EAL) 4, it describes the security functions defined in the
Common Criteria Security Target document.
1.1 Purpose of this document
The SLES distribution is designed to provide a secure and reliable operating system for a variety of purposes.
This document describes the high-level design of the product and provides references to other, more detailed
design documents that describe the structure and functions of the system. This document is consistent with
those additional high-level design documents, as well as with the supporting detailed design documents for the
system, and contains pointers to them.
The SLES HLD is intended as a source of information about the architecture of the system for any evaluation
team.
1.2 Document overview
This HLD contains the following chapters:
Chapter 2 presents an overview of the IBM® eServer™ systems, including product history, system architecture,
and TSF identification.
Chapter 3 summarizes the eServer hardware subsystems, characterizes the subsystems with respect to security
relevance, and provides pointers to detailed hardware design documentation.
Chapter 4 expands on the design of the TSF software subsystems, particularly the kernel, which is identified in
Chapter 2.
Chapter 5 addresses functional topics and describes the functionality of individual subsystems, such as memory
management and process management.
Chapter 6 maps the Target of Evaluation (TOE) summary specification from the SUSE Linux Enterprise Server
Security Target to specific sections in this document.
1.3 Conventions used in this document
The following font conventions are used in this document:
Constant Width (Monospace) shows code or output from commands, and indicates source-code keywords
that appear in the code, as well as file and directory names, program and command names, and command-line
options.
Italic indicates URLs, book titles, and introduces new terms.
1.4 Terminology
For definitions of technical terms and phrases that have specific meaning for Common Criteria evaluation, please
refer to the Security Target.
2 System Overview
The Target of Evaluation (TOE) is SUSE Linux Enterprise Server (SLES) running on an IBM eServer host
computer. The SLES product is available on a wide range of hardware platforms. This evaluation covers the
SLES product on the IBM eServer System x™, System p™, and System z™, and eServer 326 (Opteron).
(Throughout this document, SLES refers only to the specific evaluation platforms).
Multiple TOE systems can be connected via a physically-protected Local Area Network (LAN). The IBM
eServer line consists of Intel processor-based System x systems, POWER5™ and POWER5+™ processor-based
System p systems, IBM mainframe System z systems, and AMD Opteron processor-based systems that
are intended for use as networked workstations and servers.
Figure 2-1 shows a series of interconnected TOE systems. Each TOE system is running the SLES operating
system on an eServer computer. Each computer provides the same set of local services, such as file, memory,
and process management. Each computer also provides network services, such as remote secure shells and
file transfers, to users on other computers. A user logs in to a host computer and requests services from the
local host and also from other computers within the LAN.
Figure 2-1: Series of TOE systems connected by a physically protected LAN
User programs issue network requests by sending Transmission Control Protocol (TCP) or User Datagram
Protocol (UDP) messages to another computer. Some network protocols, such as Secure Shell (ssh), can start
a shell process for the user on another computer, while others are handled by trusted server daemon processes.
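To make this concrete, the following is a minimal sketch, in C, of a user program issuing such a network
request over TCP. The peer address 192.0.2.10 and port 7 are illustrative placeholders only, not values taken
from the evaluated configuration.

    /* Minimal sketch: a user program sending a TCP message to another
     * computer. Address and port are illustrative placeholders. */
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);      /* TCP socket */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in peer;
        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(7);                      /* placeholder port */
        inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr);

        if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) == 0)
            write(fd, "hello\n", 6);                   /* send the request */
        else
            perror("connect");
        close(fd);
        return 0;
    }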
The TOE system provides a user Identification and Authentication (I&A) mechanism by requiring each user to
log in with a proper password at the local workstation, and also at any remote computer where the user can
enter commands to a shell program (for example, remote ssh sessions). Each computer enforces a coherent
Discretionary Access Control (DAC) policy, based on UNIX®-style mode bits and an optional Access
Control List (ACL) for the named objects under its control.
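As an illustration of the UNIX-style mode bits on which the DAC policy is based, the following hedged C
sketch reads the owner, group, and permission bits of a named object with stat(); /etc/passwd is used
only as an example path.

    /* Sketch: inspecting the DAC-relevant attributes of a named object. */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat st;
        if (stat("/etc/passwd", &st) != 0) {    /* example path */
            perror("stat");
            return 1;
        }
        printf("owner uid=%u gid=%u mode=%04o\n",
               (unsigned)st.st_uid, (unsigned)st.st_gid,
               (unsigned)(st.st_mode & 07777)); /* permission bits only */
        return 0;
    }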
This chapter documents the SUSE Linux Enterprise Server and IBM eServer product histories, provides an
overview of the TOE system, and identifies the portion of the system that constitutes the TOE Security
Functions (TSF).
2.1 Product history
This section gives a brief history of the SLES and the IBM eServer series systems.
2.1.1 SUSE Linux Enterprise Server
SUSE Linux Enterprise Server is based on version 2.6 of the Linux kernel. Linux is a UNIX-like open-source
operating system originally created in 1991 by Linus Torvalds of Helsinki, Finland. SUSE was founded in
1992 by four German software engineers, and is the oldest major Linux solutions provider.
2.1.2 eServer systems
IBM eServer systems were introduced in 2000. The IBM eServer product line brings technological
innovation, application flexibility, and autonomic capabilities for managing the heterogeneous mix of servers
required to support dynamic on-demand business. It enables customers to meet their business needs by
providing unlimited scalability, support for open standards, and mission-critical qualities of service.
Following are systems in the IBM eServer product line that are included in the TOE:
•System p: UNIX servers, technologically advanced POWER5 and POWER5+ processor-based
servers for commercial and technical computing applications.
•System x: Intel-based servers with high performance and outstanding availability.
•eServer 326: AMD Opteron-based servers with outstanding value in high performance computing in
both 32-bit and 64-bit environments.
•BladeCenter®: Intel Xeon, AMD Opteron, PowerPC, POWER5, and POWER5+ processor-based
servers.
Since introducing eServers in 2000, new models with more powerful processors have been added to the
System x, System p, and System z lines. The AMD Opteron processor-based eServer 325 was added to the
eServer series in 2003; the eServer 326 is the next iteration of that model with updated components. The
AMD Opteron eServer 326 is designed for powerful scientific and technical computing. The Opteron
processor supports both 32-bit and 64-bit architectures, thus allowing easy migration to 64-bit computing.
2.2 High-level product overview
The TOE consists of SLES running on an eServer computer. The TOE system can be connected to other
systems by a protected LAN. SLES provides a multi-user, multi-processing environment, where users
interact with the operating system by issuing commands to a command interpreter, by running system
utilities, or by developing their own software to run in their own protected environments.
The Common Criteria for Information Technology Security Evaluation [CC] and the Common Methodology
for Information Technology Security Evaluation [CEM] demand breaking the TOE into logical subsystems
that can be either (a) products, or (b) logical functions performed by the system.
The approach in this section is to break the system into structural hardware and software subsystems that
include, for example, pieces of hardware such as planars and adapters, or collections of one or more software
processes such as the base kernel and kernel modules. Chapter 4 explains the structure of the system in terms
of these architectural subsystems. Although the hardware is also described in this document, the reader should
be aware that while the hardware itself is part of the TOE environment, it is not part of the TOE.
The following subsections present a structural overview of the hardware and software that make up an
individual eServer host computer. This single-computer architecture is one of the configurations permitted
under this evaluation.
2.2.1 eServer host computer structure
This section describes the structure of SLES for an individual eServer host computer. As shown in Figure 2-2,
the system consists of eServer hardware, the SLES kernel, trusted non-kernel processes, TSF databases, and
untrusted processes. In this figure, the TOE itself consists of Kernel Mode software, User Mode software,
and hardware. The TOE Security Functions (TSF) are shaded in gray. Details such as interactions within the
kernel, inter-process communications, and direct user access to the hardware are omitted.
Figure 2-2: Overall structure of the TOE
The planar components, including CPUs, memory, buses, on board adapters, and support circuitry; additional
adapters, including LAN and video; and, other peripherals, including storage devices, monitors, keyboards,
and front-panel hardware, constitute the hardware.
The SLES kernel includes the base kernel and separately-loadable kernel modules and device drivers. (Note
that a device driver can also be a kernel module.) The kernel consists of the bootable kernel image and its
loadable modules. The kernel implements the system call interface, which provides system calls for file
management, memory management, process management, networking, and other TSF (logical subsystems)
functions addressed in the Functional Descriptions chapter of this document. The structure of the SLES kernel
is described further in the Software Architecture chapter of this paper.
Non-kernel TSF software includes programs that run with administrative privilege, such as the sshd,
cron, atd, and vsftpd daemons. The TSF also includes the configuration files that define authorized
users, groups of users, services provided by the system, and other configuration data. Not included as TSF
are shells used by administrators, and standard utilities invoked by administrators.
The SLES system, which includes hardware, kernel-mode software, non-kernel programs, and databases,
provides a protected environment in which users and administrators run the programs, or sequences of CPU
instructions. Programs execute as processes with the identity of the users that started them (with some
exceptions defined in this paper), and with privileges as dictated by the system security policy. Programs are
subject to the access control and accountability processes of the system.
2.2.2 eServer system structure
The system is an eServer computer, which permits one user at a time to log in to the computer console.
Several virtual consoles can be mapped to a single physical console. Different users can log in through
different virtual consoles simultaneously. The system can be connected to other computers via physically and
logically protected LANs. The eServer hardware and the physical LAN connecting the different systems
running SLES are not included within the evaluation boundary of this paper. External routers, bridges, and
repeaters are also not included in the evaluation boundary of this paper.
A standalone host configuration operates as a CC-evaluated system, which can be used by multiple users at a
time. Users can operate by logging in at the virtual consoles or serial terminals of a system, or by setting up
background execution jobs. Users can request local services, such as file, memory, and process management,
by making system calls to the kernel. Even though interconnection of different systems running SLES is not
included in the evaluation boundary, the networking software is loaded. This aids in a user’s request for
network services (for example, FTP) from server processes on the same host.
Another configuration provides a useful network configuration, in which a user can log in to the console of
any of the eServer host computers, request local services at that computer, and also request network services
from any of the other computers. For example, a user can use ssh to log into one host from another, or scp
to transfer files from one host to another. The configuration extends the single LAN architecture to show that
SLES provides Internet Protocol (IP) routing from one LAN segment to another. For example, a user can log
in at the console of a host in one network segment and establish an ssh connection to a host in another
network segment. Packets on the connection travel across a LAN segment, and they are routed by a host in
that segment to a host on another LAN segment. The packets are eventually routed by the host in the second
LAN segment to a host on a third LAN segment, and from there are routed to the target host. The number of
hops from the client to the server is irrelevant to the security provided by the system, and is transparent to
the user.
The hosts that perform routing functions have statically-configured routing tables. When the hosts use other
components for routing (for example, a commercial router or switches), those components are assumed to
perform the routing functions correctly, and do not alter the data part of the packets.
If other systems are to be connected to the network of TOE systems, which are connected via a physically
protected LAN, then they need to be configured and managed by the same authority, using an appropriate
security policy that does not conflict with the security policy of the TOE.
2.2.3 TOE services
Each host computer in the system is capable of providing the following types of services:
•Local services to the users who are currently logged in to the system using a local computer console,
virtual consoles, or terminal devices connected through physically protected serial lines.
•Local services to previously logged-in users via deferred jobs; an example is the cron daemon.
•Local services to users who have accessed the local host via the network using a protocol such as
ssh, which starts a user shell on the local host.
•Network services to potentially multiple users on either the local host or on remote hosts.
Figure 2-3 illustrates the difference between local services that take place on each local host computer, versus
network services that involve client-server architecture and a network service layer protocol. For example, a
user can log in to the local host computer and make file system requests or memory management requests for
services via system calls to the kernel of the local host. All such local services take place solely on the local
host computer and are mediated solely by trusted software on that host.
Figure 2-3: Local and network services provided by SLES
Network services, such as ssh or ftp, involve client-server architecture and a network service-layer
protocol. The client-server model splits the software that provides a service into a client portion that makes
the request, and a server portion that carries out the request, usually on a different computer. The service
protocol is the interface between the client and server. For example, User A can log in at Host 1, and then use
ssh to log in to Host 2. On Host 2, User A is logged in from a remote host.
On Host 1, when User A uses ssh to log in to Host 2, the ssh client on Host 1 makes protocol requests to an
ssh server process on Host 2. The server process mediates the request on behalf of User A, carries out the
requested service, if possible, and returns the results to the requesting client process.
Also, note that the network client and server can be on the same host system. For example, when User B uses
ssh to log in to Host 2, the user's client process opens an ssh connection to the ssh server process on Host 2.
Although this process takes place on the local host computer, it is distinguished from local services because it
involves networking protocols.
2.2.4 Security policy
A user is an authorized individual with an account. Users can use the system in one of three ways:
1. By interacting directly with the system through a session at a computer console (in which case the
user can use the graphical display provided as the console), or
2. By interacting directly with the system through a session at a serial terminal, or
3. Through deferred execution of jobs using the cron and atd utilities.
A user must log in at the local system in order to access the protected resources of the system. Once a user is
authenticated, the user can access files or execute programs on the local computer, or make network requests
to other computers in the system.
The only subjects in the system are processes. A process consists of an address space with an execution
context. The process is confined to a computer; there is no mechanism for dispatching a process to run
remotely (across TCP/IP) on another host. Every process has a process ID (PID) that is unique on its local
host computer, but PIDs are not unique throughout the system. As an example, each host in the system has an
init process with PID 1. Section 5.2 of this document explains how a parent process creates a child by making
a clone(), fork(), or vfork() system call; the child can then call execve() to load a new program.
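The following minimal C sketch illustrates that sequence: the parent calls fork(), the child calls execve()
to load a new program, and the parent waits for the child. /bin/true is used only as an example program,
and error handling is abbreviated.

    /* Sketch of process creation: fork() a child, execve() a new image. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* create a child process */
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {                     /* child: replace the image */
            char *argv[] = { "/bin/true", NULL };
            char *envp[] = { NULL };
            execve("/bin/true", argv, envp);
            perror("execve");               /* reached only on failure */
            _exit(127);
        }
        int status;                         /* parent: wait for the child */
        waitpid(pid, &status, 0);
        printf("child %d exited with status %d\n",
               (int)pid, WEXITSTATUS(status));
        return 0;
    }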
Objects are passive repositories of data. The TOE defines three types of objects: named objects, storage
objects, and public objects. Named objects are resources, such as files and IPC objects, which can be
manipulated by multiple users using a naming convention defined at the TSF interface. A storage object is an
object that supports both read and write access by multiple non-trusted subjects. Consistent with these
definitions, all named objects are also categorized as storage objects, but not all storage objects are named
objects. A public object is an object that can be publicly read by non-trusted subjects and can be written only
by trusted subjects.
SLES enforces a DAC policy for all named objects under its control, and an object reuse policy for all storage
objects under its control. Additional access control checks are possible if an optional kernel module, such as
AppArmor, is loaded. If AppArmor is loaded, the DAC policy is enforced first, and the additional access
control checks are made only if DAC would allow the access. The additional checks are non-authoritative;
that is, a DAC policy denial cannot be overridden by the additional access control checks in the kernel
module.
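The following C sketch illustrates this ordering only; dac_allows() and module_allows() are
hypothetical stand-ins for illustration and do not correspond to actual kernel functions.

    /* Illustration of the access-check ordering described above. */
    #include <stdbool.h>

    /* Hypothetical stand-ins for the real checks, for illustration only. */
    static bool dac_allows(int uid, int obj)    { (void)uid; (void)obj; return true; }
    static bool module_allows(int uid, int obj) { (void)uid; (void)obj; return true; }

    /* DAC is authoritative for denials; a loaded module such as AppArmor
     * can only further restrict access that DAC would already allow. */
    static bool access_permitted(int uid, int obj, bool module_loaded)
    {
        if (!dac_allows(uid, obj))
            return false;          /* DAC denial cannot be overridden */
        if (module_loaded && !module_allows(uid, obj))
            return false;          /* additional, non-authoritative check */
        return true;
    }

    int main(void)
    {
        return access_permitted(1000, 42, true) ? 0 : 1;
    }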
While the DAC policy that is enforced varies among different object classes, in all cases it is based on user
identity and on group membership associated with the user identity. To allow for enforcement of the DAC
policy, all users must be identified, and their identities must be authenticated. The TOE uses both hardware
and software protection mechanisms.
The hardware mechanisms used by SLES to provide a protected domain for its own execution include a
multistate processor, memory segment protection, and memory page protection. The TOE software relies on
these hardware mechanisms to implement TSF isolation, non-circumventability, and process address-space
separation.
A user can log in at the console, at other directly attached terminals, or through a network connection.
Authentication is based on a password entered by the user and authentication data stored in a protected file.
Users must log in to a host before they can access any named objects on that host. Some services, such as
ssh to obtain a shell prompt on another host, or ftp to transfer files between hosts in the distributed system,
require the user to re-enter authentication data to the remote host. SLES permits the user to change passwords
(subject to TOE enforced password guidelines), change identity, submit batch jobs for deferred execution, and
log out of the system. The Strength of Function Analysis [VA] shows that the probability of guessing a
password is sufficiently low given the minimum password length and maximum password lifetime.
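As a hedged back-of-the-envelope illustration of the kind of estimate such an analysis makes (the actual
parameters are defined in [VA], not here), assume 8-character passwords drawn from a 62-symbol alphabet
and an attacker limited to one million guesses within one password lifetime:

    /* Illustration only: password-guessing probability under assumed
     * parameters, not the parameters of the evaluated analysis.
     * Compile with: cc prog.c -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double space   = pow(62.0, 8.0); /* ~2.18e14 candidate passwords */
        double guesses = 1e6;            /* assumed guesses per lifetime */
        printf("P(guess) <= %.2e\n", guesses / space);  /* ~4.6e-9 */
        return 0;
    }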
The system architecture provides TSF self-protection and process isolation mechanisms.
2.2.5 Operation and administration
The eServer networks can be composed of one, several, or many different host computers, each of which can
be in various states of operation, such as being shut down, initializing, being in single-user mode, or online in
a secure state. Thus, administration involves the configuration of multiple computers and the interactions of
those computers, as well as the administration of users, groups, files, printers, and other resources for each
eServer system.
The TOE provides the useradd, usermod, and userdel commands to add, modify, and delete a user
account. It provides the groupadd, groupmod, and groupdel commands to add, modify, and delete a
group from the system. These commands accept options to set up or modify various parameters for accounts
and groups. The commands modify the appropriate TSF databases and provide a safer way than manual
editing to update authentication databases. Refer to the appropriate command man pages for detailed
information about how to set up and maintain users and groups.
2.2.6 TSF interfaces
The TSF interfaces include local interfaces provided by each host computer, and the network client-server
interfaces provided by pairs of host computers.
The local TSF interfaces provided by an individual host computer include:
•Files that are part of the TSF database that define the configuration parameters used by the security
functions.
•System calls made by trusted and untrusted programs to the privileged kernel-mode software. As
described separately in this document, system calls are exported by the base SLES kernel and by
kernel modules.
•Interfaces to trusted processes and trusted programs.
•Interfaces to the SLES kernel through the /proc and the /sys pseudo file systems (see the sketch
below).
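As an illustration of the /proc pseudo file system interface, the following minimal C sketch reads a
kernel-exported value through ordinary file I/O; /proc/sys/kernel/osrelease is a standard Linux entry
used here as an example.

    /* Sketch: reading a kernel-exported value via the /proc interface. */
    #include <stdio.h>

    int main(void)
    {
        char buf[128];
        FILE *f = fopen("/proc/sys/kernel/osrelease", "r");
        if (!f) { perror("fopen"); return 1; }
        if (fgets(buf, sizeof(buf), f))
            printf("kernel release: %s", buf);  /* e.g. a 2.6 version */
        fclose(f);
        return 0;
    }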
External TSF interfaces provided by pairs of host computers include SSH v2 and SSL v3.
For more detailed information about these interfaces, refer to:
•SSH v2 Proposed Standard RFC 4819 Secure Shell Public Key Subsystem
•RFC 3268 Advanced Encryption Standard (AES) Ciphersuites for Transport Layer Security (TLS)
http://www.ietf.org/rfc/rfc3268.txt
The following are interfaces that are not viewed as TSF interfaces:
•Interfaces between non-TSF processes and the underlying hardware. Typically, user processes do not
interface directly with the hardware; exceptions are processor and graphics hardware. User processes
interact with the processor by executing CPU instructions, reading and modifying CPU registers, and
modifying the contents of physical memory assigned to the process. User processes interact with
graphics hardware by modifying the contents of registers and memory on the graphics adapter.
Unprivileged processor instructions are externally visible interfaces. However, the unprivileged
processor instructions do not implement any security functionality, and the processor restricts these
instructions to the bounds defined by the processor. Therefore, this interface is not considered as part
of the TSF.
•Interfaces between different parts of the TSF that are invisible to normal users (for example, between
subroutines within the kernel) are not considered to be TSF interfaces. This is because the interface is
internal to the trusted part of the TOE and cannot be invoked outside of those parts. Those interfaces
are therefore not part of the functional specification, but are explained in this HLD.
•The firmware (PR/SM™, z/VM™, P5-LPAR), while part of the TOE, is not considered as providing
TSF interfaces because it does not allow direct unprivileged operations to it.
•System z processor exceptions reflected to the firmware, including z/VM, PR/SM, and LPAR, are not
considered to be TSF interfaces. They are not relevant to security because they provide access to the
z/VM kernel, which does not implement any security functionality.
•The System z z/VM DIAGNOSE code interface is not considered a TSF interface because it is not
accessible by unprivileged processes in the problem state, and does not provide any security
functionality.
TSF interfaces include any interface that is possible between untrusted software and the TSF.
2.3 Approach to TSF identification
This section summarizes the approach to identification of the TSF.
As stated in Section 2.2.6, while the hardware and firmware (z/VM, PR/SM, LPAR) are part of the TOE, they
are not considered as providing TSF interfaces. The SLES operating system, on the other hand, does provide
TSF interfaces.
The SLES operating system is distributed as a collection of packages. A package can include programs,
configuration data, and documentation for the package. Analysis is performed at the file level, except where a
particular package can be treated collectively. A file is included in the TSF for one or more of the following
reasons:
•It contains code, such as the kernel, kernel module, and device drivers, that runs in a privileged
hardware state.
•It enforces the security policy of the system.
•It allows setuid or setgid to a privileged user (for example, root) or group.
•It is started as a privileged daemon; an example is one started by /etc/init.d.
•It is software that must function correctly to support the system security mechanisms.
•It is required for system administration.
•It consists of TSF data or configuration files.
•It consists of libraries linked to TSF programs.
There is a distinction between non-TSF user-mode software that can be loaded and run on the system, and
software that must be excluded from the system. The following methods are used to ensure that excluded
software cannot be used to violate the security policies of the system:
•The installation software will not install any device drivers except those required for the installed
hardware. Consequently, excluded device drivers will not be installed even if they are on the
installation media.
•The installation software may change the configuration (for example, mode bits) so that a program
cannot violate the security policy.
3 Hardware architecture
The TOE includes the IBM System x, System p, System z, and eServer 326. This section describes the
hardware architecture of these eServer systems. For more detailed information about Linux support and
resources for the entire eServer line, refer to http://www.ibm.com/systems/browse/linux.
3.1 System x
IBM System x systems are Intel processor-based servers with X-architecture technology enhancements for
reliability, performance, and manageability. X-architecture is based on technologies derived from the IBM
ES™-, RS™-, and AS™-series servers.
3.1.1 System x hardware overview
The IBM System x servers offer a range of systems, from entry-level to enterprise class. The high-end
systems offer support for gigabytes of memory, large RAID configurations of SCSI and Fibre Channel disks,
and options for high-speed networking. IBM System x servers are equipped with a real-time hardware clock.
The clock is powered by a small battery and continues to tick even when the system is switched off. The
real-time clock maintains reliable time for the system. For the specification of each of the System x servers,
refer to the System x hardware Web site at http://www.ibm.com/systems/x/.
3.1.2 System x hardware architecture
The IBM System x servers are powered by Intel Xeon® and Xeon MP processors. For detailed specification
information for each of these processors, refer to the Intel processor spec-finder Web site.
The Intel Xeon processor is mainly based on EM64T technology, which has the following three operating
modes:
•32-bit legacy mode: In this mode, both AMD64 and EM64T processors will act just like any other
IA-32 compatible processor. One can install a 32-bit operating system and run 32-bit applications
on such a system, but the applications cannot make use of new features such as flat memory addressing
above 4 GB or the additional General Purpose Registers (GPRs). 32-bit applications will run just as
fast as they would on any current 32-bit processor.
•Compatibility mode: This is an intermediate mode between the legacy mode and the full 64-bit mode
described next. In this mode, one has to install a 64-bit operating system and 64-bit drivers. If a
64-bit operating system and drivers are installed, Xeon processors support a 64-bit operating system
running both 32-bit and 64-bit applications. Hence this mode has the ability to run a 64-bit operating
system while still being able to run unmodified 32-bit applications. Each 32-bit application will still
be limited to a maximum of 4 GB of physical memory. However, the 4 GB limit is now imposed on
a per-process level, not at a system-wide level.
•Full 64-bit mode: This mode is referred to as IA-32e mode. It is operative when a 64-bit
operating system and 64-bit applications are used. In the full 64-bit operating mode, an application
can have a virtual address space of up to 40 bits, which equates to 1 TB of addressable memory. The
amount of physical memory is determined by how many Dual In-line Memory Module (DIMM)
slots the server has, and the maximum DIMM capacity supported and available at the time.
In this mode, applications may access:
•64-bit flat linear addressing
•8 new general-purpose registers (GPRs)
•8 new registers for streaming Single Instruction/Multiple Data (SIMD) extensions (SSE, SSE2 and
SSE3)
•64-bit-wide GPRs and instruction pointers
•uniform byte-register addressing
•fast interrupt-prioritization mechanism
•a new instruction-pointer relative-addressing mode.
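As a hedged illustration, user-space code can detect whether a processor implements these 64-bit extensions
by checking the long-mode (LM) flag, bit 29 of EDX in CPUID leaf 0x80000001; the sketch below assumes
GCC's <cpuid.h> helper on an x86 system.

    /* Sketch: detecting 64-bit (long mode) capability from user space. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
            printf("extended CPUID leaf not supported\n");
            return 1;
        }
        printf("64-bit long mode %s\n",
               (edx & (1u << 29)) ? "supported" : "not supported");
        return 0;
    }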
For architectural details about all System x models, and for detailed information about individual components
such as memory, cache, and chipset, refer to the “Accessories & Upgrades” section at
http://www.ibm.com/systems/x/
USB (except keyboard and mouse), PCMCIA, and IEEE 1394 (Firewire) devices are not supported in the
evaluated configuration.
3.2 System p
The IBM System p systems are PowerPC, POWER5 and POWER5+ processor-based systems that provide
high availability, scalability, and powerful 64-bit computing performance.
For more detailed information about the System p hardware, refer to the System p hardware website at
http://www.ibm.com/systems/p/.
3.2.1 System p hardware overview
The IBM System p servers offer a range of systems, from entry level to enterprise class. The high-end
systems offer support for gigabytes of memory, large RAID configurations of SCSI and Fibre Channel disks,
and options for high-speed networking. The IBM System p servers are equipped with a real-time hardware
clock. The clock is powered by a small battery, and continues to tick even when the system is switched off.
The real-time clock maintains reliable time for the system. For the specification of each of the System p
servers, refer to the corresponding data sheets on the System p literature website.
For a detailed look at various peripherals such as storage devices, communications interfaces, storage
interfaces, and display devices supported on these System p models, refer to the Linux on POWER website at
http://www.ibm.com/systems/linux/power/.
3.2.2 System p hardware architecture
The IBM System p servers are powered by PowerPC™, POWER5™ and POWER5+™ processors. For
detailed specification information for each of these processors, refer to the PowerPC processor documentation
at http://www.ibm.com/chips/power/powerpc/ and POWER documentation at
http://www.ibm.com/chips/power/aboutpower/.
For architectural details about all System p models, and for detailed information about individual components
such as memory, cache, and chipset, refer to the IBM System p technical documentation at
http://publib16.boulder.ibm.com/pseries/en_US/infocenter/base/hardware.htm or
http://www.ibm.com/servers/eserver/pseries/library/.
USB (except keyboard and mouse), PCMCIA, and IEEE 1394 (Firewire) devices are not supported in the
evaluated configuration.
3.3 System z
The IBM System z is designed and optimized for high-performance data and transaction serving
requirements. On a System z system, Linux can run on native hardware, in a logical partition, or as a guest of
the z/VM® operating system. SLES runs on System z as a guest of the z/VM operating system.
For more detailed information about the System z hardware, refer to the System z hardware website at
http://www.ibm.com/systems/z/.
3.3.1 System z hardware overview
The System z hardware runs z/Architecture™ and the S/390™ Enterprise Server Architecture (ESA)
software. The IBM System z server is equipped with a real-time hardware clock. The clock is powered by a
small battery, and continues to tick even when the system is switched off. The real-time clock maintains
reliable time for the system. For a more detailed overview of the System z hardware models, or detailed
information about specific models, refer to the http://www.ibm.com/systems/z/hardware/ site.
3.3.2 System z hardware architecture
The System z servers are powered by IBM’s multi-chip module (MCM), which contains up to 20 processing
units (PUs). These processing units contain the z/Architecture logic. There are three modes in which Linux
can be run on a System z server: native hardware mode, logical partition mode, and z/VM guest mode. The
following paragraphs describe these modes.
•Native hardware mode: In native hardware mode, Linux can run on the entire machine without any
other operating system. Linux controls all I/O devices and needs support for their corresponding
device drivers.
•Logical partition mode: A System z system can be logically partitioned into a maximum of 30
separate Logical Partitions (LPARs). A single System z server can then host the z/OS operating
system in one partition, and Linux in another. Devices can be dedicated to a particular logical
partition, or they can be shared among several logical partitions. The Linux operating system controls
devices allocated to its partition, and thus needs support for their corresponding device drivers.
•z/VM guest mode: Linux can run in a virtual machine using the z/VM operating system as a
hypervisor. The hypervisor provides virtualization of CPU processors, I/O subsystems, and memory.
In this mode, hundreds of Linux instances can run on a single System z system. SLES runs on
System z in the z/VM guest mode. Virtualization of devices in the z/VM guest mode allows SLES to
operate with generic devices. z/VM maps these generic devices to actual devices.
Figure 3-1 from the Linux Handbook [LH] illustrates z/VM concepts:
Figure 3-1: z/VM as hypervisor
For more details about z/Architecture, refer to the z/Architecture document z/Architecture Principles of
Operation at http://publibz.boulder.ibm.com/epubs/pdf/dz9zr002.pdf.
USB (except keyboard and mouse), PCMCIA, and IEEE 1394 (Firewire) devices are not supported in the
evaluated configuration.
3.4 eServer 326
The IBM eServer 326 systems are AMD Opteron processor-based systems that provide high performance
computing in both 32-bit and 64-bit environments. The eServer 326 significantly improves the performance of
existing 32-bit applications, excels at 64-bit computing, and allows for easy migration to 64-bit computing.
For more detailed information about eServer 326 hardware, refer to the eServer 326 hardware Web site at
http://www.ibm.com/servers/eserver/opteron/.
3.4.1 eServer 326 hardware overview
The IBM eServer 326 systems offer support for up to two AMD Opteron processors, up to 12 GB of
memory, hot-swap SCSI or IDE disk drives, RAID-1 mirroring, and options for high-speed networking. The
IBM eServer 326 server is equipped with a real-time hardware clock. The clock is powered by a small battery
and continues to tick even when the system is switched off. The real-time clock maintains reliable time for
the system.
3.4.2 eServer 326 hardware architecture
The IBM eServer 326 systems are powered by AMD Opteron processors. For detailed specifications of the
Opteron processor, refer to the AMD Opteron processor documentation.
The Opteron is based on the AMD x86-64 architecture. The AMD x86-64 architecture is an extension of the
x86 architecture, with full support for 16-bit, 32-bit, and 64-bit applications running concurrently.
The x86-64 architecture adds a mode called the long mode. The long mode is activated by a global control bit
called Long Mode Active (LMA). When LMA is zero, the processor operates as a standard x86 processor
and is compatible with the existing 32-bit SLES operating system and applications. When LMA is one, 64-bit
processor extensions are activated, allowing the processor to operate in one of two sub-modes of long mode.
These are the 64-bit mode and the compatibility mode.
•64-bit mode: In 64-bit mode, the processor supports 64-bit virtual addresses, a 64-bit instruction
pointer, 64-bit general-purpose registers, and eight additional general-purpose registers, for a total of
16 general-purpose registers.
•Compatibility mode: Compatibility mode allows the operating system to implement binary
compatibility with existing 32-bit x86 applications. These legacy applications can run without
recompilation. This coexistence of 32-bit legacy applications and 64-bit applications is implemented
with a compatibility thunk layer.
Figure 3-2: AMD x86-64 architecture in compatibility mode
The thunk layer is a library provided by the operating system. The library resides in a 32-bit process created
by the 64-bit operating system to run 32-bit applications. A 32-bit application is dynamically linked to the
thunk layer, transparently to the user, and issues its 32-bit system calls through it. The thunk layer translates
system call parameters, calls the 64-bit kernel, and translates results returned by the kernel appropriately and
transparently for the 32-bit application.
For detailed information about the x86-64 architecture, refer to the AMD Opteron technical documentation.
USB (except keyboard and mouse), PCMCIA, and IEEE 1394 (Firewire) devices are not supported in the
evaluated configuration.
4 Software architecture
This chapter summarizes the software structure and design of the SLES system and provides references to
detailed design documentation.
The following subsections describe the TOE Security Functions (TSF) software and the TSF databases for the
SLES system. The descriptions are organized according to the structure of the system and describe the SLES
kernel that controls access to shared resources from trusted (administrator) and untrusted (user) processes.
This chapter provides a detailed look at the architectural pieces, or subsystems, that make up the kernel and
the non-kernel TSF. This chapter also summarizes the databases that are used by the TSF.
The Functional Description chapter that follows this chapter describes the functions performed by the SLES
logical subsystems. These logical subsystems generally correspond to the architectural subsystems described
in this chapter. The two topics were separated into different chapters in order to emphasize that the material
in the Functional Descriptions chapter describes how the system performs certain key security-relevant
functions. The material in this chapter provides the foundation information for the descriptions in the
Functional Description chapter.
4.1 Hardware and software privilege
This section describes the terms hardware privilege and software privilege as they relate to the SLES
operating system. These two types of privileges are critical for the SLES system to provide TSF self-protection.
This section does not enumerate the privileged and unprivileged programs. Rather, the TSF
Software Structure identifies the privileged software as part of the description of the structure of the system.
4.1.1 Hardware privilege
The eServer systems are powered by different types of processors. Each of these processors provides a notion
of user mode execution and supervisor, or kernel, mode execution. The following briefly describes how these
user- and kernel-execution modes are provided by the System x, System p, System z, and eServer 326
systems.
4.1.1.1 Privilege level
This section describes the concept of privilege levels by using Intel-based processors as an example. The
concept of privilege is implemented by assigning a value of 0 to 3 to key objects recognized by the processor.
This value is called the privilege level. The following processor-recognized objects contain privilege levels:
•Descriptors contain a field called the descriptor privilege level (DPL).
•Selectors contain a field called the requestor’s privilege level (RPL). The RPL is intended to
represent the privilege level of the procedure that originates the selector.
•An internal processor register records the current privilege level (CPL). Normally the CPL is equal
to the DPL of the segment the processor is currently executing. The CPL changes as control is
transferred to segments with differing DPLs.
Figure 4-1 shows how these levels of privilege can be interpreted as layers of protection. The center is for the
segments containing the most critical software, usually the kernel of the operating system. Outer layers are for
the segments of less critical software.
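As a hedged illustration of the CPL, the following C sketch (GCC inline assembly, x86 only) reads the low
two bits of the CS segment selector, which hold the current privilege level; a user-mode process on Linux
should print 3.

    /* Sketch: reading the current privilege level (CPL) on IA-32. */
    #include <stdio.h>

    int main(void)
    {
        unsigned short cs;
        __asm__("mov %%cs, %0" : "=r"(cs)); /* read the CS selector */
        printf("CPL = %d\n", cs & 3);       /* privilege level in bits 0-1 */
        return 0;
    }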