IBM PC Server and Novell NetWare
Integration Guide
December 1995
SG24-4576-00
Take Note!
Before using this information and the product it supports, be sure to read the general information under
“Special Notices” on page xv.
First Edition (December 1995)
This edition applies to IBM PC Servers, for use with an OEM operating system.
Order publications through your IBM representative or the IBM branch office serving your locality. Publications
are not stocked at the address given below.
An ITSO Technical Bulletin Evaluation Form for reader's feedback appears facing Chapter 1. If the form has been
removed, comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. HZ8 Building 678
P.O. Box 12195
Research Triangle Park, NC 27709-2195
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 1995. All rights reserved.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is
subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Abstract
This document describes the procedures necessary to successfully implement
Novell NetWare on an IBM PC Server platform. It describes the current IBM PC
Server line and discusses the technology inside the machines. It outlines
step-by-step procedures for installing both NetWare V3.12 and V4.1 using both
IBM ServerGuide and the original product media. It has a detailed section on
performance tuning. It covers IBM's NetFinity systems management tool, which
ships with every IBM PC Server and IBM premium brand PC.
This document is intended for IBM customers, dealers, systems engineers and
consultants who are implementing NetWare on an IBM PC Server platform.
A basic knowledge of PCs, file servers, DOS, and NetWare is assumed.
B.1 Finding Compatibility Information on the World Wide Web
B.2 Finding Device Drivers on the World Wide Web
B.3 Finding Software Patches on the World Wide Web
Appendix C. Configuring DOS CD-ROM Support
C.1 Installing CD-ROM Support for PCI Adapters
C.2 Installing CD-ROM Support for Adaptec Adapters
C.3 Installing CD-ROM Support for Micro Channel Adapters
This document is intended for IBM customers, dealers, systems engineers and
consultants who are implementing Novell NetWare on an IBM PC Server. The
information in this publication is not intended as the specification of any
programming interfaces that are provided by IBM.
References in this publication to IBM products, programs or services do not
imply that IBM intends to make these available in all countries in which IBM
operates. Any reference to an IBM product, program, or service is not intended
to state or imply that only IBM's product, program, or service may be used. Any
functionally equivalent program that does not infringe any of IBM's intellectual
property rights may be used instead of the IBM product, program or service.
Information in this book was developed in conjunction with use of the equipment
specified, and is limited in application to those specific hardware and software
products and levels.
IBM may have patents or pending patent applications covering subject matter in
this document. The furnishing of this document does not give you any license to
these patents. You can send license inquiries, in writing, to the IBM Director of
Licensing, IBM Corporation, 500 Columbus Avenue, Thornwood, NY 10594 USA.
The information contained in this document has not been submitted to any
formal IBM test and is distributed AS IS. The information about non-IBM
(VENDOR) products in this manual has been supplied by the vendor and IBM
assumes no responsibility for its accuracy or completeness. The use of this
information or the implementation of any of these techniques is a customer
responsibility and depends on the customer's ability to evaluate and integrate
them into the customer's operational environment. While each item may have
been reviewed by IBM for accuracy in a specific situation, there is no guarantee
that the same or similar results will be obtained elsewhere. Customers
attempting to adapt these techniques to their own environments do so at their
own risk.
Any performance data contained in this document was determined in a
controlled environment, and therefore, the results that may be obtained in other
operating environments may vary significantly. Users of this document should
verify the applicable data for their specific environment.
Reference to PTF numbers that have not been released through the normal
distribution process does not imply general availability. The purpose of
including these reference numbers is to alert IBM customers to specific
information relative to the implementation of the PTF when it becomes available
to each customer according to the normal IBM PTF distribution process.
The following terms are trademarks of the International Business Machines
Corporation in the United States and/or other countries:
AIX                  AIX/6000
AT                   DB2/2
DataHub              DatagLANce
EtherStreamer        First Failure Support Technology
IBM                  LANStreamer
Micro Channel        NetFinity
NetView              OS/2
PS/2                 Personal System/2
Power Series 800     Presentation Manager
SystemView           Ultimedia
VM/ESA
The following terms are trademarks of other companies:
C-bus is a trademark of Corollary, Inc.
PC Direct is a trademark of Ziff Communications Company and is
used by IBM Corporation under license.
UNIX is a registered trademark in the United States and other
countries licensed exclusively through X/Open Company Limited.
Windows is a trademark of Microsoft Corporation.
386                  Intel Corporation
486                  Intel Corporation
AHA                  Adaptec, Incorporated
AppleTalk            Apple Computer, Incorporated
Banyan               Banyan Systems, Incorporated
CA                   Computer Associates
DECnet               Digital Equipment Corporation
EtherLink            3COM Corporation
HP                   Hewlett-Packard Company
IPX                  Novell, Incorporated
Intel                Intel Corporation
Lotus 1-2-3          Lotus Development Corporation
Lotus Notes          Lotus Development Corporation
MS                   Microsoft Corporation
Micronics            Micronics Electronics, Incorporated
Microsoft            Microsoft Corporation
Microsoft Excel      Microsoft Corporation
NFS                  Sun Microsystems Incorporated
NetWare              Novell, Incorporated
Novell               Novell, Incorporated
OpenView             Hewlett-Packard Company
Pentium              Intel Corporation
Phoenix              Phoenix Technologies, Limited
PowerChute           American Power Conversion
SCO                  The Santa Cruz Operation, Incorporated
SCSI                 Security Control Systems, Incorporated
SCSISelect           Adaptec, Incorporated
VINES                Banyan Systems, Incorporated
Windows NT           Microsoft Corporation
X/Open               X/Open Company Limited
i386                 Intel Corporation
i486                 Intel Corporation
i960                 Intel Corporation
Other trademarks are trademarks of their respective companies.
Preface
This document describes the procedures necessary to implement Novell
NetWare on IBM PC Server platforms. It provides detailed information on
installation, configuration, performance tuning, and management of the IBM PC
Server in the NetWare environment. It also discusses the features and
technologies of the IBM PC Server brand and positions the various models in the
brand.
How This Document is Organized
The document is organized as follows:
• Chapter 1, "IBM PC Server Technologies"
This chapter introduces many of the technologies used in the IBM PC Server
brand and gives examples of system implementations where they are used.
• Chapter 2, "IBM PC Server Family Overview"
This chapter positions the various models within the IBM PC Server brand
and gives specifications for each model.
• Chapter 3, "Hardware Configuration"
This chapter provides a roadmap for configuring the various models of the
IBM PC Server line and describes the configuration process in detail.
• Chapter 4, "Novell NetWare Installation"
This chapter gives a step-by-step process for installing both NetWare V3.12
and V4.1 and the NetFinity Manager using both ServerGuide and the original
product diskettes and CD-ROM. It contains an overview of the ServerGuide
product, covers the RAID administration tools, and details a process for
simulating and recovering from a DASD failure.
• Chapter 5, "Performance Tuning"
This chapter presents an in-depth discussion of tuning NetWare as it relates
to the major hardware subsystems of the file server. It also discusses
performance monitoring tools.
• Appendix A, "EISA Configuration File"
This appendix contains a sample report printed from the EISA configuration
utility.
• Appendix B, "Hardware Compatibility, Device Driver, and Software Patch Information"
This appendix gives information on where to find the latest compatibility
information, device drivers, and code patches in the NetWare environment.
• Appendix C, "Configuring DOS CD-ROM Support"
This appendix gives information on how to configure your IBM PC Server for
CD-ROM support in the DOS environment.
Related Publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this document.
• IBM PC Server 310 System Library
• IBM PC Server 320 System Library for Non-Array Models
• IBM PC Server 320 System Library for Array Models
• IBM PC Server 320 PCI/Micro Channel System Library
• IBM PC Server 520 System Library
• The PC Server 720 System Library
The corresponding order numbers are S52H-3697, S52H-3695, S30H-1782,
S19H-1175, S19H-1196, and S30H-1778.
International Technical Support Organization Publications
• Advanced PS/2 Servers Planning and Selection Guide, GG24-3927
• NetWare 4.0 from IBM: Directory Services Concepts, GG24-4078
• NetWare from IBM: Network Protocols and Standards, GG24-3890
A complete list of International Technical Support Organization publications,
known as redbooks, with a brief description of each, may be found in:
International Technical Support Organization Bibliography of Redbooks,
GG24-3070.
To get a catalog of ITSO redbooks, VNET users may type:
TOOLS SENDTO WTSCPOK TOOLS REDBOOKS GET REDBOOKS CATALOG
A listing of all redbooks, sorted by category, may also be found on MKTTOOLS
as ITSOCAT TXT. This package is updated monthly.
How to Order ITSO Redbooks
IBM employees in the USA may order ITSO books and CD-ROMs using
PUBORDER. Customers in the USA may order by calling 1-800-879-2755 or by
faxing 1-800-445-9269. Most major credit cards are accepted. Outside the
USA, customers should contact their local IBM office. Guidance may be
obtained by sending a PROFS note to BOOKSHOP at DKIBMVM1 or E-mail to
bookshop@dk.ibm.com.
Customers may order hardcopy ITSO books individually or in customized
sets, called BOFs, which relate to specific functions of interest. IBM
employees and customers may also order ITSO books in online format on
CD-ROM collections, which contain redbooks on a variety of products.
ITSO Redbooks on the World Wide Web (WWW)
Internet users may find information about redbooks on the ITSO World Wide Web
home page. To access the ITSO Web pages, point your Web browser to the
following URL:
http://www.redbooks.ibm.com/redbooks
IBM employees may access LIST3820s of redbooks as well. Point your web
browser to the IBM Redbooks home page at the following URL:
http://w3.itsc.pok.ibm.com/redbooks/redbooks.html
Acknowledgments
This project was designed and managed by:
Tim Kearby
International Technical Support Organization, Raleigh Center
The authors of this document are:
Wuilbert Martinez Zamora
IBM Mexico
Jean-Paul Simoen
IBM France
Angelo Rimoldi
IBM Italy
Tim Kearby
International Technical Support Organization, Raleigh Center
This publication is the result of a residency conducted at the International
Technical Support Organization, Raleigh Center.
Thanks to the following people for the invaluable advice and guidance provided
in the production of this document:
Barry Nusbaum, Michael Koerner, Gail Wojton
International Technical Support Organization
Tom Neidhardt, Dave Laubscher, Marc Shelley
IBM PC Server Competency Center, Raleigh
Ted Ross, Ron Abbott
IBM PC Company, Raleigh
Gregg McKnight, Phil Horwitz, Paul Awoseyi
IBM PC Server Performance Laboratory, Raleigh
John Dinwiddie, Alison Farley, Victor Guess, Dottie Gardner-Lamontagne
IBM PC Server Unit, Raleigh
Parts of this document are based on an earlier version of the NetWare
Integration Guide, which was produced by the IBM European Personal Systems
Center in Basingstoke, U.K.
Thanks also to the many people, both within and outside IBM, who provided
suggestions and guidance, and who reviewed this document prior to publication.
Chapter 1. IBM PC Server Technologies
IBM PC Servers use a variety of technologies. This chapter introduces many of
these technologies and gives examples of system implementations where they
are used.
1.1 Processors
The microprocessor is the central processing unit (CPU) of the server. It is the
place where most of the control and computing functions occur. All operating
system and application program instructions are executed here. Most
information passes through it, whether it is a keyboard stroke, data from a disk
drive, or information from a communication network.
The processor needs data and instructions for each processing operation that it
performs. Data and instructions are loaded from memory into data-storage
locations, known as registers, in the processor. Registers are also used to store
the data that results from each processing operation, until the data is transferred
to memory.
1.1.1 Clock Rate
The microprocessor is packaged as an integrated circuit which contains one or
more arithmetic logic units (ALUs), a floating point unit, on-board cache,
registers for holding instructions and data, and control circuitry.
Note: The ALUs and the floating point unit are often collectively referred to as
execution units.
A fundamental characteristic of all microprocessors is the rate at which they
perform operations. This is called the clock rate and is measured in millions of
cycles per second or Megahertz (MHz). The maximum clock rate of a
microprocessor is determined by how fast the internal logic of the chip can be
switched. As silicon fabrication processes are improved, the integrated devices
on the chip become smaller and can be switched faster. Thus, the clock speed
can be increased.
For example, the Pentium P54C processor in the IBM PC Server 720 operates at
a clock speed of 100 MHz. The P54C is based on a fabrication process where
transistors on the chip have a channel width of .6 microns (a .6 micron BiCMOS
process). The original P5 processor was based on a .8 micron process and could
only be clocked at a maximum of 66 MHz.
The clock rate of the external components can be different from the rate at which
the processor is clocked internally. Clock doubling is a technique used in the
Intel DX2 and DX4 class processors to clock the processor internally faster than
the external logic components. For example, the 486DX2 at 66/33 MHz clocks the
processor internally at 66 MHz, while clocking the external logic components at
33 MHz. This is an efficient systems design technique when faster external logic
components are not available or are prohibitively expensive.
One might think that the faster the clock speed, the faster the performance of the
system. This is not always the case. The speed of the other system
components, such as main memory, can also have a dramatic effect on
performance. (Please see 1.3, “Memory” on page 3 for a discussion of memory
speeds and system performance.) The point is that you cannot compare system
performance by simply looking at the speed at which the processor is running.
A 90 MHz machine with a set of matched components can outperform a 100
MHz machine which is running with slow memory. IBM PC Servers are
optimized to incorporate these factors and deliver a balanced design.
1.1.2 External Interfaces
The processor data interface, or data bus, is the data connection between the
processor and external logic components. The Pentium family of processors
utilizes a 64-bit data bus, which means that these processors can read 8
bytes of data from main memory in one memory cycle. The Intel 486
has a data bus of only 32 bits, which limits its memory cycles to 4 bytes of data
per cycle.
The width of the processor address interface, or address bus, determines the
amount of physical memory the processor can address. A processor with a
24-bit address bus, such as the i286 class of processors, can address a
maximum of 16 megabytes (MB) of physical memory. Starting with the i386 class
of processors, the address bus was increased to 32 bits, which corresponds to 4
gigabytes (GB) of addressability.
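
As a quick check of these figures, the following short C program (an
illustration added here, not part of the original product documentation)
computes the maximum addressable memory for 24-bit and 32-bit address buses.

#include <stdio.h>

int main(void)
{
    /* Addressable memory doubles with each additional address line. */
    int widths[] = { 24, 32 };   /* i286-class and i386-class address buses */

    for (int i = 0; i < 2; i++) {
        unsigned long long bytes = 1ULL << widths[i];
        printf("%d-bit address bus: %llu bytes (%llu MB)\n",
               widths[i], bytes, bytes / (1024ULL * 1024ULL));
    }
    return 0;
}

Compiled and run, this prints 16 MB for the 24-bit bus and 4096 MB (4 GB) for
the 32-bit bus, matching the limits given above.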
1.1.3 Processor Types
IBM currently uses two processors in the PC Server line:
• 80486DX2
The 80486DX2 has a 32-bit address bus and 32-bit data bus. It utilizes clock
doubling to run at 50/25 MHz or 66/33 MHz. It is software compatible with all
previous Intel processors. The 80486DX2 has an internal two-way set
associative 8KB cache.
• Pentium
The Pentium has a 32-bit address bus and 64-bit data bus. It has internal
split data and instruction caches of 8KB each. The instruction cache is a
write-through cache and the data cache is a write-back design. The Pentium
microprocessor is a two-issue superscalar machine. This means that there
are two integer execution units (ALUs) in addition to the on-board floating
point unit. The superscalar architecture is one of the key techniques used to
improve performance over that of the previous generation i486 class
processors. Intel was able to achieve this design while maintaining
compatibility with applications written for the Intel i386/i486 family of
processors.
Note: A superscalar architecture is one where the microprocessor has
multiple execution units, which allow it to perform multiple operations during
the same clock cycle.
1.2 Multiprocessing
Multiprocessing uses two or more processors in a system to increase
throughput. Multiprocessing yields high performance for CPU intensive
applications such as database and client/server applications.
There are two types of multiprocessing:
• Asymmetric Multiprocessing
• Symmetric Multiprocessing
Asymmetric Multiprocessing:
In asymmetric multiprocessing the program tasks
(or threads) are strictly divided by type between processors and each processor
has its own memory address space. These features make asymmetric
multiprocessing difficult to implement.
Symmetric Multiprocessing (SMP):
Symmetric multiprocessing means that any
processor has access to all system resources including memory and I/O devices.
Threads are divided evenly between processors regardless of type. A process is
never forced to execute on a particular processor.
Symmetric multiprocessing is easier to implement in network operating systems
(NOSs) and is the method used most often in operating systems that support
multiprocessing. It is the technology currently used by OS/2 SMP, Banyan Vines,
SCO UNIX, Windows NT, and UnixWare 2.0.
The IBM PC Server 320, 520, and 720 support SMP. The PC Server 320 and 520
support two-way SMP via an additional Pentium processor in a socket on the
planar board. The 720 supports two-to-six way SMP via additional processor
complexes.
1.3 Memory
The system design of PC servers (in fact, all microprocessor-based systems) is
centered around the basic memory access operation. System designers must
always tune this operation to be as fast as possible in order to achieve the
highest possible performance.
Processor architectures always allow a certain number of clock cycles in order
to read or write information to system memory. If the system design allows this
to be completed in the given number of clock cycles, then this is called a zero
wait state design.
If for some reason the operation does not complete in the given number of
clocks, the processor must wait by inserting extra states into the basic
operation. These are called wait states and are always an integer multiple of
clock cycles.
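
To make the cost of wait states concrete, the following C sketch computes the
effective time per memory access. The clock rate and the two-cycle basic
operation are assumed values for illustration only, not IBM specifications.

#include <stdio.h>

/* Effective time for one memory access:
 * (base cycles + wait states) * clock period. */
static double access_time_ns(double clock_mhz, int base_cycles, int wait_states)
{
    double period_ns = 1000.0 / clock_mhz;
    return (base_cycles + wait_states) * period_ns;
}

int main(void)
{
    /* Assumed: a 100 MHz processor with a 2-cycle basic memory operation. */
    for (int ws = 0; ws <= 3; ws++)
        printf("%d wait state(s): %.0f ns per access\n",
               ws, access_time_ns(100.0, 2, ws));
    return 0;
}

At 100 MHz, each wait state adds 10 ns to every access, which is why designers
work so hard to avoid them.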
The challenge is that as each new generation of processors is clocked faster, it
becomes more expensive to incorporate memory devices that have access times
allowing zero wait designs. For example, state of the art Dynamic Random
Access Memory, or DRAM, has a typical access time of about 60 nanoseconds
(ns). A 60 ns DRAM is not fast enough to permit a zero wait state design with a
Pentium class processor. Static RAM, or SRAM, has an access time of less than
10 ns. A 10 ns SRAM design would allow for zero waits at current processor
speeds but would be prohibitively expensive to implement as main memory. A
basic trade-off that all system designers must face is simply that as the access
time goes down, the price goes up.
1.3.1 Caches
The key is to achieve a balanced design where the speed of the processor is
matched to that of the external components. IBM engineers achieve a balanced
design by using several techniques to reduce the effective access time of main
system memory:
• Cache
• Interleaving
• Dual path buses
• SynchroStream technology
Research has shown that when a system uses data, it will be likely to use it
again. As previously discussed, the faster the access to this data occurs, the
faster the overall machine will operate. Caches are memory buffers that act as
temporary storage places for instructions and data obtained from slower, main
memory. They use static RAM and are much faster than the dynamic RAM used
for system memory (typically five to ten times faster). However, SRAM is more
expensive and requires more power, which is why it is not used for all memory.
Caches reduce the number of clock cycles required for a memory access since
they are implemented with fast SRAMs. Whenever the processor must perform
external memory read accesses, the cache controller always pre-fetches extra
bytes and loads them into the cache. When the processor needs the next piece
of data, it is likely that it is already in the cache. If so, processor performance is
enhanced; if not, the penalty is minimal.
Caches are cost-effective because they are relatively small as compared to the
amount of main memory.
There are several levels of cache implemented in IBM PC servers. The cache
incorporated into the main system processor is known as Level 1 (L1) cache.
The Intel 486 incorporates a single 8KB cache. The Intel Pentium family has two
8KB caches, one for instructions and one for data. Access to these on-board
caches is very fast and consumes only a fraction of the time required to access
memory locations external to the chip.
The second level of cache, called second-level cache or L2 cache, provides
additional high speed memory to the L1 cache. If the processor cannot find what
it needs in the processor cache (a first-level cache miss), it then looks in the
additional cache memory. If it finds the code or data there (a second-level cache
hit), the processor will use it and continue. If the data is in neither of the caches,
an access to main memory must occur.
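
This lookup order can be captured in a simple average-access-time model. The
hit rates and latencies in the C sketch below are assumed values chosen to be
roughly consistent with the SRAM and DRAM access times quoted earlier; they
are not measured IBM figures.

#include <stdio.h>

/* Average memory access time for a two-level cache hierarchy:
 * t = t_L1 + miss_L1 * (t_L2 + miss_L2 * t_mem) */
static double avg_access_ns(double t_l1, double miss_l1,
                            double t_l2, double miss_l2, double t_mem)
{
    return t_l1 + miss_l1 * (t_l2 + miss_l2 * t_mem);
}

int main(void)
{
    /* Assumed: 10 ns L1, 20 ns L2, 60 ns DRAM, 10% L1 and 20% L2 misses. */
    printf("average access: %.1f ns\n",
           avg_access_ns(10.0, 0.10, 20.0, 0.20, 60.0));
    return 0;
}

With these assumptions the average access takes 13.2 ns, far closer to cache
speed than to the 60 ns DRAM speed, which is the whole point of caching.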
L2 caches are standard in all IBM PC server models.
With all types of caching, more is not always better. Depending on the system,
the optimum size of Level 2 Cache is usually 128KB to 512KB.
L2 caches can be of two types (a short sketch contrasting the two policies follows this list):
• Write-Through Cache
Read operations are issued from the cache but write operations are sent
directly to the standard memory. Performance improvements are obtained
only for read operations.
• Write-Back Cache
Write operations are also performed on the cache. Transfer to standard
memory is done if:
− Memory is needed in the cache for another operation
− Modified data in the cache is needed for another application
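
A minimal sketch of the two policies follows, assuming a single cache line and a
stub standing in for the memory interface (both are hypothetical
simplifications, not a real cache controller):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

enum policy { WRITE_THROUGH, WRITE_BACK };

struct cache_line {
    bool          dirty;           /* meaningful only for write-back */
    unsigned char data[32];
};

/* Illustrative write handler for a single cache line. */
static void cache_write(struct cache_line *line, enum policy p,
                        const unsigned char *src,
                        void (*memory_write)(const unsigned char *))
{
    memcpy(line->data, src, sizeof line->data);
    if (p == WRITE_THROUGH)
        memory_write(line->data);  /* update main memory immediately */
    else
        line->dirty = true;        /* defer until eviction, or until another
                                      user of the data needs the line */
}

static void memory_write_stub(const unsigned char *data)
{
    (void)data;
    puts("main memory updated");
}

int main(void)
{
    struct cache_line line = { 0 };
    unsigned char buf[32] = { 0 };

    cache_write(&line, WRITE_THROUGH, buf, memory_write_stub); /* prints */
    cache_write(&line, WRITE_BACK, buf, memory_write_stub);    /* defers */
    printf("dirty after write-back write: %d\n", line.dirty);
    return 0;
}

The write-back path trades memory traffic for bookkeeping: the dirty flag is
what later forces the transfers listed above.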
The third level of cache, or L3 cache, is sometimes referred to as a victim cache.
This cache is a highly customized cache used to store recently evicted L2 cache
entries. It is a small cache, usually less than 256 bytes. An L3 cache is
implemented in the IBM PC Server 720 SMP system.
1.3.1.1 SMP Caching
Within SMP designs, there are two ways in which a cache is handled:
• Shared cache
• Dedicated cache
Shared Cache: Sharing a single L2 cache among processors is the least
expensive SMP design. However, the performance gains associated with a
shared cache are not as great as with a dedicated cache. With the shared
secondary cache design, adding a second processor can provide as much as a
30% performance improvement. Additional processors provide very little
incremental gain. If too many processors are added, the system will even run
slower due to memory bus bottlenecks caused by processor contention for
access to system memory.
The IBM PC Server 320 supports SMP with a shared cache.
Figure 1 shows SMP with shared secondary cache.
Dedicated Cache: This SMP design supports a dedicated L2 cache for each
processor. This allows more cache hits than a shared L2 cache. Adding a
second processor using a dedicated L2 cache can improve performance as
much as 80%. With current technology, adding even more processors can
further increase performance in an almost linear fashion up to the point where
the addition of more processors does not increase performance and can actually
decrease performance due to excessive overhead.
The IBM PC Server 720 implements SMP with dedicated caches.
Figure 2 shows SMP with dedicated secondary cache.
Dedicated caches are also more complicated to manage. Care needs to be
taken to ensure that a processor needing data always gets the latest copy of that
data. If this data happens to reside in another processor's cache, then the two
caches must be brought into sync with one another.
The cache controllers maintain this coherency by communicating with one
another using a special protocol called MESI, which stands for Modified,
Exclusive, Shared, or Invalid. These refer to tags that are maintained for each
line of cache, and indicate the state of each line.
The implementation of MESI in the IBM PC Server 720 supports two sets of tags
for each cache line, which allows for faster cache operation than when only one
set of tags is provided.
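
As an illustration of how these tags drive coherency, the simplified C fragment
below models the four states and one representative transition: another
processor's read (a snooped read) of a line this cache holds Modified. It is a
sketch of the general protocol, not IBM's hardware implementation.

#include <stdio.h>

/* The four MESI states kept as a tag on every cache line. */
enum mesi { MODIFIED, EXCLUSIVE, SHARED, INVALID };

/* Simplified transition when another processor issues a read for an
 * address this cache holds (a "snooped" read). */
static enum mesi snoop_read(enum mesi state, int *must_write_back)
{
    *must_write_back = 0;
    switch (state) {
    case MODIFIED:
        *must_write_back = 1;      /* supply the latest copy to memory first */
        return SHARED;
    case EXCLUSIVE:
        return SHARED;             /* line now held by more than one cache */
    case SHARED:
    case INVALID:
        break;                     /* no change */
    }
    return state;
}

int main(void)
{
    int wb;
    enum mesi s = snoop_read(MODIFIED, &wb);
    printf("MODIFIED -> %s, write-back needed: %d\n",
           s == SHARED ? "SHARED" : "?", wb);
    return 0;
}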
1.3.2 Memory Interleaving
Another technique used to reduce effective memory access time is interleaving.
This technique greatly increases memory bandwidth when access to memory is
sequential such as in program instruction fetches.
In interleaved systems, memory is currently organized in either two or four
banks. Figure 3 on page 7 shows a two-way interleaved memory
implementation.
Figure 3. Two-Way Interleaved Memory Banks
Memory accesses are overlapped so that as the controller is reading/writing
from bank 1, the address of the next word is presented to bank 2. This gives
bank 2 a head start on the required access time. Similarly, when bank 2 is being
read, bank 1 is fetching/storing the next word.
The PC Server 500 uses a two-way interleaved memory. In systems
implementing two-way interleaved memory, additional memory must be added in
pairs of single in-line memory modules (SIMMs) operating at the same speed
(matched SIMMs).
The PC Server 720 uses a four-way interleaved memory with a word length of 64
bits. In this system, in order to interleave using 32-bit SIMMs, it is necessary to
add memory in matched sets of eight SIMMs each.
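
The bank-selection arithmetic behind interleaving is simple modular addressing.
The C sketch below (our example, assuming a 32-bit memory word for
simplicity) shows why sequential addresses alternate between banks and can
therefore overlap.

#include <stdio.h>

int main(void)
{
    const unsigned banks = 2;      /* two-way interleave */
    const unsigned word_size = 4;  /* assumed 32-bit memory word */

    /* Sequential addresses alternate between banks, so bank 1 can start
     * its access while bank 0 is still completing, and vice versa. */
    for (unsigned addr = 0; addr < 8 * word_size; addr += word_size)
        printf("address 0x%02X -> bank %u\n",
               addr, (addr / word_size) % banks);
    return 0;
}

A four-way system works the same way with banks set to 4, which is why the
PC Server 720 requires memory in matched sets large enough to populate all
four banks.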
1.3.3 Dual Path Buses
A dual path bus allows both the processor and a bus master to access system
memory simultaneously. Figure 4 on page 8 shows a dual path bus
implementation.
Without a dual path bus, there is often contention for system resources such as
main memory. When contention between the processor and a bus master
occurs, one has to wait for the other to finish its memory cycle before it can
proceed. Thus, fast devices like processors have to wait for much slower I/O
devices, slowing down the performance of the entire system to the speed of the
slowest device. This is very costly to the overall system performance.
1.3.4 SynchroStream Technology
SynchroStream is an extension of the dual path bus technique. The
SynchroStream controller synchronizes the operation of fast and slow devices
and streams data to these devices to ensure that all devices work at their
optimum levels of performance.
It works much like a cache controller in that it pre-fetches extra data on each
access to memory and buffers this data in anticipation of the next request. When
the device requests the data, the IBM SynchroStream controller provides it
quickly from the buffer and the device continues working. It does not have to
wait for a normal memory access cycle.
When devices are writing data into memory, the IBM SynchroStream controller
again buffers the data, and writes it to memory after the bus cycle is complete.
Since devices are not moving data to and from memory directly, but to the
SynchroStream controller, each device has its own logical path to memory. The
devices do not have to wait for other, slower devices.
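
A rough software analogy for the posted-write side of this design is a small
buffer that accepts device writes immediately and drains them to memory
afterwards. The C sketch below illustrates only the concept; it is not IBM's
SynchroStream implementation.

#include <stdio.h>

#define SLOTS 8

struct posted_write { unsigned addr, data; };

static struct posted_write buf[SLOTS];
static int head, tail, count;

/* Device side: deposit the write and continue at once. */
static int post_write(unsigned addr, unsigned data)
{
    if (count == SLOTS)
        return 0;                  /* buffer full: device must wait */
    buf[tail] = (struct posted_write){ addr, data };
    tail = (tail + 1) % SLOTS;
    count++;
    return 1;
}

/* Controller side: complete the writes after the bus cycle ends. */
static void drain_to_memory(void)
{
    while (count > 0) {
        struct posted_write w = buf[head];
        head = (head + 1) % SLOTS;
        count--;
        printf("memory[0x%X] <= 0x%X\n", w.addr, w.data);
    }
}

int main(void)
{
    post_write(0x1000, 0xAB);
    post_write(0x1004, 0xCD);
    drain_to_memory();
    return 0;
}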
1.4 Memory Error Detection and Correction
IBM PC servers implement four different memory systems:
• Standard (parity) memory
• Error Correcting Code-Parity (ECC-P) memory
• Error Correcting Code (ECC) memory
• ECC on SIMMs (EOS) memory
1.4.1 Standard (Parity) Memory
Parity memory is standard IBM memory with 32 bits of data space and 4 bits of
parity information (one check bit/byte of data). The 4 bits of parity information
can signal that an error has occurred but cannot identify which bit is in error.
In the event of a parity error, the system generates a non-maskable interrupt
(NMI) which halts the system. Double-bit errors go undetected with parity
memory.
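
The detection limits just described are easy to demonstrate. This small C
example (illustrative only) computes an even-parity check bit for one byte and
shows that a single-bit error changes the parity while a double-bit error does
not.

#include <stdio.h>

/* Even parity: the check bit makes the total number of 1-bits even. */
static int parity_bit(unsigned char byte)
{
    int ones = 0;
    for (int i = 0; i < 8; i++)
        ones += (byte >> i) & 1;
    return ones & 1;
}

int main(void)
{
    unsigned char stored = 0x5A;
    int check = parity_bit(stored);

    unsigned char one_flip = stored ^ 0x01;  /* single-bit error */
    unsigned char two_flip = stored ^ 0x03;  /* double-bit error */

    printf("single-bit error detected: %s\n",
           parity_bit(one_flip) != check ? "yes" : "no");
    printf("double-bit error detected: %s\n",
           parity_bit(two_flip) != check ? "yes" : "no");
    return 0;
}

The second test prints "no": the two flipped bits cancel out in the parity sum,
which is exactly the weakness ECC addresses.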
Standard memory is implemented in the PC Servers 300 and 320 as well as in
the majority of the IBM desktops (for example the IBM PC 300, IBM PC 700, and
PC Power Series 800).
1.4.2 Error Correcting Code (ECC)
The requirements for system memory in PC servers have increased dramatically
over the past few years. Reasons include the availability of 32-bit
operating systems and the caching of hard disk data on file servers.
As system memory is increased, the possibility of memory errors increases.
Thus, protection against system memory failures becomes increasingly
important. Traditionally, systems which implement only parity memory halt on
single-bit errors, and fail to detect double-bit errors entirely. Clearly, as memory
is increased, better techniques are required.
To combat this problem, the IBM PC servers employ schemes to detect and
correct memory errors. These schemes are called Error Correcting Code (or
sometimes Error Checking and Correcting but more commonly just ECC). ECC
can detect and correct single-bit errors, detect double-bit errors, and detect
some triple-bit errors.
ECC works like parity by generating extra check bits with the data as it is stored
in memory. However, while parity uses only 1 check bit per byte of data, ECC
uses 7 check bits for a 32-bit word and 8 bits for a 64-bit word. These extra
check bits along with a special hardware algorithm allow for single-bit errors to
be detected and corrected in real time as the data is read from memory.
Figure 5 on page 10 shows how the ECC circuits operate. The data is scanned
as it is written to memory. This scan generates a unique 7-bit pattern which
represents the data stored. This pattern is then stored in the 7-bit check space.
Figure 5. ECC Memory Operation
As the data is read from memory, the ECC circuit again performs a scan and
compares the resulting pattern to the pattern which was stored in the check bits.
If a single-bit error has occurred (the most common form of error), the scan will
always detect it, automatically correct it and record its occurrence. In this case,
system operation will not be affected.
The scan will also detect all double-bit errors, though they are much less
common. With double-bit errors, the ECC unit will detect the error and record its
occurrence in NVRAM; it will then halt the system to avoid data corruption. The
data in NVRAM can then be used to isolate the defective component.
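
The principle behind the check bits can be illustrated with the classic
Hamming(7,4) code, which protects a 4-bit word with 3 check bits. This is a
scaled-down teaching example, not the actual 7-check-bit circuit used on a
32-bit word; the syndrome mechanism, however, is the same.

#include <stdio.h>

/* Hamming(7,4): bits[1..7], check bits at positions 1, 2 and 4. */
static void encode(const int d[4], int bits[8])
{
    bits[3] = d[0]; bits[5] = d[1]; bits[6] = d[2]; bits[7] = d[3];
    bits[1] = bits[3] ^ bits[5] ^ bits[7];
    bits[2] = bits[3] ^ bits[6] ^ bits[7];
    bits[4] = bits[5] ^ bits[6] ^ bits[7];
}

/* Returns 0 if the word is clean, else the position of the flipped bit. */
static int syndrome(const int bits[8])
{
    int s1 = bits[1] ^ bits[3] ^ bits[5] ^ bits[7];
    int s2 = bits[2] ^ bits[3] ^ bits[6] ^ bits[7];
    int s4 = bits[4] ^ bits[5] ^ bits[6] ^ bits[7];
    return s1 + 2 * s2 + 4 * s4;
}

int main(void)
{
    int data[4] = { 1, 0, 1, 1 };
    int word[8];

    encode(data, word);
    word[6] ^= 1;                               /* inject a single-bit error */

    int pos = syndrome(word);
    printf("error at bit position %d\n", pos);  /* prints 6 */
    if (pos)
        word[pos] ^= 1;                         /* correct it in place */
    printf("after correction, syndrome = %d\n", syndrome(word));
    return 0;
}

Just as in the figure above, the stored check pattern is recomputed on every
read; a nonzero syndrome both signals the error and points at the failing bit so
it can be corrected in real time.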
In order to implement an ECC memory system, you need an ECC memory
controller and ECC SIMMs. ECC SIMMs differ from standard memory SIMMs in
that they have additional storage space to hold the check bits.
The IBM PC Servers 500 and 720 have ECC circuitry and provide support for ECC
memory SIMMs to give protection against memory errors.
1.4.3 Error Correcting Code-Parity Memory (ECC-P)
Previous IBM servers such as the IBM Server 85 were able to use standard
memory to implement what is known as ECC-P. ECC-P takes advantage of the
fact that a 64-bit word needs 8 bits of parity in order to detect single-bit errors
(one bit/byte of data). Since it is also possible to use an ECC algorithm on 64
bits of data with 8 check bits, IBM designed a memory controller which
implements the ECC algorithm using the standard memory SIMMs.
Figure 6 on page 11 shows the implementation of ECC-P. When ECC-P is
enabled via the reference diskette, the controller reads/writes two 32-bit words
and 8 bits of check information to standard parity memory. Since 8 check bits
are available on a 64-bit word, the system is able to correct single-bit errors and
detect double-bit errors just like ECC memory.
Figure 6. ECC-P Implementation (two 32-bit data words, each with 4 bits of parity, treated as a single 64-bit word with 8 check bits)
While ECC-P uses inexpensive standard memory, it needs a specific memory
controller that is able to read/write the two memory blocks and check and
generate the check bits. Also, the additional logic necessary to implement the
ECC circuitry makes it slightly slower than true ECC memory. Since the price
difference between a standard memory SIMM and an ECC SIMM has narrowed,
IBM no longer implements ECC-P.
1.4.4 ECC on SIMMs (EOS) Memory
A server that supports one hundred or more users can justify the additional cost
necessary to implement ECC on the system. It is harder to justify this cost for
smaller configurations. It would be desirable for a customer to be able to
upgrade his system at a reasonable cost to take advantage of ECC memory as
his business grows.
The problem is that the ECC and ECC-P techniques previously described use
special memory controllers embedded on the planar board which contain the
ECC circuits. It is impossible to upgrade a system employing parity memory
(with a parity memory controller) to ECC even if the parity memory SIMMs are
upgraded to ECC memory SIMMs.
To answer this problem, IBM has introduced a new type of memory SIMM which
has the ECC logic integrated on the SIMM. These are called ECC on SIMMs or
EOS memory SIMMs. With these SIMMs, the memory error is detected and
corrected directly on the SIMM before the data gets to the memory controller.
This solution allows a standard memory controller to be used on the planar
board and allows the customer to upgrade a server to support error checking
memory.
1.4.5 Performance Impact
As previously discussed, systems which employ ECC memory have slightly
longer memory access times depending on where the checking is done. It
should be stressed that this affects only the access time of external system
memory, not L1 or L2 caches. Table 1 shows the performance impact of the
different ECC memory solutions as a percentage of system memory access
time.
Again, these numbers represent only the impact to accessing external memory.
They do not represent the impact to overall system performance which is harder
to measure but will be substantially less.
Table 1. ECC Memory Performance

Type    ECC SIMM   ECC Memory Controller   Impact to Access Time   Systems where implemented
ECC     X          X                       3%                      PC Servers 500 and 720
ECC-P   -          X                       14%                     No longer used (PS/2 Model 85)
EOS     X          -                       None                    Option for PC Servers 300, 320; standard for PC Server 520
1.4.6 Memory Options and Speed
The following memory options are available from IBM:
• 4MB, 8MB, 16MB, 32MB 70 ns Standard (Parity) Memory SIMMs
• 4MB, 8MB, 16MB, 32MB 70 ns ECC Memory SIMMs
• 4MB, 8MB, 16MB, 32MB 60 ns ECC Memory SIMMs
• 4MB, 8MB, 16MB, 32MB 70 ns EOS Memory SIMMs
Table 2 shows the options used by each PC server.
Table 2. Summary of Memory Implementations

Model                   70 ns Standard   ECC-P   70 ns ECC   60 ns ECC   70 ns EOS
PC Server 300/310/320   X                -       -           -           OPT
PC Server 500           -                -       X           -           -
PC Server 520           -                -       -           -           X
PC Server 720           -                -       -           X           -

1.5 Bus Architectures
There are a number of bus architectures implemented in IBM PC servers:
• ISA
• EISA
• MCA
• PCI
1.5.1 ISA Bus
The Industry Standard Architecture (ISA) is not really an architecture at all but a
de facto standard based on the original IBM PC/AT bus design. The main
characteristics of the ISA bus include a 16-bit data bus and a 24-bit address bus.
The bus speed is limited to 8 MHz, and the original design did not allow for DMA
or bus masters. It does not support automatic configuration of adapters or
resolution of resource conflicts among adapters, nor does it allow interrupts to
be shared. Nonetheless, it was an extremely successful design and even with
these disadvantages, it is estimated that the ISA bus is in 70% of the PCs
manufactured today.
1.5.2 EISA Bus
The Extended Industry Standard Architecture (EISA) is a 32-bit superset of
the ISA bus providing improved functionality and greater data rates while
maintaining backward compatibility with the many ISA products already
available.
The main advancements of the EISA bus are 32-bit addressing and 32-bit data
transfer. It supports DMA and bus master devices. It is synchronized by an 8.33
MHz clock and can achieve data transfers of up to 33 MBps. A bus arbitration
scheme is also provided which allows the bus to be shared efficiently among
multiple EISA devices. EISA systems can also automatically configure adapters.
1.5.3 Micro Channel Bus
The Micro Channel Architecture (MCA) was introduced by IBM in 1987. Micro
Channel is an improvement over ISA in all of the areas discussed in the previous
section on EISA. In addition, it supports data streaming, which is an important
performance feature of MCA.
1.5.3.1 Data Streaming
The data streaming transfer offers considerably improved I/O performance. In
order to understand data streaming transfers we need to see how data is
transferred between Micro Channel bus master adapters and memory.
The standard method of transfer across the Micro Channel is known as basic
data transfer. In order to transfer a block of data in basic data transfer mode, an
address is generated on the address bus to specify where the data should be
stored; then the data is put on the data bus.
This process is repeated until the entire block of data has been transferred.
Figure 7 on page 14 shows basic data transfer in operation. Basic data transfer
on the Micro Channel runs at 20 MBps (each cycle takes 200 nanoseconds, and
32 bits or 4 bytes of data are transferred at a time).
Figure 7. Micro Channel - Basic Data Transfer (20 MBps)
However, in many cases, blocks transferred to and from memory are stored in
sequential addresses, so repeatedly sending the address for each 4 bytes is
unnecessary. With data streaming transfers, the initial address is sent, and then
the blocks of data are sent; it is then assumed that the data requests are
sequential. Figure 8 shows 40 MBps data streaming in operation.
Figure 8. Micro Channel - Data Streaming Transfer (40 MBps)
The Micro Channel supports another mode of data streaming whereby the
address bus can also be used to transfer data. This is depicted in Figure 9 on
page 15.
Figure 9. Micro Channel - Data Streaming Transfer (80 MBps)
As can be seen from this figure, in this mode, after the initial address is
presented during the first bus cycle, the address bus is then multiplexed to carry
an additional 32 bits of data. This results in an effective data transfer rate of 80
MBps.
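
These transfer rates follow directly from the bytes moved per cycle and the
cycle time. The short C calculation below reproduces the 20, 40 and 80 MBps
figures; the 100 ns streaming cycle is an assumption consistent with the
doubled rate shown in Figure 8.

#include <stdio.h>

/* Throughput in MBps given bytes moved per bus cycle and the cycle time. */
static double mbps(double bytes_per_cycle, double cycle_ns)
{
    return bytes_per_cycle / cycle_ns * 1000.0;
}

int main(void)
{
    printf("basic transfer:          %.0f MBps\n", mbps(4, 200));
    printf("data streaming:          %.0f MBps\n", mbps(4, 100));
    printf("streaming, multiplexed:  %.0f MBps\n", mbps(8, 100));
    return 0;
}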
Data streaming, as well as improving the data transfer rate, also provides a
more efficient use of the Micro Channel. Since MCA operations complete in a
shorter amount of time, the overall throughput of the system is increased.
Data streaming is useful for any adapters that perform block transfers across the
Micro Channel such as the IBM SCSI-2 Fast/Wide Streaming RAID Adapter/A.
MCA is implemented in some models of the IBM PC Server 300 and 500 lines
and in all models of the PC Server 720.
1.5.4 PCI Bus
In the latter part of 1992, Intel, IBM and a number of other companies worked
together to define a new local component bus which was designed to provide a
number of new features and work with a wide range of new processors. The
result was the Peripheral Component Interconnect (PCI) bus. The PCI bus was
designed to provide the Pentium processor with all the bandwidth it needed and
to provide for more powerful processors in the future. It was also designed for
use in multiprocessing environments.
The PCI bus was designed to work with a number of buses including the Micro
Channel, ISA and EISA buses. It was designed to provide a local bus, more
tightly integrated with the processor, to provide more bandwidth to I/O devices
such as LAN adapters and disk controllers, which require more bandwidth than