PART NO.: 49.59902.001
DOC. NO.: SG255-9801A
PRINTED IN TAIWAN
Copyright
Copyright 1998 by Acer Incorporated. All rights reserved. No part of this publication may be
reproduced, transmitted, transcribed, stored in a retrieval system, or translated into any language
or computer language, in any form or by any means, electronic, mechanical, magnetic, optical,
chemical, manual or otherwise, without the prior written permission of Acer Incorporated.
Disclaimer
Acer Incorporated makes no representations or warranties, either expressed or implied, with
respect to the contents hereof and specifically disclaims any warranties of merchantability or
fitness for any particular purpose. Any Acer Incorporated software described in this manual is sold
or licensed "as is". Should the programs prove defective following their purchase, the buyer (and
not Acer Incorporated, its distributor, or its dealer) assumes the entire cost of all necessary
servicing, repair, and any incidental or consequential damages resulting from any defect in the
software. Further, Acer Incorporated reserves the right to revise this publication and to make
changes from time to time in the contents hereof without obligation of Acer Incorporated to notify
any person of such revision or changes.
All brand and product names mentioned in this manual are trademarks and/or registered trademarks of their respective
companies.
About this Manual
Purpose
This service guide furnishes technical information to service engineers and advanced users for upgrading, configuring, or repairing the X3 system.
Manual Structure
This service guide contains technical information about the X3 system. It consists of three
chapters and five appendices.
Chapter 1  System Introduction
This chapter describes the system features and major components. It contains the X3 system board layout, block diagrams, cache and memory configurations, power management and mechanical specifications, and operation theory.

Chapter 2  Major Chipsets
This chapter describes the features and functions of the major chipsets used in the system board, including the Pentium Pro processor. It also includes chipset block diagrams, pin diagrams, and pin descriptions.

Chapter 3  BIOS Setup Utility
This chapter describes the parameters in the BIOS Utility screens.

Appendix A  Model Definition
This appendix shows the different configuration options for the X3 system.

Appendix B  Spare Parts List
This appendix lists the spare parts for the X3 system with their part numbers and other information.

Appendix C  Schematics
This appendix contains the schematic diagrams for the system board.

Appendix D  Silk Screen
This appendix illustrates the system board silk screen.

Appendix E  BIOS POST Check Points
This appendix lists and describes the BIOS POST check points.
Conventions
The following are the conventions used in this manual:
Text entered by user
    Represents text input by the user.
Screen messages
    Denotes actual messages that appear onscreen.
, , , etc.
    Represent the actual keys that you have to press on the keyboard.
NOTE
    Gives bits and pieces of additional information related to the current topic.
WARNING
    Alerts you to any damage that might result from doing or not doing specific actions.
CAUTION
    Gives precautionary measures to avoid possible hardware or software problems.
IMPORTANT
    Reminds you to do specific actions relevant to the accomplishment of procedures.
TIP
    Tells how to accomplish a procedure with minimum steps through little shortcuts.
2-1   GTL+ Bus Termination Voltage Specifications ............ 2-6
2-2   P6 Processor Signal Descriptions ....................... 2-8
2-3   S82451GX Signal Descriptions ........................... 2-35
2-4   S82452GX Signal Descriptions ........................... 2-39
2-5   S82453GX Signal Descriptions ........................... 2-42
2-6   S82454GX Signal Descriptions ........................... 2-48
2-7   82379AB Signal Descriptions ............................ 2-55
2-8   ESC (82374SB) Signal Abbreviations ..................... 2-68
2-9   ESC (82374SB) Signal Descriptions ...................... 2-69
2-10  PCEB (82375SB) Signal Descriptions ..................... 2-96
2-11  AIC 7880 I/O Type Descriptions ......................... 2-116
2-12  AIC 7880 Signal Descriptions ........................... 2-116
2-13  ATI 264VT Signal Descriptions .......................... 2-130
2-14  Display Modes for DRAM (45ns) or EDO DRAM (60ns) ....... 2-135
2-15  Display Modes for Synchronous DRAM or EDO DRAM with Burst CAS ... 2-135
2-16  37C935 Buffer Type Descriptions ........................ 2-140
2-17  37C935 Signal Descriptions ............................. 2-140
3-1   Parallel Port Operation Mode Settings .................. 3-15
3-2   Drive Control Settings ................................. 3-25
Chapter 1
System Introduction
1.1 Configuration Overview
1.1.1 Front Panel
The system front panel is divided into two sections. The upper front panel consists of the
diskette/CD-ROM/tape drive bays, keylock, power switch, LED indicators, LCD display screen, and
an embedded reset switch.
The lower part contains the externally accessible hard disk drive bays with 14 drive trays for
narrow or wide SCSI drives.
Figure 1-1  Front Panel
One pair of system keys and one pair of power-switch keylock keys hang inside the upper front door. Additional duplicate keys can be found at the back of the system.
1.1.1.1 Front Panel Features
Figure 1-2 gives a closer look at the upper front panel features.

Figure 1-2  Front Panel Features (callouts: LED indicators, 5.25-inch drive bays, CD-ROM drive, 3.5-inch diskette drive, keylock, LCD display screen, power switch, embedded reset switch)
1.1.1.2 CD-ROM Drive
The basic system comes with a SCSI CD-ROM drive already installed.
1.1.1.3 3.5-inch Diskette Drive
A 3.5-inch diskette drive also comes with the basic system.
1.1.1.4 5.25-inch Drive Bays
Two empty 5.25-inch drive bays allow installation of additional devices.
1.1.1.5 Power Switch
The power switch allows you to turn the system power on or off.
1.1.1.6 Reset Switch
Pressing the reset switch generates a hardware reset pulse that restarts the system, initializing all the registers, buffers, and memory subsystems.
1.1.1.7 Keylock
The keylock secures the system against unauthorized use. Turning the keylock to the unlocked position enables the power and reset switches. Turning the keylock to the locked position disables both switches whether the system is on or off. If the system is on and you intend to reset it or turn it off, make sure that the keylock is unlocked; otherwise, the switches do not respond.
1.1.1.8 LED Indicators
Table 1-1  LED Indicator Description

LED Icon           Color   Description
Power Status       Green   Indicates that power is on and that the system is running on a good supply of AC power.
                   Red     Indicates that power is on but the AC power supply failed; the system is running on battery power.
Battery Status     Green   Indicates that a battery is present and in good condition. The battery LED shows this color during normal system operation, during which the battery automatically charges. When the power status LED is red, a green battery LED also indicates that the system is running on battery power. When this happens, shut down the system immediately because the battery keeps a fully-configured system running only for about eight minutes.
(UPS)              Red     Normally, this color indicates that the battery is bad. However, there are times when the battery LED turns red for a few seconds due to other factors and NOT because the battery is bad. See below.
Hard Disk Busy     Green   Indicates that at least one of the hard disks is currently being accessed.
Hard Disk Failure  Green   Indicates that all the hard disks installed on the backplane board are in good condition.
                   Red     Indicates that one of the hard disks installed on the backplane board is bad.

In these instances, the battery LED may turn red for a few seconds but DOES NOT necessarily indicate that the battery is bad:
• System startup. At system power-on, the battery LED shows a red light while the system performs initialization and self-tests. The red light should remain for only a few seconds and eventually turn to green.

• Resumption of the AC power supply while the system is running on battery power. When AC power is cut off, the battery automatically supplies the system power. The sudden return of AC power at this time, when the system is running on battery, may cause the battery LED to change to red. Simultaneously, the message “Battery Fails !” may appear on the LCD screen. When this happens, allow the battery to recover for a while. Wait for the battery LED to return to green and the LCD message to disappear.
If the battery LED remains red for several seconds and the
message “Battery Fails !” still shows on the LCD screen, change
the battery or call your dealer or a technician for assistance.
1.1.1.9 LCD Display Screen
The LCD display is a two-line by 16-character screen that indicates the boot status as well as any
BIOS check point errors encountered upon system initialization. Normally, the system BIOS and
the microcontroller firmware send the LCD display messages that appear on the screen. However,
if you hook up a special-purpose driver to control the LCD module, this driver defines the
messages. See the driver manual for more information.
Table 1-2 lists the LCD messages from the system BIOS and the microcontroller at power on.
Table 1-2  LCD Messages

Message                       Description
Hello! Welcome !              This is the first message that appears on the LCD screen. It indicates that the microcontroller works fine.
POST Checkpoints              During the system power-on self-tests (POST), the LCD screen shows which POST check-point is currently being tested.
Power #1 Fails !              After POST, the microcontroller checks the power subsystem status. If it detects that power supply module 1 is bad, this message appears on the LCD screen.
Power #2 Fails !              If the microcontroller detects that power supply module 2 is bad, this message appears on the LCD screen.
Power #3 Fails !              If the microcontroller detects that power supply module 3 is bad, this message appears on the LCD screen.
Battery Fails !               Normally, this message indicates that the battery is bad and must be replaced with a new one. There are times when this message appears for a few seconds but does not necessarily mean that the battery is bad. Refer to the previous page for these instances.
Power Fan Fails !             This message indicates that one or more fans on the power subsystem failed.
AC Power Fails !              This message indicates that there is no power coming from the AC line and the system is currently running only on battery power.
The system is running well !  This message appears after POST and other tests. It shows that the system has passed all the tests and is running fine.
1.1.1.10 RDM LED
The RDM LED, located on the lower right panel, indicates activity of the remote diagnostic management (RDM) feature. Refer to the RDM User’s Guide for information on the RDM feature.

Figure 1-3  RDM LED (callouts: RDM icon, RDM LED)
1.1.2 Rear Panel
The rear panel includes the connectors for the keyboard, mouse, VGA monitor, printer, and serial
devices. Below the connectors are the slot openings for expansion boards. On the lower left is
the socket for the system power cable. A standby current adapter socket is located on the lower
right corner.
Figure 1-4  Rear Panel (callouts: keyboard port, mouse port, serial port 1, serial port 2, video port, parallel port, narrow SCSI knockout, expansion slot brackets, power socket)
1.2 Features
The AcerAltos 19000Pro4 is a powerful 64-bit quad-processor system loaded with a host of new and innovative features. The system offers a new standard of flexible productivity, ideal for local area networks and multiuser server environments.
1.2.1 Intel Pentium Pro Microprocessor
The Intel Pentium Pro CPU is the heart of the AcerAltos 19000Pro4 system. Designed to work
with the Orion chipset composed of a PCI bridge and memory controller, the Pentium Pro running
at 200 MHz carries a new generation of power not present in its predecessors.
The system board has four CPU sockets to accommodate up to four Intel Pentium Pro CPUs in a multiprocessor configuration. This configuration increases efficiency and reliability, thereby improving overall system performance. The Pentium Pro supports a wide range of applications running under SMP network operating systems such as Windows NT, UNIX, NetWare, etc.
The CPU also incorporates the first-level (L1) and second-level (L2) caches, the advanced programmable interrupt controller (APIC), and the system bus controller. Figure 1-5 shows the CPU
architecture.
1.2.1.1 First-level and Second-level Cache
The Pentium Pro has a 16-KB first-level and 256/512/1024-KB second-level cache. These caches
produce a high hit rate that reduces the processor’s external memory bandwidth requirements.
1.2.1.2 APIC

The APIC unit inside the CPU, along with the I/O APIC unit, facilitates multiprocessor interrupt management. The APIC works with multiple I/O subsystems, where each subsystem has its own interrupts, helping minimize centralized system overhead.
1.2.1.3 Bus Controller
The bus controller integrated in the Pentium Pro CPU controls the system bus to make it perform
its functions efficiently. It ensures that the bus serves as a reliable interconnection between one or
two CPUs, I/O bridge, and memory controllers.
1.2.1.4 Pentium Pro CPU Architecture
Figure 1-5  Pentium Pro CPU Architecture
1.2.2 System Architecture
The system bus, PCI buses, EISA bus, Orion PCI bridge (OPB), Orion memory controller (OMC),
PCI/EISA Bridge (PCEB), and EISA system controller (ESC) comprise the basic system
architecture.
Figure 1-6  System Architecture
1.2.2.1 System Bus
The system bus is the CPU’s major connection to all the system devices, primarily the PCI and
EISA bridges, and the memory controllers. It can handle as many as eight outstanding
transactions at a time through the transaction pipelining feature in which consecutive tasks from
the CPU are queued in and transported to the designated devices on a first-in first-out basis.
Pipelining allows for transaction overlapping in different phases as the CPU does not have to wait
for each transaction to complete before it issues the next transaction. This produces significant
improvement on overall system performance.
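As an illustration only (not part of the manual), the first-in first-out queueing described above can be modeled as a small ring buffer in C; the depth of 8 matches the maximum number of outstanding transactions:

    #include <stdio.h>

    /* Toy model of the in-order transaction queue described above:
     * up to 8 outstanding transactions, retired first-in first-out. */
    #define QUEUE_DEPTH 8

    struct in_order_queue {
        int ids[QUEUE_DEPTH];
        int head, count;
    };

    /* Issue a new transaction; returns 0 (the bus must stall) when full. */
    static int issue(struct in_order_queue *q, int id)
    {
        if (q->count == QUEUE_DEPTH) return 0;
        q->ids[(q->head + q->count) % QUEUE_DEPTH] = id;
        q->count++;
        return 1;
    }

    /* Complete the oldest transaction (first-in, first-out). */
    static int retire(struct in_order_queue *q)
    {
        int id = q->ids[q->head];
        q->head = (q->head + 1) % QUEUE_DEPTH;
        q->count--;
        return id;
    }

    int main(void)
    {
        struct in_order_queue q = { {0}, 0, 0 };
        for (int id = 1; id <= 9; id++)          /* the ninth issue is refused */
            if (!issue(&q, id))
                printf("stall: queue full at transaction %d\n", id);
        printf("retired transaction %d first\n", retire(&q));
        return 0;
    }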
The bus architecture supports a number of features that ensure high reliability. It has an 8-bit error
correction code (ECC) that protects the data lines and a 2-bit parity code that protects the address
lines.
The bus uses gunning transceiver logic (GTL+), a synchronous latched bus protocol that simplifies timing constraints. This protocol supports higher frequency system designs and requires only a low voltage swing, which reduces electromagnetic interference (EMI) and results in lower power consumption.
1.2.2.2 PCI and EISA Buses
The system supports two PCI buses created by the two PCI bridge chipsets (OPB). The PCI
buses serve as the links between the PCI bridges and the PCI devices onboard. The presence of
two buses instead of one reduces the I/O bottleneck and matches the higher bandwidth of the CPU
for faster data transfers.
The EISA bus connects the EISA devices to the other system devices through the PCI/EISA
bridge (PCEB) and the EISA system controller (ESC). The use of the PCEB and ESC maintains
compatibility with the EISA environment.
1.2.2.3 Orion PCI Bridge
The Orion PCI bridge (OPB) is a low-cost I/O subsystem solution for high-performance systems.
The OPB translates transactions between the system bus and the PCI buses using 32-byte buffers
for inbound and outbound postings. The use of two OPBs in the system creates an architecture
that allows faster data transfers.
1.2.2.4 Orion Memory Controller

The Orion memory controller (OMC) acts as an interface between the system bus and the system memory. It consists of the DRAM control (DC) chip and the data path (DP) chip. The OMC connects to the DRAM array through four memory interface controller (MIC) chips. The OMC supports 256-bit 4-way memory interleaving, resulting in more efficient memory traffic management.
1.2.3 SCSI Disk Array
The system supports an array of 14 hot-swappable disk drive trays through two 7-slot SCSI
backplane boards (Acer BP-W7). The trays accommodate wide and narrow SCSI hard disks.
With the AIC-7880 SCSI controller onboard, the transfer rate reaches up to 40 MB per second for
ultra-wide SCSI.
1.2.4 Server Management
The system comes with the ASM Pro feature, which provides voltage stability and CPU thermal monitoring, prevents data loss through prompt ECC memory error reporting, maximizes system resources by indicating PCI bus utilization, and promotes efficiency by minimizing system downtime.
A related feature of ASM is remote diagnostic management (RDM), which permits system diagnosis from a remote site through a modem. RDM facilitates fixing detected problems, changing system configurations, or rebooting in the event of system failure.
1.2.5 Redundant Power Supply Subsystem
The system comes with a power backplane that holds up to three 400-watt power supply modules.
The power subsystem supports a redundant configuration such that even if one power supply fails, the remaining two continue to work together to supply the 800-watt requirement of a fully-configured system.

Two important segments of the power subsystem configuration are the charger board and the battery box. Together, these two components function like an uninterruptible power supply (UPS). Providing additional support to the three 400-watt power supply modules, the battery automatically charges whenever the system is on. The battery gives a fully-configured system the ability to run continuously through short interruptions in wall power, or for a maximum of six minutes in the event of total AC power shutdown.
1.2.6 Security
The system housing comes with mechanical security locks on both the front panel and the side panel, preventing unauthorized access to the internal components and unauthorized system use.
The system BIOS secures the CMOS data and other system software with power-on password,
keyboard password, setup control, disk drive control, and monitor control.
1.2.7 Memory Board
The memory board comes already installed in the basic system. A total of 16 168-pin DIMM sockets reside on the board. The sockets accept 32-MB, 64-MB, and 128-MB DIMMs for a maximum memory configuration of 2 GB.
1.2.8 Major Components
The main board contains the following features:
• Four ZIF-type Socket 8 sockets for Intel Pentium Pro CPUs
• Four VRM8 sockets for voltage regulator modules
• Seven PCI and three EISA expansion slots
• Intel Orion chipset and Intel EISA bridge
• Two Adaptec 7880 PCI fast/wide SCSI controllers
• ATI 264VT/GT PCI VGA chip plus 1-MB/2-MB DRAM
• SMC FDC37C935 super I/O chip
• Acer server management hardware module
• Remote diagnostic module
• 128-byte CMOS NVRAM as system clock/calendar storage plus 8K-byte extended NVRAM
for EISA configuration storage
• 256K-byte Flash ROM containing system, onboard SCSI, and on-board VGA BIOS
• PS/2 keyboard and mouse interface
• Front panel interface including power and hard disk LEDs
1.3 Board Layouts
1.3.1 System Board
(Figure: system board layout with numbered callouts)

1. VRM connector 1
2. Pentium Pro CPU socket 1
3. VRM connector 3
4. Pentium Pro CPU socket 3
5. BIOS
6. Battery
7. +12V, +5V downside power connector
8. Buzzer
9. Narrow SCSI interface
10. Wide SCSI interface 1
11. Wide SCSI interface 2
12. PCI slots
13. EISA slots
14. Keyboard controller
15. Pentium Pro CPU socket 4
16. Parallel port
17. Video port
18. Serial port 1
19. Serial port 2
20. Mouse port
21. Keyboard port
22. VRM connector 4
23. VRM connector 2
24. RDM connectors
25. Pentium Pro CPU socket 2
26. ±12V, ±5V power connector
27. +12V, +5V power connector
28. VCC3 power connector
29. Memory board slot
30. IDE connector
31. Front panel connector
32. Diskette drive connector
1.3.2 Memory Board
Figure 1-7  Memory Board Layout (callouts: connectors, memory banks, memory interface components (82451))
1.3.3 SCSI Disk Array Backplane Board
Figure 1-8  SCSI Disk Array Backplane Board (callouts: status signal connector, jumper J4, SCSI channel 1, channel configuration switches, SCSI channel 2, SCSI channel out, jumper J3, power, SCSI drive slot, SCSI drive switch, terminators RA4/RA5/RA6, terminators RA1/RA2/RA3)
1.3.4 RDM Module
1. 23-pin connector
2. RDM controller
3. RDM LED connector
4. 23-pin connector

Figure 1-9  RDM Module Layout
1.4 Jumpers and Connectors
Figure 1-10  System Board Jumper and Connector Locations
The blackened pin of a jumper or connector represents pin 1.
1.4.1 Jumper Settings
Table 3-2  Jumper Settings

Jumper   Setting   Function
JP1      --        Reserved
JP2      --        Reserved
JP10     --        Reserved
JP11               Password Security
         1-2*        Check password
         2-3         Bypass password
JP12     --        Reserved
JP13               Onboard VGA
         1-2*        Enabled
         2-3         Disabled
JP14     --        Reserved
JP15               CPU Bus Frequency
         1-2         60 MHz
         2-3*        66 MHz
         Open        50 MHz
JPX1     --        Reserved
JPX2     --        Reserved
JPX3     --        Reserved

* Default setting

The following sections describe and illustrate the jumpers that are not listed in the above table.
1.4.1.1 CPU Activation Jumpers
Jumpers JP4, JP5, JP6, JP7, JP8, and JP9 allow you to select which CPUs to activate. Table 3-3 lists the settings and the corresponding functions of these jumpers.
Table 3-3  CPU Activation Jumpers

Group 1 CPUs (CPU1 and CPU3)
JP6    JP7    Function
2-3    Open   CPU1 only
1-2    1-2    CPU3 only
1-2    2-3    CPU1 and CPU3

Group 2 CPUs (CPU2 and CPU4)
JP4    JP5    Function
2-3    Open   CPU2 only
1-2    1-2    CPU4 only
1-2    2-3    CPU2 and CPU4

Groups 1 and 2 CPUs
JP8    JP9    Function
2-3    1-2    Group 1 only
1-2    2-3    Group 2 only
2-3    2-3    Group 1 and Group 2
1.4.1.2 CPU Frequency Jumper
Table 3-4 lists the CPU frequency ratios depending on JP3 settings.
Table 3-4  CPU Frequency Ratios (JP3)

JP3 Settings                    Core/Bus Ratio
1-2    3-4    5-6    7-8
C      C      C      C          2
C      C      O      C          3
C      C      C      O          4
C      C      O      O          5
O      C      C      C          2.5
O      C      O      C          3.5

C = Closed (processor pin connected to Vss)
O = Open
DO NOT change the JP3 settings unless you are qualified to do so. Ask a technician if you need help when configuring the jumper.
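As an illustration only (not from the manual): the core frequency is the bus frequency selected by JP15 multiplied by the JP3 ratio; for example, a 66 MHz bus with a ratio of 3 yields a 200 MHz core. A minimal C sketch of the Table 3-4 lookup:

    #include <stdio.h>

    /* Maps a JP3 jumper pattern from Table 3-4 to a core/bus ratio.
     * 1 = closed (C), 0 = open (O), in the order 1-2, 3-4, 5-6, 7-8.
     * Returns 0.0 for combinations not listed in the table. */
    static double jp3_ratio(int p12, int p34, int p56, int p78)
    {
        if ( p12 && p34 &&  p56 &&  p78) return 2.0;
        if ( p12 && p34 && !p56 &&  p78) return 3.0;
        if ( p12 && p34 &&  p56 && !p78) return 4.0;
        if ( p12 && p34 && !p56 && !p78) return 5.0;
        if (!p12 && p34 &&  p56 &&  p78) return 2.5;
        if (!p12 && p34 && !p56 &&  p78) return 3.5;
        return 0.0;  /* reserved/unsupported combination */
    }

    int main(void)
    {
        double bus_mhz = 66.67;               /* JP15 at 2-3 (66 MHz) */
        double ratio = jp3_ratio(1, 1, 0, 1); /* C C O C -> ratio 3 */
        printf("Core frequency: %.0f MHz\n", bus_mhz * ratio);
        return 0;
    }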
1.4.2 Connector List
Table 3-5  Connector Functions

Connector   Function
CN1         Power connector for ±12V, ±5V
CN2         Power connector for ±12V, ±5V
CN3         Power connector for VCC3
CN4         Power switch connector
CN5         Front panel connector
CN6         Power connector for ±12V, ±5V
CN7         System fan connector
CN8         System fan connector
CN9         System fan connector
CN10        System fan connector
CN11        Diskette drive connector
CN12        RDM LED connector
CN13        RDM connector (to FP11 on the front panel board)
CN14        RDM connector (to FP11 on the front panel board)
CN15        IDE connector
CN16        CPU2 fan connector
CN17        CPU1 fan connector
CN18        CPU2 temp. connector
CN19        CPU1 temp. connector
CN20        Voltage regulator module 2 (VRM2)
CN21        Voltage regulator module 1 (VRM1)
CN22        Keyboard/mouse connector
CN23        Serial ports 1 and 2
CN24        Video port/parallel port
CN25        Voltage regulator module 4 (VRM4)
CN26        Voltage regulator module 3 (VRM3)
CN27        CPU4 temp. connector
CN28        CPU4 fan connector
CN29        CPU3 fan connector
CN30        CPU3 temp. connector
CN31        ITP connector
CN32        System fan connector
CN33        System fan connector
CN34        System fan connector
Table 3-5  Connector Functions (continued)

Connector   Function
CN35        HDD LED connector
CN36        Extended controller connector
CN37        Redundant power signal connector
CN38        Intel feature connector
CN40        Narrow SCSI connector
CN42        Wide SCSI connector 2
CN43        Wide SCSI connector 1
CN44        Down-side power connector for +12V, +5V
1/2/4-way interleaving access
Slots               6 PCI, 2 EISA, plus one PCI/EISA shared slot
RDM Module          Remote Diagnostic Module H/W & F/W built in
Server Management   ASM Pro Server Management H/W & S/W included
VGA and DRAM        PCI VGA, 1MB DRAM on-board, expandable to 2MB
PCI-SCSI            Two Adaptec 7880 Ultra/Fast, Wide/Narrow SCSI on-board
I/O Integrated      Super I/O SMC935: FDC, AT-IDE, ECP/EPP, 16550 x 2
1.6 Hardware Configurations
1.6.1 Memory Configurations
The memory board comes already installed in the basic system. A total of 16 168-pin DIMM sockets reside on the board. The sockets accept 32-MB, 64-MB, and 128-MB DIMMs for a maximum memory configuration of 2 GB.
1.6.1.1 Important Points When Configuring Memory

• The memory configuration table (Table 1-4) must be followed when upgrading memory.
• All DIMM slots within each group (slots 0-3, 4-7, 8-11, and 12-15) must be populated with identical DIMMs.
• Banks should be populated in order from Bank 0 to Bank 15.
• Installing DIMMs in slots 0-3 (and 4-7, 8-11, and 12-15) enables 4-way interleaving.
1.6.2 Video Memory

Item                    Specification
Memory size             1 MB / 2 MB
Memory type             EDO RAM
Memory configuration    256K x 16 x 2 (1 MB); 256K x 16 x 4 (2 MB)
Fixed or upgradeable    1st MB is fixed onboard; 2nd MB is upgradeable
Memory speed            60 ns
Memory voltage          5 V
Memory package          SOJ 40-pin
1.6.3 Video Display Modes and Refresh Rates
Table 1-6  Display Modes and Refresh Rates for EDO DRAM (refresh rates in Hz)

                256 colors       64K colors       16.7M colors
Resolution      1 MB    2 MB     1 MB    2 MB     1 MB    2 MB
640 x 480       100     100      100     100      90      100
800 x 600       100     100      90      100      —       100
1024 x 768      100     100      —       100      —       —
1152 x 864      80      80       —       80       —       —
1280 x 1024     *       75       —       —        —       —

* 1280 x 1024 @ 16 colors is available at 75 Hz.
1.6.4 Parallel Port Configurations
The onboard parallel port interface supports a 25-pin D-type connector. The port functions in different operation modes and can be set to LPT1, LPT2, or LPT3 by changing the CMOS settings in the BIOS Utility.
Table 1-7 lists the operation mode settings and their corresponding functions.
Table 1-7  Parallel Port Operation Mode Settings

Setting                                Function
Standard Parallel Port (SPP)           Allows normal-speed operation, but in one direction only
Enhanced Parallel Port (EPP 1.7/1.9)   Allows bidirectional parallel port operation at maximum speed
Extended Capabilities Port (ECP)       Allows the parallel port to operate in bidirectional mode and at a speed higher than the maximum SPP data transfer rate
Standard and Bidirectional             Allows normal-speed operation in a two-way mode
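For illustration only (not part of the manual): in SPP mode the software interface is just the legacy registers at the port's base address; on parallel port 1 this is I/O address 378h (see the I/O address map in Section 1.6.6). A hedged C sketch for x86 Linux, assuming root privileges for ioperm():

    #include <stdio.h>
    #include <sys/io.h>   /* ioperm(), outb(), inb(); x86 Linux only */

    #define LPT1_DATA    0x378  /* data latch (I/O map: 378 ~ 37F) */
    #define LPT1_CONTROL 0x37A  /* control register; bit 0 = nStrobe (inverted) */

    int main(void)
    {
        if (ioperm(LPT1_DATA, 3, 1) != 0) {  /* request port access; needs root */
            perror("ioperm");
            return 1;
        }
        outb(0x55, LPT1_DATA);             /* place a byte on the data lines */
        unsigned char ctrl = inb(LPT1_CONTROL);
        outb(ctrl | 0x01, LPT1_CONTROL);   /* pulse nStrobe (hardware-inverted) */
        outb(ctrl & ~0x01, LPT1_CONTROL);
        return 0;
    }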
1.6.5 Serial Port Configurations
The system board has two high-speed 9-pin D-type serial ports. These ports are NS16C550-compatible UARTs with 16-byte FIFO send/receive capability. The port functions are software-adjustable to select COM1, COM2, COM3, or COM4.
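As an illustrative sketch (assumptions: x86 Linux, serial port 1 at its conventional base address 3F8h as listed in the I/O address map in Section 1.6.6), the 16-byte FIFOs of a 16550-class UART are enabled through the FIFO Control Register at base+2:

    #include <stdio.h>
    #include <sys/io.h>  /* ioperm(), outb(); x86 Linux only */

    #define COM1_BASE 0x3F8            /* serial port 1 (I/O map: 3F8 ~ 3FF) */
    #define UART_FCR  (COM1_BASE + 2)  /* 16550 FIFO Control Register */

    int main(void)
    {
        if (ioperm(COM1_BASE, 8, 1) != 0) {  /* needs root */
            perror("ioperm");
            return 1;
        }
        /* Bit 0 enables both FIFOs, bits 1-2 clear the receive and
         * transmit FIFOs, bits 6-7 = 11 select a 14-byte receive
         * trigger level: 0xC7 is the classic "FIFOs on" value. */
        outb(0xC7, UART_FCR);
        return 0;
    }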
1.6.6 Memory Address Map
Table 1-8  Memory Address Map

Address                Name                       Function
00000000 ~ 0009FFFF    640 KB system memory       Main memory
000A0000 ~ 000BFFFF    128 KB video RAM           Graphics display buffer
000C0000 ~ 000C7FFF    32 KB I/O expansion ROM    Video BIOS
000C8000 ~ 000CFFFF    32 KB I/O expansion ROM    Reserved for ROM on I/O adapters
000D0000 ~ 000DFFFF    64 KB I/O expansion ROM    Reserved for ROM on I/O adapters
000E0000 ~ 000E7FFF    32 KB                      System extended BIOS (SCSI BIOS)
000E8000 ~ 000EFFFF    32 KB                      Reserved for system extended BIOS
000F0000 ~ 000FFFFF    64 KB                      System BIOS
00100000 ~ FFFFFFFF    System memory              System memory
The system I/O addresses are assigned as follows:

Address      Function
000 ~ 00F    DMA controller-1
020 ~ 021    Interrupt controller-1
022 ~ 023    ESC (82374) configuration
040 ~ 043    System timer-1
048 ~ 04B    System timer-2
061          NMI status and control
070          NMI mask
080 ~ 08F    DMA page register
092          System control port
0A0 ~ 0A1    Interrupt controller-2
0B2 ~ 0B3    Advanced power management
0C0 ~ 0DE    DMA controller-2
0F0          Reset IRQ 13
1F0 ~ 1F7    Hard disk
278 ~ 27F    Parallel port 2
2F8 ~ 2FF    Serial port 2
378 ~ 37F    Parallel port 1
3B0 ~ 3BF    Monochrome display
3C0 ~ 3CF    EGA, VGA, SVGA
3D0 ~ 3DF    CGA, VGA, SVGA
3F0 ~ 3F7    Diskette drive controller
3F8 ~ 3FF    Serial port 1
*4A0         On-board peripherals control
*4A1 ~ 4A3   ASM control and status (1)
*4A4         Redundant power supply status
*4A5         ASM control and status (2)
*4A6         RDM control and status
*4A7         Backplane board status
*4A8 ~ 4AF   ASM control and status (3)
CF8          PCI configuration address register
CFC          PCI configuration data register
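The last two entries implement the standard PCI configuration mechanism #1: software writes a bus/device/function/register address to port CF8h, then reads or writes the 32-bit value at port CFCh. A hedged C sketch for x86 Linux (illustration only; iopl() needs root, and ports above 3FFh are not reachable via ioperm()):

    #include <stdio.h>
    #include <sys/io.h>  /* iopl(), outl(), inl(); x86 Linux only */

    #define PCI_CONFIG_ADDRESS 0xCF8
    #define PCI_CONFIG_DATA    0xCFC

    /* Reads one 32-bit PCI configuration register via mechanism #1:
     * bit 31 enables the cycle; bits 23:16 = bus, 15:11 = device,
     * 10:8 = function, 7:2 = register number (dword-aligned). */
    static unsigned int pci_config_read(unsigned bus, unsigned dev,
                                        unsigned func, unsigned reg)
    {
        unsigned int addr = 0x80000000u | (bus << 16) | (dev << 11)
                          | (func << 8) | (reg & 0xFCu);
        outl(addr, PCI_CONFIG_ADDRESS);
        return inl(PCI_CONFIG_DATA);
    }

    int main(void)
    {
        if (iopl(3) != 0) {  /* raise I/O privilege level; needs root */
            perror("iopl");
            return 1;
        }
        /* Register 0 holds the vendor ID (low word) and device ID (high word). */
        unsigned int id = pci_config_read(0, 0, 0, 0);
        printf("bus 0, device 0: vendor %04x, device %04x\n",
               id & 0xFFFF, id >> 16);
        return 0;
    }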
1.7 Block Diagrams
1.7.1 System Block Diagram
Figure 1-11  System Block Diagram (X3): four Pentium Pro CPUs, each with a 512K/1M L2 cache, on the 64-bit, 66 MHz GTL+ P6 MP bus (533 MB/s); Orion DC/DP memory controller with four MICs driving 16 DIMMs (DIMM 1 serving Bank 0) of 1/2/4-way interleaved ECC memory; two OPB P6-MP-bus/PCI bridges, each driving a 133 MB/s PCI bus with PCI slots, VGA, dual SCSI, super I/O, ASM, and RDM; PCEB/ESC PCI/EISA bridge to a 33 MB/s EISA bus with three EISA slots.
1.7.2 Memory Controller Block Diagram
Figure 1-12  Memory Controller Block Diagram: two-bank, N:1-way interleaved memory data connections; four 82451 registers and a multiplexer connect 16 banks (ways 1-4, rows 1 and 2) to the memory controller over 72-bit data paths (two 36-bit halves per bank) in 1:1, 2:1, or 4:1 configurations.
1.7.3 Memory Interleaving Block Diagram
Figure 1-13  Memory Interleaving Block Diagram (DIMM population rules of the X3 memory board: slots 1 and 2 support 2:1 interleaving; the remaining slots are populated in groups of four for 4:1 interleaving)
Remarks

1. Please follow Table 1-4 (pages 1-21 and 1-22) for the memory configurations.
2. Every 4 consecutive DIMMs are set as a group (e.g., group 1 means slots 1~4, group 2 means slots 5~8, group 3 means slots 9~12, group 4 means slots 13~16).
3. Except for group 1, memory upgrades should be performed at least one group (4 DIMMs) at a time.
4. All DIMMs in a group should be of the same DIMM size.
5. Adding 1 DIMM, 2 DIMMs, or 4 DIMMs is allowed in group 1, but not in groups 2, 3, and 4. (An illustrative check of these rules appears below.)
6. Please follow the DIMM QVL of this product when selecting DIMMs for upgrade. Using DIMMs outside the QVL is not quality assured.
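As an illustration only (not from the manual), the rules above can be expressed as a small check in C; sizes_mb[i] below holds the DIMM size in MB installed in slot i+1, with 0 meaning empty (rule 3's group ordering is not checked in this sketch):

    #include <stdio.h>

    /* Checks rules 4 and 5 above for a 16-slot configuration.
     * Returns 1 if the population looks valid, 0 otherwise. */
    static int valid_population(const int sizes_mb[16])
    {
        for (int g = 0; g < 4; g++) {  /* rule 2: groups of 4 slots */
            int base = g * 4, count = 0, size = 0;
            for (int i = base; i < base + 4; i++) {
                if (sizes_mb[i]) {
                    count++;
                    if (!size) size = sizes_mb[i];
                    else if (sizes_mb[i] != size) return 0;  /* rule 4 */
                }
            }
            /* Rule 5: only group 1 may hold 1 or 2 DIMMs; groups 2-4
             * must be empty or fully populated. */
            if (g == 0 && count != 1 && count != 2 && count != 4) return 0;
            if (g > 0 && count != 0 && count != 4) return 0;
        }
        return 1;
    }

    int main(void)
    {
        int cfg[16] = { 32, 32, 32, 32 };  /* four 32-MB DIMMs in group 1 */
        printf("configuration %s\n", valid_population(cfg) ? "OK" : "invalid");
        return 0;
    }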
The specific housing configuration of the X3 system determines the total power consumption
required by the system.
1.8.1 400W Power Supply for IDX-2
X3 systems with IDX-2 housing require two to three units of 400-watt power supply modules. The
following specifications are for a single 400-watt module.
1.8.1.1 Output Requirements
Table 1-11  400W SPS Output Rating (columns: Output, Nominal, Output Power, Ripple and Noise, Minimum, Maximum)
When AC power is on, +12V should provide 10A surge current. This regulation should be within -6% and +7%.

Regulation: The total voltage regulation for each level is calculated in terms of the band of voltage defined by the maximum positive and negative excursions (from nominal) that occur.
1.8.1.2 Installation Requirements
Installing or removing power supply modules, the battery, or the charger while the AC power cord is plugged in may damage the whole power subsystem. Adhere to the standard procedure when installing or removing these components.

Failure to follow the standard safety procedure when installing or removing power modules may result in fatal system damage.

Follow these steps to install or remove power supply modules, the battery, or the charger:
1. If the system is on, press the power switch to turn off the power.
2. Unplug the AC power cord from the power outlet.
3. Open the right panel of the system housing.
4. Remove or install power modules.
5. Close the housing, then apply AC power.
1.9 Mechanical Specifications
1.9.1 IDX-2 Housing
Table 1-12  IDX-2 Housing Specifications

Item                             Description
Dimensions                       740 mm (D) x 435 mm (W) x 700 mm (H)
Spacing between adapter cards    0.8 inch
Weight (basic model)             58 kg
Weight (full-load)               82 kg (maximum)
Openings for expansion slots     12
Drive bays                       One 3.5-inch external bay; three 5.25-inch external bays; 14 SCSI disk array drive bays
Major subassembly support        Major subassemblies are rigidly held in place by frame components. Adequate clearances are provided so that cards can be installed and removed without bending or forcing. All other components such as the SPS and FDD can be assembled easily.
Circuit card support             Circuit cards plugged into the system board are supported by a card edge connector, the card end bracket, and a card edge guide supporting the card edge from the farthest end bracket (if the card conforms to the full length).
Metal finish                     All metal surfaces are plated or given equivalent treatment. Lower case, left cover, frame, and right cover: metal.
Color/Paint                      Units delivered in specified MCS colors. Paint samples supplied to the vendor as required.
Case finish                      All surfaces are textured or given equivalent treatment.
Figure 1-17 shows the IDX-2 housing.
Figure 1-17  IDX-2 Housing
1.10 Shipping Configuration
Figure 1-18 shows the basic model configuration for an X3 system with IDX-2 housing (AcerAltos
19000Pro4).
Figure 1-18  Basic Model Configuration (callouts: three 400W SPS modules and the battery box; four P6 CPUs and two OPBs on the system board)
1.11 Cable Connections
The following figures illustrate the cable connections for the different system components.
Figure 1-19  System Board Power Cable Connections
Figure 1-20  System Boards and Power Subsystem Interconnections

Figure 1-28  Cable 7 Definition for Power Subsystem J7
Cable 9 - Power Subsystem J10
Part Number: 50.59912.011

(Wiring diagram “C9/J10”: a 3-pin connector from M.B. CN1 (p1: NC; p2, p3: PS_ON, 5VSTNDBY) connects over a 46 cm run to a 4-pin connector at power subsystem J10 (GND, +5V standby, remote on/off; p1, p4: NC); 4-pin connectors with 15 cm spacing (+12V, GND, GND, +5V) feed the FDD/HDD/tape/CD-ROM drives.)

Figure 1-29  Cable 9 Definition for Power Subsystem J10
Cable 10 - Power Subsystem J14
Part Number: 50.59911.021

(Wiring diagram: 2-pin connector from power subsystem J14; 22 AWG black cable; segment lengths of 39, 28, 82, and 43 cm.)

Figure 1-30  Cable 10 Definition for Power Subsystem J14
Cable 11 - Front Panel BD
Part Number: 50.59904.021

(Wiring diagram “CABLE 11”: 34-pin connector from the M.B. front panel connector CN9 to front panel board CN6, pin 16 keyed on both ends, length 70 cm; includes R.D. S.W. and L.D. S.W. leads.)

Figure 1-31  Cable 11 Definition for the Front Panel Board
Cable 12 - Wide SCSI Cables
Part Number: 50.59906.001

(Wiring diagram: 68-pin wide SCSI connector from M.B. SCSI connector J20; length 75 cm.)

Figure 1-32  Cable 12 Definition for the Wide SCSI Connectors
Cables 14 & 15 - SCSI B.P. BD

Cable 14 (Part Number: 50.59903.041)
(Wiring diagram “CABLE 14”: 14-pin connector from F.P. BD CN3 to SCSI BP-W7 (L) P4, length 68 cm; 68-pin wide SCSI connector to SCSI BP-W7 (L) channel-0/channel-1.)

Cable 15 (Part Number: 50.59903.001)
(Wiring diagram “CABLE 15”: 14-pin connector from F.P. BD CN4 to SCSI BP-W7 (R) P4, length 52 cm.)

Figure 1-33  Cables 14 and 15 Definition for the SCSI Backplane Board
Cables 16 & 17 - Floppy & IDE
Part Numbers: 50.59904.031, 50.59905.011

(Wiring diagram “CABLE 16”: 34-pin connector from the M.B. floppy connector CN10 to the FDD, pin 5 keyed, length 47 cm. Note: pins 10 ~ 16 must be inverted for a 3.5" FDD.)

(Wiring diagram “CABLE 17”: 40-pin connector from the M.B. IDE connector CN12 to the HDD, length 47 cm.)

Figure 1-34  Cables 16 and 17 Definition for Diskette and IDE Drives

Cable 18 - Power Subsystem J7
Part Number: 50.59914.011

(Wiring diagram “C18/J7”: 10-pin connector from power subsystem J7 to M.B. CN2, length 28 cm. J7 end, pins 1~10: GND, GND, GND, +5V, +5V, +12V, GND, GND, +5V, +5V. CN2 end, pins 1~10: +5V, +5V, +5V, +5V, +12V, GND, GND, GND, GND, GND. The CN2 end must carry a yellow caution label.)

Figure 1-35  Cable 18 Definition for Power Subsystem J7
Cable 19 - Narrow SCSI Cable
Part Number: 50.58607.001

(Wiring diagram “C19/J17”: 50-pin connector from the M.B. narrow SCSI connector J17 to the HDDs; length 75 cm, with 15 cm spacing between drive connectors.)

Figure 1-36  Cable 19 Definition for the Narrow SCSI Connector
Chapter 2
Major Chipsets
This chapter describes the major chipsets used in the X3 system board and memory board. It
includes the chipset features, block diagram, and signal descriptions.
2.1 Pentium Pro processor (P6)
2.1.1 Features
The Pentium Pro (P6) CPU is the next generation in the Intel386/Intel486 family of microprocessors. The P6 processor maintains binary compatibility with the 8086/88, 80286, Intel386, Intel486, and Pentium processors. It integrates the second-level cache, APIC, and memory bus controller found in previous Intel processors as a single component.

The P6 processor provides significant performance improvements using the following internal architectural improvements:

• Super-scalar model
• Super-pipelined model
• Register renaming
• Out-of-order execution
• Speculative execution

The expected P6 processor performance gains, over a Pentium processor using the same process technology, are 1.5X (integer Specmark), 2.0X (floating-point Specmark), and 1.75X (aggregate Specmark).
Increasing clock frequencies and processor performance can complicate system designs. To
counter this trend, a primary design goal of the P6 processor bus is to simplify system design as
much as possible while still providing advanced features and high performance. The P6 processor
provides all of the debug hooks on previous generation processors, and has increased the debug
information available directly from the bus. In addition, the P6 processor integrates several
system components and has a configurable bus frequency.
The external P6 bus design enables it to be multiprocessor ready. It integrates bus arbitration and
control, cache coherency circuitry, an MP interrupt controller, and other system-level functions into
the bus interface.
To relax timing constraints, the P6 implements a synchronous latched bus protocol to enable a full
clock cycle for signal transmission and a full-clock cycle for signal interpretation and generation.
This latched protocol simplifies interconnect timing requirements and supports higher frequency
system designs using inexpensive ASIC interconnect technology. The P6 bus uses low-voltage
swing gunning transceiver logic (GTL) I/O buffers, making high-frequency signal communication
easier.
The P6 processor component contains a processor core and a large second-level (L2) cache. The
high internal cache hit rate satisfies most of the CPU core's bandwidth and latency requirements.
The L2 cache reduces the P6 processor's external memory bandwidth requirement and makes the
processor's performance less sensitive to bus access latency. Eliminating external caches
removes some complexities in P6 processor system design.
The processor handles most of the P6 processor cache protocol complexity. A non-caching I/O
bridge on the P6 bus does not need to comprehend the cache protocol and does not need snoop
logic. The I/O bridge can issue standard memory accesses on the P6 bus, which are transparently
snooped by all P6 bus agents. If data is modified in a P6 processor cache, the processor
transparently provides data on the bus, instead of the memory controller. This functionality
eliminates the need for the back-off capability that existing I/O bridges require to enable cache write-back cycles. The memory controller must observe snoop response signals driven by the P6 bus
agents, absorb write-back data on a modified hit, and merge any write data.
The P6 processor integrates memory type range registers (MTRRs) that can replace the external
address decode logic used to decode cacheability attributes.
The P6 bus protocol enables a near linear increase in system performance with an increase in the
number of processors. The P6 processor interfaces to a multiprocessor system without any
support logic. This “glueless” interface enables a desktop P6 processor system to be built with an
upgrade socket for another P6 processor. The key design challenge in a P6 processor chipset is
to take advantage of the P6 bus protocol and adapt to the higher bandwidth requirements of
multiple processors.
The external P6 bus and P6 processor use a ratio clock design that provides modularity and
upgradability. The processor's internal clock frequency is a multiple of the bus clock frequency,
where the multiple is 2, 3, or 4. Only certain bus and processor frequency combinations are
supported. This specification reserves additional combinations to provide future upgrade paths.
The ratio clock approach reduces the tight coupling between the processor clock and the external
bus clock. For a fixed system-bus clock frequency, P6 processors introduced later with higher
processor clock frequencies can use the same support chipset at the same bus frequency. Faster
and slower P6 processors can co-exist in the same system. A customer's investment in a P6
processor chipset is protected for a longer time and for a greater range of processor frequencies.
The ratio clock approach also preserves system modularity, allowing a system's electrical topology
to determine the system bus clock frequency while process technology can determine the
processor clock frequency.
The P6 bus architecture provides a number of features to support high reliability and high
availability designs. Most of these additional features can be disabled, if necessary. For example,
the bus architecture allows the data bus to be unprotected with parity, or protected with an error
correcting code (ECC). Error detection and limited recovery are built into the bus protocol.
A P6 processor-based cluster can contain up to four P6 processors, and a combination of four
other loads consisting primarily of memory controllers, I/O bridges, and custom attachments.
In a four-processor system, the data bus is the most critical resource. To account for this situation, the P6 bus implements several features to maximize available bus bandwidth. These features include pipelined transactions, in which bus transactions in different phases overlap, an increase in transaction pipeline depth over previous generations, and support for deferring a transaction for later completion.
A P6 processor system for the high-end server market can contain four-processor clusters, each
behind a level 2 (L2) cache controller. The P6 processor cache protocol provides flexibility in the
L2 cache design, enabling high-end servers to use their L2 cache design for a key difference in
performance. Long-latency L2 misses can be supported in a deferred reply mode without
preventing further transactions from being issued and completed.
The P6 bus architecture is therefore adaptable to various classes of systems. In desktop
multiprocessor systems, a subset of the bus features can be used. In the low-end server market,
the P6 bus provides an easier entry into low-end multiprocessing with linear increases in
performance as CPUs are added. Finally, the P6 bus meets the demands of the high-end server
marketplace, allowing P6 processor systems to be considered for applications currently being
downsized.
2.1.2 Pin Diagram
Figure 2-1  P6 Processor Pin Diagram
2.1.3 CPU ID
(Package markings: top marking “KB8Ø521EXZZZ QYYYY LLLK” with FPO lot number FFFFFFFF and serial number XXXX; bottom marking “KB8Ø521EXZZZ SYYYY LLLK” with S-spec number SYYYY, a CPU alternative identification number, and the country of origin and product designator. ZZZ = speed in MHz; LLL = L2 cache size in Kbytes.)

Figure 2-2  Pentium Pro CPU Identification Markings
Source: Intel Pentium Pro Processor Specification Update released on May 15, 1996.
2.1.4 Signal Types
The P6 processor has the following signal types.
• 3.3V Tolerant Type (TTL-compatible)
• 5V Tolerant Type (TTL-compatible)
• GTL+ Type
• JTAG Type
2.1.4.1 GTL+ Type
Most of the P6 processor signals use a variation of the low-voltage Gunning Transceiver Logic (GTL) signaling technology. The P6 processor bus specification is similar to the GTL specification plus enhancements to provide larger noise margins and reduced ringing. This is accomplished by increasing the termination voltage level and controlling the edge rates. Since this specification is different from the standard GTL specification, it is referred to as GTL+ in this document.

The GTL+ signals are open-drain and require external termination to a supply that provides the high-signal level. The GTL+ inputs use differential receivers that require a reference signal (VREF). Termination, usually a resistor on each end of the signal trace, is used to pull the bus up to the high-voltage level and to control reflections on the stub-free transmission line. The receivers use VREF to determine if a signal is a logical 0 or a logical 1.
Table 2-1 lists the bus termination voltage specifications for GTL+.
Table 2-1  GTL+ Bus Termination Voltage Specifications

Symbol   Parameter                   Voltage (V)
VTT      Bus Termination Voltage     1.5 ±10%
VREF     Input Reference Voltage     2/3 VTT ±2%

VREF should be created from VTT by a voltage divider of 1% resistors.
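As a worked example (illustrative values, not from the manual): with VTT at its nominal 1.5 V, VREF = 2/3 x 1.5 V = 1.0 V. A divider with R1 from VTT to VREF and R2 from VREF to ground gives VREF = VTT x R2 / (R1 + R2), so any 1% resistor pair with R2 = 2 x R1 (for instance, R1 = 1 kΩ and R2 = 2 kΩ) produces the required 2/3 ratio.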
There are eight VREF pins on the P6 processor to ensure that internal noise does not affect the performance of the I/O buffers. Pins A1, C7, S7, and Y7 (VREF[3:0]) must be tied together, while pins A47, U41, AE47, and AG45 (VREF[7:4]) must be tied together. The two groups may also be tied to each other if desired.

Figure 2-3  GTL+ Bus Topology (CPU and ASIC agents on a bus terminated to 1.5V at each end)

2.1.4.2 JTAG

JTAG stands for Joint Test Action Group. This signal type is tolerant to 3.3V and is used especially for testing and debugging.
Table 2-2  P6 Processor Signal Descriptions

A[35:3]# (I/O, GTL+): The A[35:3]# signals are the address signals. They are driven during the two-clock request phase by the request initiator. The signals in the two clocks are referenced Aa[35:3]# and Ab[35:3]#. During both clocks, A[35:24]# signals are protected with the AP1# parity signal, and A[23:3]# signals are protected with the AP0# parity signal.

The Aa[35:3]# signals are interpreted based on information carried during the first request phase clock on the REQa[4:0]# signals.

For memory transactions as defined by REQa[4:0]# = {XX01X, XX10X, XX11X}, the Aa[35:3]# signals define a 2^36-byte physical memory address space. The cacheable agents in the system observe the Aa[35:3]# signals and begin an internal snoop. The memory agents in the system observe the Aa[35:3]# signals and begin address decode to determine if they are responsible for the transaction completion. Aa[4:3]# signals define the critical word, the first data chunk to be transferred on the data bus.

For IO transactions as defined by REQa[4:0]# = 1000X, the signals Aa[16:3]# define a 64K+3 byte physical IO space. The IO agents in the system observe the signals and begin address decode to determine if they are responsible for the transaction completion. Aa[35:17]# are always zero. Aa[16:3]# is zero unless the IO space being accessed is the first three bytes of a 64KB address range.

For deferred reply transactions as defined by REQa[4:0]# = 00000, Aa[23:16]# carry the deferred ID. This signal is the same deferred ID supplied by the request initiator of the original transaction on the Ab[23:16]#/DID[7:0]# signals. P6 bus agents that support deferred replies sample the deferred ID and perform an internal match against any outstanding transactions waiting for deferred replies. During a deferred reply, Aa[35:24]# and Aa[15:3]# are reserved.

For the branch-trace message transaction as defined by REQa[4:0]# = 01001 and for special and interrupt acknowledge transactions, as defined by REQa[4:0]# = 01000, the Aa[35:3]# signals are reserved and undefined.
Ab[35:3]# (I/O, GTL+): During the second clock of the request phase, the Ab[35:3]# signals perform identical signal functions for all transactions. For ease of description, these functions are described using new signal names. Ab[31:24]# are renamed the attribute signals ATTR[7:0]#. Ab[23:16]# are renamed the deferred ID signals DID[7:0]#. Ab[15:8]# are renamed the eight byte-enable signals BE[7:0]#. Ab[7:3]# are renamed the extended function signals EXF[4:0]#.

On the active-to-inactive transition of RESET#, each P6 bus agent samples the A[35:3]# signals to determine its power-on configuration. Two clocks after RESET# is sampled deasserted, these signals begin normal operation.
A20M# (pin A11; I, 3.3V): The A20M# signal is the address bit 20 mask signal in the PC Compatibility group. If the A20M# input signal is asserted, the P6 processor masks physical address bit 20 (A20#) before looking up a line in any internal cache and before driving a read/write transaction on the bus. Asserting A20M# emulates the 8086 processor's address wraparound at the one MB boundary. Only assert A20M# when the processor is in real mode. The effect of asserting A20M# in protected mode is undefined and may be implemented differently in future processors.

Snoop requests and cache-line write-back transactions are unaffected by the A20M# input. Address 20 is not masked when the processor samples external addresses to perform internal snooping.

A20M# is an asynchronous input. However, to guarantee recognition of this signal following an I/O write instruction, A20M# must be valid with the active RS[2:0]# signals of the corresponding I/O write bus transaction. In FRC mode, A20M# must be synchronous to BCLK.

During active RESET#, the P6 processor begins sampling the A20M# and IGNNE# values to determine the ratio of core-clock frequency to bus-clock frequency.
A20M# (continued): The following table shows the bus frequency to core frequency ratio configuration. (Table: ratio of core clock to bus clock, selected by the LINT[1]#, LINT[0]#, IGNNE#, and A20M# values sampled at RESET#.)

ADS# (pin AE3; I/O, GTL+): The ADS# signal is the address strobe signal. It is
asserted by the current bus owner for one clock
to indicate a new request phase. A new request
phase can only begin if the in-order queue has
less than the maximum number of entries defined
by the power-on configuration (1 or 8), the
request phase is not being stalled by an active
BNR# sequence and the ADS# associated with
the previous request phase is sampled inactive.
Along with the ADS#, the request initiator drives
A[35:3]#, REQ[4:0]#, AP[1:0]#, and RP# signals
for two clocks. During the second request phase
clock, ADS# must be inactive. RP# provides
parity protection for REQ[4:0]# and ADS# signals
during both clocks. If the transaction is part of a
bus locked operation, LOCK# must be active with
ADS#.
If the request initiator continues to own the bus
after the first request phase, it can issue a new
request every three clocks. If the request initiator
needs to release the bus ownership after the
request phase, it can deactivate its
BREQn#/BPRI# arbitration signal as early as with
the activation of ADS#.
All bus agents observe the ADS# activation to
begin parity checking, protocol checking, address
decode, internal snoop, or deferred reply ID
match operations associated with the new
transaction. On sampling the asserted ADS#, all
agents load the new transaction in the in-order
queue and update internal counters. The error,
snoop, response, and data phase of the
transaction are defined with respect to ADS#
assertion.
AERR# (pin AE9; I/O, GTL+): The AERR# signal is the address parity error
signal. Assuming the AERR# driver is enabled
during the power-on configuration, a bus agent
can drive AERR# active for exactly one clock
during the error phase of a transaction. AERR#
must be inactive for a minimum of two clocks.
The error phase is always three clocks from the
beginning of the request phase.
On observing active ADS#, all agents begin parity
and protocol checks for the signals valid in the
two request phase clocks. Parity is checked on
AP[1:0]# and RP# signals. AP1# protects
A[35:24]#, AP0# protects A[23:3]# and RP#
protects REQ[4:0]#. A parity error without a
protocol violation is signaled by AERR#
assertion.
If AERR# observation is enabled during a power-on configuration, AERR# assertion in a valid error
phase aborts the transaction. All bus agents
remove the transaction from the in-order queue
and update internal counters. The snoop phase,
response phase, and data phase of the
transaction are aborted. Specifically if the snoop
phase associated with the aborted transaction is
driven in the next clock, the snoop results,
including a STALL condition (HIT# and HITM#
asserted for one clock), are ignored. All bus
agents must also begin an arbitration reset
sequence and deassert BREQn#/BPRI#
arbitration signals on sampling AERR# active. A
current bus owner in the middle of a bus lock
operation must keep LOCK# asserted and assert
its arbitration request BPRI#/BREQn# after
keeping it inactive for two clocks to retain its bus
ownership and guarantee lock atomicity. All other
agents, including the current bus owner not in the
middle of a bus lock operation, must wait at least
4 clocks before asserting BPRI#/BREQn# and
beginning a new arbitration.
If AERR# observation is enabled, the request
initiator can retry the transaction up to n times
until it reaches the retry limit defined by its
implementation. After n retries, the request
initiator treats the error as a hard error. The
request initiator asserts BERR# or enters the
machine check exception handler, as defined by
the system configuration.
If AERR# observation is disabled during a power-on configuration, AERR# assertion is ignored by
all bus agents except a central agent. Based on
the system machine check architecture, the
central agent can ignore AERR#, assert NMI to
execute NMI handler, or assert BINIT# to reset
the bus units of all agents and execute an MCE
handler.
AP[1:0]# (pins S9, U1; I/O, GTL+): The AP[1:0]# signals are the address parity
signals. They are driven by the request initiator
during the two request phase clocks along with
ADS#, A[35:3]#, REQ[4:0]#, and RP#. AP1#
covers A[35:24]#. AP0# covers A[23:3]#. A
correct parity signal is high if an even number of
covered signals are low and low if an odd number
of covered signals are low. This rule allows parity
to be high when all the covered signals are high.
Provided "AERR# drive" is enabled during the
power-on configuration, all bus agents begin
parity checking on observing active ADS# and
determine if there is a parity error. On observing
a parity error on any one of the two request phase
clocks, the bus agent asserts AERR# during the
error phase of the transaction.
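To make the even-parity rule above concrete (an illustrative sketch, not from the manual), the driven parity level can be computed from a mask of which covered signals are electrically low:

    #include <stdio.h>

    /* Implements the parity rule described above: the parity signal is
     * high (1) when an even number of covered signals are low.
     * 'low_mask' has one bit set per covered signal that is low. */
    static int ap_parity(unsigned long long low_mask)
    {
        int lows = 0;
        while (low_mask) {
            lows += (int)(low_mask & 1);
            low_mask >>= 1;
        }
        return (lows % 2) == 0;
    }

    int main(void)
    {
        printf("%d\n", ap_parity(0x0));  /* no lows -> parity high (1) */
        printf("%d\n", ap_parity(0x5));  /* two lows -> 1 */
        printf("%d\n", ap_parity(0x7));  /* three lows -> 0 */
        return 0;
    }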
BCLK (pin A19; I, 3.3V): The BCLK (clock) signal is the execution control
group input signal. It determines the bus
frequency. All agents drive their outputs and latch
their inputs on the BCLK rising edge.
The BCLK signal indirectly determines the P6
processor's internal clock frequency. Each P6
processor derives its internal clock from BCLK by
multiplying the BCLK frequency by 2, 3, or 4 as
defined and allowed by the power-on
configuration.
All external timing parameters are specified with
respect to the BCLK signal.
BERR# (pin C5; I/O, GTL+): The BERR# signal is the error group bus error
signal. It indicates an unrecoverable error without
a bus protocol violation if asserted.
The BERR# protocol is as follows: If an agent
detects an unrecoverable error for which BERR#
is a valid error response and BERR# is sampled
inactive, it asserts BERR# for three clocks. An
agent can assert BERR# only after observing that
the signal is inactive. An agent asserting BERR#
must deassert the signal in two clocks if it
observes that another agent began asserting
BERR# in the previous clock.
BERR# assertion conditions are defined by the
system configuration. Configuration options
enable the BERR# driver as follows:
• enabled or disabled
• asserted optionally for internal errors along with IERR#
• optionally asserted by the request initiator of a bus transaction after it observed an error
• asserted by any bus agent when it observes an error in a bus transaction
BERR# (continued): BERR# sampling conditions are also defined by
the system configuration. Configuration options
enable the BERR# receiver to be enabled or
disabled. When the bus agent samples an active
BERR# signal and if MCE is enabled, the P6
processor enters the machine check handler. If
MCE is disabled, typically the central agent
forwards BERR# as an NMI to one of the
processors.
If the BINIT# driver is enabled during the power
on configuration, BINIT# is asserted to signal any
bus condition that prevents reliable future
information.
The BINIT# protocol is as follows: If an agent
detects an error for which BINIT# is a valid error
response, and BINIT# is sampled inactive, it
asserts BINIT# for three clocks. An agent can
assert BINIT# only after observing that the signal
is inactive. An agent asserting BINIT# must
deassert the signal in two clocks if it observes
that another agent began asserting BINIT# in the
previous clock.
If BINIT# observation is enabled during a poweron configuration, and BINIT# is sampled
asserted, all bus state machines are reset. All
agents reset their rotating ID for bus arbitration to
the state after reset, and internal count
information is lost. The L1 and L2 caches are not
affected.
If BINIT# observation is disabled during power-on
configuration, BINIT# is ignored by all bus agents
except a central agent that must handle the error
in a manner appropriate to the system
architecture.
in the arbitration group. The BNR# signal is used
to assert a bus stall by any bus agent who is
unable to accept new bus transactions to avoid
an internal transaction queue overflow. During a
bus stall, the current bus owner cannot issue any
new transactions.
Since multiple agents might need to request a
bus stall at the same time, BNR# is a wire-OR
signal. In order to avoid wire-OR glitches
associated with simultaneous edge transitions
driven by multiple drivers, BNR# is activated on
specific clock edges and sampled on specific
clock edges.
A valid bus stall involves assertion of BNR# for one clock on a well-defined
clock edge (T1), followed by de-assertion of BNR# for one clock on the next
clock edge (T1+1). BNR# can first be sampled on the second clock edge (T1+1)
and must always be ignored on the third clock edge (T1+2). An extension of a
bus stall requires a one-clock-active (T1+2), one-clock-inactive (T1+3) BNR#
sequence, with BNR# sampling points every two clocks (T1+1, T1+3, ...).

After the RESET# active-to-inactive transition, bus agents might need to
perform hardware initialization of their bus unit logic. Bus agents
intending to create a request stall must assert BNR# in the clock after
RESET# is sampled inactive.

After BINIT# assertion, all bus agents go through a similar hardware
initialization and can create a request stall by asserting BNR# four clocks
after BINIT# assertion is sampled.

On the first BNR# sampling clock on which BNR# is sampled inactive, the
current bus owner is allowed to issue one new request. Any bus agent can
immediately reassert BNR# (four clocks from the previous assertion or two
clocks from the previous de-assertion) to create a new bus stall. This
throttling mechanism enables independent control over every new request
generation.

If BNR# is deasserted on two consecutive sampling points, new requests can
be freely generated on the bus. After receiving a new transaction, a bus
agent can require an address stall due to an anticipated transaction-queue
overflow condition. In response, the bus agent can assert BNR# three clocks
from active ADS# assertion and create a bus stall. Once a bus stall is
created, the bus remains stalled as long as BNR# is sampled asserted on
subsequent sampling points.
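The stall timing above can be visualized with a short C sketch that simply enumerates the drive clocks and sampling points for a stall extended n times (the framing and names are illustrative only).

    #include <stdio.h>

    /* Print BNR# drive clocks and sampling points for a stall started
     * on clock t1 and extended n times. */
    static void bnr_stall_timing(int t1, int n)
    {
        for (int k = 0; k <= n; k++)
            printf("BNR# active on clock %d, inactive on clock %d\n",
                   t1 + 2 * k, t1 + 2 * k + 1);
        for (int k = 0; k <= n; k++)       /* samples every two clocks */
            printf("BNR# sampling point on clock %d\n", t1 + 2 * k + 1);
    }

    int main(void) { bnr_stall_timing(0, 2); return 0; }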
BP[3:2]# (pins AC39, AE43; I/O, GTL+): The BP[3:2]# signals are the system
support group breakpoint signals. They are outputs from the P6 processor
that indicate the status of breakpoints.

BPM[1:0]# (pins AA39, AC41; I/O, GTL+): The BPM[1:0]# signals are more
system support group breakpoint and performance monitor signals. They are
outputs from the P6 processor that indicate the status of breakpoints and
programmable counters used for monitoring P6 performance.
BPRI# (pin U5; I, GTL+): The BPRI# is the priority-agent bus request
signal. The priority agent arbitrates for the bus by
asserting BPRI#. The priority agent is always the
next bus owner. Observing BPRI# active causes
the current symmetric owner to stop issuing new
requests unless the requests are part of an
ongoing locked operation.
If LOCK# is sampled inactive two clocks from
BPRI# driven asserted, the priority agent can
issue a new request within four clocks of
asserting BPRI#. The priority agent can further
reduce its arbitration latency to two clocks if it
samples active ADS# and inactive LOCK# on the
clock in which BPRI# was driven active. It can
reduce its arbitration latency to three clocks if it
samples active ADS# and inactive LOCK# on the
clock in which BPRI# was sampled active. If
LOCK# is sampled active, the priority agent must
wait for LOCK# to be sampled deasserted to gain
bus ownership in two clocks. The priority agent
can keep BPRI# asserted until all of its requests
are completed and can release the bus by
deasserting BPRI# at the same clock edge on
which it issued the last request.
On observation of active AERR#, RESET#, or
BINIT#, BPRI# must be deasserted in the next
clock. BPRI# can be reasserted in the clock after
sampling the RESET# active-to-inactive transition
or three clocks after sampling BINIT# active and
RESET# inactive. On AERR# assertion, if the
priority agent is in the middle of a bus-locked
operation, BPRI# must be reasserted after two
clocks, otherwise BPRI# must stay inactive for at
least 4 clocks.
After the RESET# inactive transition, P6 bus
agents begin BPRI# and BNR# sampling on
BNR# sample points. If both BNR# and BPRI#
are observed inactive on BNR# sampling points,
the P6 APIC units on a common APIC bus are
synchronized. In a system with multiple P6 bus
clusters sharing a common APIC bus, BPRI#
signals of all clusters must be asserted after
RESET# until BNR# is observed inactive on a
BNR# sampling point. The BPRI# signal on all P6
buses must then be deasserted within 100ns of
each other to accomplish APIC bus
synchronization across all processors.
CPUPRES# (pin B2; Other): CPUPRES# is a ground pin that allows a designer
to detect the presence of a processor in a socket.

D[63:0]# (I/O, GTL+): The D[63:0]# signals are the data signals. They are
driven during the data phase by the agent responsible for driving the data.
These signals provide a 64-bit data path between various P6 bus agents. The
32-byte line transfers require four data transfer clocks with valid data on
all eight bytes. Partial transfers require one data transfer clock with
valid data on the byte(s) indicated by active byte enables BE[7:0]#. Data
signals not valid for a particular transfer must still have correct ECC (if
data bus ECC is selected). If BE0# is asserted, D[7:0]# transfers the least
significant byte. If BE7# is asserted, D[63:56]# transfers the most
significant byte.

The data driver asserts DRDY# to indicate a valid data transfer. If the data
phase involves more than one clock, the data driver also asserts DBSY# at
the beginning of the data phase and deasserts DBSY# on the same clock on
which it performs the last data transfer.

BR[3:0]#: The BR[3:0]# pins are the physical bus request pins that drive
the BREQ[3:0]# signals in the system. The BREQ[3:0]# signals are
interconnected in a rotating manner to individual processor pins; the
rotating interconnections determine each agent's ID, as shown below.

During a power-up configuration, the central agent must assert the BR0# bus
signal. All symmetric agents sample their BR[3:0]# pins on the
active-to-inactive transition of RESET#. The pin on which the agent samples
an active level determines its agent ID. All agents then configure their
pins to match the appropriate bus signal protocol:

Pin sampled active on RESET#    Agent ID
BR0#                            0
BR3#                            1
BR2#                            2
BR1#                            3
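Because the BR[3:0]#-to-agent-ID assignment above is a fixed mapping, it reduces to a four-entry lookup. The C sketch below is illustrative only.

    /* Agent ID from the BR pin sampled active at the RESET# edge
     * (0..3 for BR0#..BR3#); returns -1 for an invalid pin index. */
    static int agent_id_from_br_pin(int br_pin)
    {
        static const int id[4] = {
            0,  /* BR0# active -> agent ID 0 */
            3,  /* BR1# active -> agent ID 3 */
            2,  /* BR2# active -> agent ID 2 */
            1,  /* BR3# active -> agent ID 1 */
        };
        return (br_pin >= 0 && br_pin <= 3) ? id[br_pin] : -1;
    }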
DBSY# (pin AA5; I/O, GTL+): The DBSY# signal is the data-bus busy signal. It
indicates that the data bus is busy. It is asserted
by the agent responsible for driving the data
during the data phase, provided the data phase
involves more than one clock. DBSY# is asserted
at the beginning of the data phase and is
deasserted on the same clock on which the last
data is driven.
When normal read data is being returned, the
data phase begins with the response phase. Thus
the agent returning read data can assert DBSY#
when the transaction reaches the top of the in-order queue and it is ready to return the response on
RS[2:0]# signals. In response to a write request,
the agent driving the write data must drive
DBSY# active after the write transaction reaches
the top of the in-order queue and it sees active
TRDY# with inactive DBSY# indicating that the
target is ready to receive data. For an implicit
write-back response, the snoop agent must assert
DBSY# active after the target memory agent of
the implicit write-back asserts TRDY#. Implicit
write-back TRDY# assertion begins after the
transaction reaches the top of the in-order queue,
and TRDY# de-assertion associated with the write
portion of the transaction, if any, is completed. In
this case, the memory agent guarantees
assertion of implicit write-back response in the
same clock in which the snooping agent asserts
DBSY#.
DEFER# (pin Y5; I, GTL+): The DEFER# signal is the defer signal. It is
asserted by an agent during the snoop phase to
indicate that the transaction cannot be
guaranteed in-order completion. Assertion of
DEFER# is normally the responsibility of the
addressed memory agent or I/O agent. For
systems that involve resources on a system bus
other than the P6 bus, a bridge agent can accept
the DEFER# assertion responsibility on behalf of
the addressed agent.
DEFER# can only be asserted if DEN# is active
during the request phase. When HITM# and
DEFER# are both active during the snoop phase,
HITM# is given priority and the transaction must
be completed with implicit write-back response. If
HITM# is inactive, and DEFER# active, the agent
asserting DEFER# must complete the transaction
with a deferred or retry response.
If DEFER# is inactive, or HITM# is active, then
the transaction is committed for in-order
completion and snoop ownership is transferred
normally between the requesting agent, the
snooping agents, and the response agent.
If DEFER# is active with HITM# inactive, the
transaction commitment is deferred. If the defer
agent completes the transaction with a retry
response, the requesting agent must retry the
transaction. If the defer agent returns a deferred
response, the requesting agent must freeze
snoop state transitions associated with the
deferred transaction and the issue of new order-dependent
transactions until the corresponding
deferred reply transaction completes. In the meantime,
ownership of the deferred address is transferred
to the defer agent and it must guarantee
management of conflicting transactions issued to
the same address.
If DEFER# is active in response to a newly issued
bus-lock transaction, the entire bus-locked
operation is re-initiated regardless of HITM#. This
feature is useful for a bridge agent in response to
a split bus-locked operation. It is recommended
that the bridge agent extend the snoop phase of
the first transaction in a split locked operation
until it can either guarantee ownership of all
system resources to enable successful
completion of the split sequence or assert
DEFER# followed by a retry response to abort the
split sequence.
DEP[7:0]# (pins U39, Y45, AA47, W41, A47, W39, Y43, AC45; I/O, GTL+): The
DEP[7:0]# signals are the data bus
ECC/parity signals. They are driven during the
data phase by the agent responsible for driving
D[63:0]#. The DEP[7:0]# signals provide optional
ECC protection for the data bus. During power-on
configuration, DEP[7:0]# signals can be enabled
for either ECC checking or no checking.
The ECC error correcting code can detect and
correct single-bit errors and detect double-bit or
nibble errors.
DEP[7:0]# provide valid ECC for the entire data
bus on each data clock, regardless of which bytes
are valid. If checking is enabled, receiving agents
check the ECC signals for all 64 data signals.
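To make single-bit correction and double-bit detection concrete, the toy C example below implements a classic Hamming code plus an overall parity bit over just 4 data bits. It is only a scaled-down illustration of SEC-DED behavior; the actual 8-check-bit code carried on DEP[7:0]# over the 64-bit bus is different and is not reproduced here.

    #include <stdint.h>
    #include <stdio.h>

    /* Encode 4 data bits into a Hamming(7,4) codeword plus an overall
     * parity bit. Bit i of the result holds Hamming position i
     * (bit 0 unused, bit 8 is the overall parity). */
    static uint16_t secded_encode(uint8_t d)
    {
        uint16_t c = 0;
        c |= ((d >> 0) & 1) << 3;   /* data at positions 3, 5, 6, 7 */
        c |= ((d >> 1) & 1) << 5;
        c |= ((d >> 2) & 1) << 6;
        c |= ((d >> 3) & 1) << 7;
        uint16_t p1 = ((c >> 3) ^ (c >> 5) ^ (c >> 7)) & 1;
        uint16_t p2 = ((c >> 3) ^ (c >> 6) ^ (c >> 7)) & 1;
        uint16_t p4 = ((c >> 5) ^ (c >> 6) ^ (c >> 7)) & 1;
        c |= (p1 << 1) | (p2 << 2) | (p4 << 4);
        uint16_t p8 = 0;            /* overall parity enables DED */
        for (int i = 1; i <= 7; i++) p8 ^= (c >> i) & 1;
        return c | (p8 << 8);
    }

    /* Returns 0 = clean, 1 = single error corrected, 2 = double error
     * detected (uncorrectable). *d receives the (corrected) data. */
    static int secded_decode(uint16_t c, uint8_t *d)
    {
        int syn = 0;
        for (int p = 1; p <= 4; p <<= 1) {   /* recompute parity groups */
            int par = 0;
            for (int i = 1; i <= 7; i++)
                if (i & p) par ^= (c >> i) & 1;
            if (par) syn |= p;
        }
        int overall = 0;
        for (int i = 1; i <= 8; i++) overall ^= (c >> i) & 1;

        int status = 0;
        if (syn && overall) { c ^= 1u << syn; status = 1; } /* fix bit  */
        else if (syn)       status = 2;                     /* 2 errors */
        else if (overall)   status = 1;                     /* p8 hit   */

        *d = (uint8_t)(((c >> 3) & 1) | ((c >> 5) & 1) << 1 |
                       ((c >> 6) & 1) << 2 | ((c >> 7) & 1) << 3);
        return status;
    }

    int main(void)
    {
        uint16_t w = secded_encode(0xB);
        w ^= 1u << 6;                     /* flip one bit in flight */
        uint8_t d;
        int s = secded_decode(w, &d);
        printf("status=%d data=0x%X\n", s, d);  /* status=1 data=0xB */
        return 0;
    }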
DRDY# (pin AA3; I/O, GTL+): The DRDY# signal is the data phase data-ready
signal. The data driver asserts DRDY# on each
data transfer, indicating valid data on the data
bus. In a multi-cycle data transfer, DRDY# can be
deasserted to insert idle clocks in the data phase.
During a line transfer, DRDY# is active for four
clocks. During a partial 1-to-8 byte transfer,
DRDY# is active for one clock. Except for the last
data clock during a data phase, DRDY# and
DBSY# must both be active together. If a data
transfer is exactly one clock, then the entire data
phase consists of one clock active DRDY# and
inactive DBSY#.
FERR# (pin C17; O, 3.3V): The FERR# signal is the PC compatibility group
floating-point error signal. The P6 processor
asserts FERR# when it detects an unmasked
floating-point error. FERR# is similar to the
ERROR# signal on the Intel387 coprocessor.
FERR# is included for compatibility with systems
using DOS-type floating-point error reporting.
FLUSH# (pin A15; I, 3.3V): When the FLUSH# input signal is asserted, the
P6 bus agent writes back all internal cache lines
in the modified state and invalidates all internal
cache lines. At the completion of a flush
operation, the P6 processor issues a flush
acknowledge transaction to indicate that the
cache flush operation is complete. The P6
processor stops caching any new data while the
FLUSH# signal remains asserted.
FLUSH# is an asynchronous input. However, to
guarantee recognition of this signal following an
I/O write instruction, FLUSH# must be valid along
with RS[2:0]# in the response phase of the
corresponding I/O Write bus transaction. In FRC
mode, FLUSH# must be synchronous to BCLK.
On active-to-inactive transition of RESET#, each
P6 bus agent samples FLUSH# signals to
determine its power-on configuration. Two clocks
after RESET# is sampled deasserted, these
signals begin normal operation.
FRCERR (pin C9; I/O, GTL+): The FRCERR signal is the error group
functional-redundancy-check error signal. If two
P6 processors are configured in an FRC pair, as
a single "logical" processor, then the checker
processor asserts FRCERR if it detects a
mismatch between its internally sampled outputs
and the master processor's outputs. The
checker's FRCERR output pin is connected to the
master's FRCERR input pin.
For point-to-point connections, the checker
always compares against the master's outputs.
For bussed single-driver signals, the checker
compares against the signal when the master is
the only allowed driver. For bussed multiple-driver
wire-OR signals, the checker compares against
the signal only if the master is expected to drive
the signal low.
FRCERR is also toggled during the P6
processor's reset action. A P6 processor asserts
FRCERR for approximately 1 second after
RESET#'s active-to-inactive transition if it executes
its built-in self-test (BIST). When BIST execution
completes, the P6 processor de-asserts FRCERR
if BIST completed successfully and continues to
assert FRCERR if BIST fails. If the P6 processor
does not execute the BIST action, then it keeps
FRCERR asserted for approximately 20 clocks
and then deasserts it.
HIT# (pin AC3; I/O, GTL+) and HITM# (pin AA7; I/O, GTL+): The HIT# and
HITM# signals are the snoop-hit and hit-modified signals. They are snoop
results asserted by any P6 bus agent in the snoop phase.
Any bus agent can assert both HIT# and HITM#
together for one clock in the snoop phase to
indicate that it requires a snoop stall. When a
stall condition is sampled, all bus agents extend
the snoop phase by two clocks. The stall can be
continued by reasserting HIT# and HITM#
together every other clock for one clock.
A caching agent must assert HITM# for one clock
in the snoop phase if the transaction hits a
modified line, and the snooping agent must
perform an implicit write-back to update main
memory. The snooping agent with the modified
line makes a transition to shared state if the
original transaction is read line or read partial,
otherwise it transitions to invalid state. A deferred
reply transaction may have HITM# asserted to
indicate the return of unexpected data.
A snooping agent must assert HIT# for one clock during the snoop phase if
the transaction does not hit a modified line in its write-back cache and
if, at the end of the transaction, it plans to keep the line in shared
state. Multiple caching agents can assert HIT# in the same snoop phase. If
the requesting agent observes HIT# active during the snoop phase, it cannot
cache the line in exclusive or modified state.

On observing a snoop stall, the agents asserting HIT# and HITM#
independently reassert the signal after one inactive clock so that the
correct snoop result is available in case the snoop phase terminates after
the two-clock extension.

IERR# (pin C3; O, 3.3V): The IERR# is the error group internal error
signal. A P6 processor asserts IERR# when it observes an internal error. It
keeps IERR# asserted until it is turned off as part of the machine check
error or the NMI handler in software, or with RESET#, BINIT#, and INIT#
assertion.

An internal error can be handled in several ways inside the processor based
on its power-on configuration. If MCE is enabled, IERR# causes an MCE
entry. IERR# can also be directed onto the BERR# pin to indicate an error.
Usually BERR# is sampled back by all processors to enter MCE, or it can be
redirected as an NMI by the central agent.

IGNNE# (pin A9; I, 3.3V): The IGNNE# signal is the PC compatibility group
ignore numeric error signal. If IGNNE# is asserted, the P6 processor
ignores a numeric error and continues to execute non-control floating-point
instructions. If IGNNE# is deasserted, the P6 processor freezes on a
non-control floating-point instruction if the previous instruction caused
an error.

IGNNE# has no effect when the NE bit in control register 0 is set. IGNNE#
is an asynchronous input. However, to guarantee recognition of this signal
following an I/O write instruction, IGNNE# must be valid along with
RS[2:0]# in the response phase of the corresponding I/O Write bus
transaction. In FRC mode, IGNNE# must be synchronous to BCLK. During active
RESET#, the P6 processor begins sampling the A20M# and IGNNE# values to
determine the ratio of core-clock frequency to bus-clock frequency. (See
the A20M# signal description for details.)
After the PLL-lock time, the core clock is
stabilized and locked to the external bus clock.
On the active-to-inactive transition of RESET#,
the P6 processor latches A20M# and IGNNE#
and freezes the frequency ratio internally. Normal
operation on the two signals continues two clocks
after sampling RESET# inactive.
INIT# (pin C11; I, 3.3V): The INIT# signal is the execution control group
initialization signal. Active INIT# input resets
integer registers inside all P6 processors without
affecting their internal (L1 or L2) caches or their
floating-point registers. Each P6 processor begins
execution at the power-on reset vector configured
during power-on configuration regardless of
whether INIT# has gone inactive. The processor
continues to handle snoop requests during INIT#
assertion.
INIT# can be used to help performance of DOS
extenders written for the Intel 80286 processor.
INIT# provides a method to switch from protected
mode to real mode while maintaining the contents
of the internal caches and floating-point state.
INIT# cannot be used in lieu of RESET# after
power-up.
On active-to-inactive transition of RESET#, each
P6 bus agent samples INIT# signals to determine
its power-on configuration. Two clocks after
RESET# is sampled deasserted these signals
begin normal operation.
INIT# is an asynchronous input. In FRC mode,
INIT# must be synchronous to BCLK.
INTR (pin AG43; I, 3.3V): The INTR signal is the interrupt request signal. It
is the power-on default state of the LINT0 signal
in the APIC group. The INTR input indicates that
an external interrupt has been generated. The
interrupt is maskable using the IF bit in the
EFLAGS register. If the IF bit is set, the P6
processor vectors to the interrupt handler after
the current instruction execution is completed.
Upon recognizing the interrupt request, the P6
processor issues a single interrupt acknowledge
(INTA) bus transaction. INTR must remain active
until the INTA bus transaction to guarantee its
recognition.
INTR is sampled on every rising BCLK edge.
INTR is an asynchronous input but recognition of
INTR is guaranteed in a specific clock if it is
asserted synchronously and meets the setup and
hold times. INTR must also be deasserted for a
minimum of two clocks to guarantee its inactive
recognition. In FRC mode, INTR must be
synchronous to BCLK.
LINT[1:0] (pins AG41, AG43; I, 3.3V): The LINT[1:0] signals are the
execution control group local interrupt signals. When the APIC is disabled,
the LINT0 signal defaults to the maskable interrupt request signal INTR and
LINT1 to a non-maskable interrupt (NMI). INTR and NMI are
backward compatible with the same signals in the
Pentium processor. Both signals are
asynchronous inputs. In FRC mode, LINT[1:0]
must be synchronous to BCLK. LINT[1:0] signals
need to be programmed when APIC is enabled.
During active RESET#, P6 begins sampling the
A20M# and IGNNE# values to determine the ratio
of core-clock frequency to bus-clock frequency.
After the PLL-lock time, the core clock is
stabilized and locked to the external bus clock.
On the active-to-inactive transition of RESET#,
P6 latches A20M# and IGNNE#, and internally
freezes the frequency ratio. Normal operation on
the two signals continues two clocks after
sampling RESET# inactive.
The LINT[1:0] pins are used for core-to-bus frequency ratio extensions of
future processors. Use the pins in the power-on configuration logic
similarly to the A20M# and IGNNE# pins.
LOCK# (pin AA9; I/O, GTL+): The LOCK# signal is the arbitration group bus
lock signal. For a locked transaction sequence,
LOCK# is asserted from the first transaction's
request phase through the last transaction's
response phase. A locked operation can be
prematurely aborted (and LOCK# deasserted) if
AERR# or DEFER# is asserted during the first
bus transaction of the sequence. The sequence
can also be prematurely aborted if a hard error
(such as a hard failure response or AERR#
assertion beyond the retry limit) occurs on any
one of the transactions during the locked
operation.
When the priority agent asserts BPRI# to
arbitrate for bus ownership, it waits until it
observes LOCK# deasserted. This enables
symmetric agents to retain bus ownership
throughout the bus locked operation and
guarantee the atomicity of lock. If AERR# is
asserted up to the retry limit during an ongoing
locked operation, the arbitration protocol ensures
that the lock owner receives the bus ownership
after arbitration logic is reset. This is
accomplished by requiring the lock owner to
reactivate its arbitration request one clock ahead
of other agents' arbitration request. LOCK# is
kept asserted throughout the arbitration reset
sequence.
NMI (pin AG41; I, 3.3V): The NMI signal is the non-maskable interrupt
signal. It is the default state of the LINT1 signal.
Asserting NMI causes an interrupt with an
internally supplied vector value of 2. An external
interrupt-acknowledge transaction is not
generated. If NMI is asserted during the execution
of an NMI service routine, it remains pending and
is recognized after the IRET is executed by the
NMI service routine. At most, one assertion of
NMI is held pending.
NMI is rising-edge sensitive. Recognition of NMI
is guaranteed in a specific clock if it is asserted
synchronously and meets the setup and hold
times. If asserted asynchronously, active and
inactive pulse widths must be a minimum of two
clocks. In FRC mode, NMI must be synchronous
to BCLK.
PICCLK (pin AA43; I, 5V): The PICCLK signal is the execution control group
APIC Clock signal. It is an input clock to the P6
processor for synchronous operation of the APIC
bus. PICCLK must be synchronous to BCLK in
FRC mode.
PICD[1:0] (pins AE21, AA41; I/O, 5V): The PICD[1:0] signals are the execution control
group APIC Data signals. They are used for
bidirectional serial message passing on the APIC
bus.
PLL[2:1] (pins C23, C19; Other): Isolated analog decoupling is required for
the internal phase lock loop (PLL). This should be equivalent to 0.1 uF of
ceramic capacitance across the PLL1 and PLL2 pins.
PRDY# (pin Y39; O, GTL+): The PRDY# signal is the system support group
probe ready signal. A P6 processor asserts PRDY#
to indicate that it has entered probe mode and
that its test access port (TAP) is ready to accept
a boundary scan or probe mode command.
PREQ# (pin AA45; I, 3.3V): The PREQ# signal is the system support group
probe request signal. Asserting PREQ# stops
normal P6 processor execution and places the P6
processor in probe mode, where it is capable of
executing probe instructions. Probe mode is
similar to ICE mode (in-circuit emulator mode) on
other Intel processors.
PWRGOOD (pin AG7; I, 3.3V): PWRGOOD is a 3.3V-tolerant input. This signal
is a clean indication that clocks and the system
3.3V, 5V and VCCP supplies are stable and within
their specifications. PWRGOOD can be driven
inactive at any time but power and clocks must
be stable before the rising edge of PWRGOOD.
REQ[4:0]# (pins W5, Y1, Y3, W7, W9; I/O, GTL+): The REQ[4:0]# signals are the request command
signals. They are asserted by the current bus
owner in both clocks of the request phase. In the
first clock, the REQa[4:0]# signals define the
transaction type to a level sufficient to begin a
snoop request. In the second clock, REQb[4:0]#
signals carry additional information to define the
complete transaction type. REQb[4:2]# is
reserved. REQb[1:0]# signals transmit LEN[1:0]#
(the data transfer length information). In both
clocks, REQ[4:0]# and ADS# are protected by
parity RP#.
All receiving agents observe the REQ[4:0]#
signals to determine the transaction type and
participate in the transaction as necessary.
RESET# (pin Y41; I, GTL+): The RESET# signal is the execution control group
reset signal. Asserting RESET# resets all P6
processors to known states and invalidates their
L1 and L2 caches without writing back modified
(M state) lines. RESET# must remain active for
one microsecond for a "warm" reset. For a power-on type reset, RESET# must stay active for at
least one millisecond after VCC and CLK have
reached their proper DC and AC specifications.
On observing active RESET#, all bus agents
must deassert their outputs within two clocks.
A number of bus signals are sampled at the active-to-inactive transition of
RESET# for the power-on configuration.

Unless its outputs are tristated during power-on configuration, after the
active-to-inactive transition of RESET#, the P6 processor optionally
executes its built-in self-test (BIST) and begins program execution at
reset vector 00_000F_FFF0H or 00_FFFF_FFF0H.

RP# (pin AC7; I/O, GTL+): The RP# signal is the request parity signal. The
request initiator drives it in both clocks of the request phase. RP#
provides parity protection on ADS# and REQ[4:0]#. When a P6 bus agent
observes an RP# parity error on any one of the two request phase clocks, it
must assert AERR# in the error phase, provided "AERR# drive" is enabled
during the power-on configuration. A correct parity signal is high if an
even number of covered signals are low. It is low if an odd number of
covered signals are low. This rule allows parity to be high when all
covered signals are high.

RS[2:0]# (pins AE7, AE5, AC9; I, GTL+): The RS[2:0]# signals are the
response status signals. They are driven by the response agent (the agent
responsible for completion of the transaction at the top of the in-order
queue). Assertion of RS[2:0]# to a non-zero value for one clock completes
the response phase for a transaction. The response encodings are shown
below. Only certain response combinations are valid, based on the snoop
result signaled during the transaction's snoop phase.
RS[2:0]#   HITM#   DEFER#   Description
000        NA      NA       Idle state.
001        0       0        Retry response. The transaction is canceled and
                            must be retried by the initiator.
010        0       1        Deferred response. The transaction is suspended;
                            the defer agent completes it with a deferred
                            reply.
011        0       1        Reserved.
100        X       X        Hard failure. The transaction received a hard
                            error; exception handling is required.
101        0       0        Normal without data.
110        1       X        Implicit write-back response. The snooping agent
                            transfers the modified cache line on the data
                            bus.
111        0       0        Normal with data.
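The encodings in the table decode mechanically; the following C sketch (names invented for illustration) maps an RS[2:0]# value to its meaning.

    /* Meaning of an RS[2:0]# response value (RS2# as bit 2). */
    static const char *decode_response(unsigned rs)
    {
        switch (rs & 7) {
        case 0: return "idle";
        case 1: return "retry: initiator must retry the transaction";
        case 2: return "deferred: defer agent completes it later";
        case 3: return "reserved";
        case 4: return "hard failure: exception handling required";
        case 5: return "normal without data";
        case 6: return "implicit write-back: snooper supplies the line";
        case 7: return "normal with data";
        }
        return "";   /* not reachable */
    }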
The RS[2:0]# assertion for a transaction is
initiated when all of the following conditions are
met:
• All bus agents have observed the snoop phase
completion of the transaction.
• The transaction is at the top of in-order queue.
• RS[2:0]# are sampled in the idle state.
The response driven depends on the type of transaction, as follows:
• The response agent returns a hard-failure response
for any transaction in which the response agent
observes a hard error.
• The response agent returns a normal with data
response for a read transaction with HITM# and
DEFER# deasserted in the snoop phase, when the
addressed agent is ready to return data and samples
inactive DBSY#.
• The response agent returns a normal without data
response for a write transaction with HITM# and
DEFER# deasserted in the snoop phase, when the
addressed agent samples TRDY# active and DBSY#
inactive, and it is ready to complete the transaction.
• The response agent must return an implicit write-back
response in the next clock for a read transaction with
HITM# asserted in the snoop phase, when the
addressed agent samples TRDY# active and DBSY#
inactive.
The addressed agent must return an implicit
write-back response in the clock after the
following sequence is sampled for a write
transaction with HITM# asserted:
• TRDY# active and DBSY# inactive
• followed by TRDY# inactive
• followed by TRDY# active and DBSY# inactive
The defer agent can return a deferred, retry, or
split response anytime for a read transaction with
HITM# deasserted and DEFER# asserted.
The defer agent returns deferred, retry, or split
response when it samples TRDY# active and
DBSY# inactive for a write transaction with
HITM# deasserted and DEFER# asserted.
SMI# (pin W1; I, 3.3V): The SMI# is the response system management
interrupt (SMI) signal. It is asserted by the
response agent in the response phase. The
addressed I/O agent asserts SMI# to signal a
synchronous I/O restart SMI in response to an I/O
transaction initiated by the processor to a
powered-down I/O device. On observing an active
SMI# during the response phase, the P6
processor saves the current state and enters
SMM mode. It issues an SMI acknowledge bus
transaction then begins program execution from
the SMM handler. It is not protected under parity
and is an optional signal if the system does not
support any synchronous I/O restart capability.
RSP# (pin U3; I, GTL+): The RSP# is the response parity signal. It is
driven by the response agent during the assertion
of RS[2:0]#. RSP# provides parity protection for
RS[2:0]#.
A correct parity signal is high if an even number
of covered signals are low. It is low if an odd number
of covered signals are low. During idle state of
RS[2:0]# (RS[2:0]#=000), RSP# is also high since
it is not driven by any agent, guaranteeing correct
parity.
P6 bus agents check RSP# at all times and if a
parity error is observed, treat it as a protocol
violation error. If the BINIT# driver is enabled
during configuration, the agent observing RSP#
parity error asserts BINIT#.
STPCLK# (pin A3; I, 3.3V): The STPCLK# is the stop clock signal. When
asserted, P6 enters a low power stop-clock state.
The processor issues a stop clock acknowledge
special transaction and stops sending internal
clock signals to all units except the bus unit and
the APIC unit. P6 continues to snoop bus
transactions and service interrupts in the stop
clock state. When STPCLK# is deasserted, P6
restarts its internal clock to all units and resumes
execution. The assertion of STPCLK# has no
effect on the bus clock. STPCLK# is an
asynchronous input. In FRC mode, STPCLK#
must be synchronous to BCLK.
TCK (pin A5; I, JTAG): The TCK is the system support group test clock
signal. TCK provides the clock input for the test
bus (also known as the test access port). TCK
must be connected to a clock to ensure
initialization of the JTAG support.
TDI (pin A13; I, JTAG): The TDI is the system support group test-data-in
signal. TDI transfers serial test data into the P6
processor. TDI provides the serial input needed
for JTAG support.
TDO (pin C13; O, JTAG): The TDO is the system support group test-data-
out signal. TDO transfers serial test data out from
the P6 processor. TDO provides the serial output
needed for JTAG support.
TESTHI (pins A23, A25, AE39; Other): TESTHI pins should be tied to VCCP. A
10K pull-up resistor may be used.
TESTLO (Other): TESTLO pins should be tied to VSS. A 1K pull-down resistor
may be used.

THERMTRIP# (pin A17; O, 3.3V): The P6 processor protects itself from
catastrophic overheating through an internal thermal sensor. This sensor is
set well above the normal operating temperature to ensure that there are no
false trips. The P6 stops all execution when the junction temperature
exceeds 135°C. This is signaled to the system by the THERMTRIP# pin. Once
activated, the signal remains latched and the P6 stays stopped until RESET#
goes active. There is no hysteresis built into the thermal sensor itself.
Provided the temperature has dropped below the trip level, a RESET# pulse
resets the P6 and execution continues. If the temperature has not dropped
below the trip level, the P6 continues to drive THERMTRIP# and remains
stopped.

TMS (pin C15; I, JTAG): The TMS is an additional system support group
JTAG-support signal.

TRDY# (pin Y9; I, GTL+): The TRDY# is the target ready signal. It is
asserted by the target in the response phase to indicate that the target is
ready to receive a write or implicit write-back data transfer. This enables
the request initiator or the snooping agent to begin the appropriate data
transfer. There is no data transfer after a TRDY# assertion if a write has
zero length in the request phase. The data transfer is optional if an
implicit write-back occurs for a transaction that writes a full cache line
(P6.0 DX performs the implicit write-back). For a write transaction, TRDY#
is driven by the addressed agent when:
• the transaction has a write or write-back data transfer
• it has a free buffer available to receive the write data
• there is a minimum of 3 clocks after ADS# for the transaction
• the transaction reaches the top of the in-order queue
• there is a minimum of 1 clock after RS[2:0]# active assertion for
  transaction "n-1" (after the transaction reaches the top of the in-order
  queue)
For an implicit write-back, TRDY# is driven by the addressed agent when:
• the transaction has an implicit write-back data transfer indicated in the
  snoop result phase
• it has a free cache line buffer to receive the cache line write-back; if
  the transaction also has a request-initiated transfer, the
  request-initiated TRDY# must have been asserted and then deasserted
  (TRDY# must be deasserted for at least one clock between the TRDY# for
  the write and the TRDY# for the implicit write-back)
• there is a minimum of 1 clock after RS[2:0]# active assertion for
  transaction "n-1" (after the transaction reaches the top of the in-order
  queue)
TRDY# for a write or an implicit write-back may
be deasserted when:
• inactive DBSY# and active TRDY# are observed
• DBSY# is observed inactive on the clock TRDY# is
asserted.
• a minimum of 3 clocks are guaranteed between two
TRDY# active-to-inactive transitions
• the response is driven on RS[2:0]#
• inactive DBSY# and active TRDY# are observed for a write, and TRDY# is
  required for an implicit write-back
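The write-case assertion rules amount to a conjunction of conditions, restated below as a C predicate. The structure and all field names are invented for illustration; a real target tracks these conditions in hardware.

    #include <stdbool.h>

    struct trdy_conditions {
        bool has_write_data;        /* write or write-back data transfer */
        bool free_buffer;           /* room to receive the write data    */
        int  clocks_since_ads;      /* clocks elapsed since ADS#         */
        bool top_of_ioq;            /* at top of the in-order queue      */
        int  clocks_since_prev_rs;  /* clocks since RS[2:0]# for "n-1"   */
    };

    static bool may_assert_trdy_for_write(const struct trdy_conditions *c)
    {
        return c->has_write_data
            && c->free_buffer
            && c->clocks_since_ads >= 3
            && c->top_of_ioq
            && c->clocks_since_prev_rs >= 1;
    }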
TRST# (pin A7; I, JTAG): The TRST# is an additional system support group
JTAG-support signal.
UP# (pin AG3; Other): The upgrade present signal is an open in the P6
processor and tied to VSS in the OverDrive processor. This prevents the
operation of voltage regulators that would supply a potentially harmful
voltage to the OverDrive processor. It also prevents contention between the
onboard regulator and the OverDrive processor VRM.
VCC5 (Power): VCC5 is used by the OverDrive processor.

VID[3:0]: The VID[3:0] are four voltage identification pins on the P6.
These pins support automatic selection of the power supply voltage. They
are not signals but are either an open circuit in the processor or a short
circuit to VSS.
The open and short circuits define the voltage
required by P6. This has been added to cleanly
support voltage specification variations on future
P6 processors. The following are the voltage
definitions of these pins:
VID[3:0]    Voltage Setting        VID[3:0]    Voltage Setting
0000        3.5                    1000        2.7
0001        3.4                    1001        2.6
0010        3.3                    1010        2.5
0011        3.2                    1011        2.4
0100        3.1                    1100        2.3
0101        3.0                    1101        2.2
0110        2.9                    1110        2.1
0111        2.8                    1111        No CPU
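The table follows a simple linear rule: each increment of the VID[3:0] code lowers the requested supply by 0.1 V from 3.5 V, and 1111 means no CPU is present. A C sketch of a decoder (illustrative only):

    #include <stdio.h>

    /* Supply voltage requested by a VID[3:0] code; 0.0 means no CPU. */
    static double vid_to_volts(unsigned vid)
    {
        if ((vid & 0xF) == 0xF)
            return 0.0;                 /* 1111: no CPU present */
        return 3.5 - 0.1 * (vid & 0xF);
    }

    int main(void)
    {
        printf("%.1f %.1f %.1f\n", vid_to_volts(0x0),  /* 3.5 */
               vid_to_volts(0x7),                      /* 2.8 */
               vid_to_volts(0xE));                     /* 2.1 */
        return 0;
    }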
2.2 Memory Interface Component (S82451GX)

The Memory Interface Component (MIC) provides part of a high-performance, low-cost memory
subsystem solution for P6.0-based systems by combining high-integration, high-performance
technology with an architecture that is capable of low-latency response and high throughput.
The MIC can connect directly to a Memory Controller with no external glue components. A typical
P6.0 system may be composed of one to four P6.0 processors and a PCI bridge. The system bus
is designed to support eight physical loads at 66.67 MHz; an additional bridge,
memory controller, or other custom attachments may be connected to the system bus.
Additional loads may be supported at a lower bus frequency.
The four Memory Interface Components are used to interface the Memory Controller data path with
the memory subsystem. Four MICs handle one quad-word of data between the Memory
Controller and memory. Three basic types of memory system are supported: a 4-way interleaved
DRAM system, a 2-way interleaved DRAM system, and a 1-way non-interleaved DRAM
system.
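As a rough illustration of interleaving (a hypothetical mapping, not the MIC's documented, configuration-dependent address decode), the C sketch below rotates consecutive 32-byte lines across the populated interleaves.

    #include <stdint.h>

    /* Hypothetical interleave select: consecutive 32-byte lines rotate
     * across 'ways' interleaves (4, 2, or 1). */
    static unsigned interleave_of(uint64_t phys_addr, unsigned ways)
    {
        return (unsigned)((phys_addr >> 5) % ways);
    }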
A rich set of features is provided by the MIC to meet the requirements of "state-of-the-art"
high-integration desktop and server systems. In addition to the memory configurations described above,
the MIC handles data that includes ECC, supports two back-to-back cache line writes for slow
DIMMs, and supports 3V/5V DIMMs. Power management features are included that
allow some I/O drivers to be put in standby when not in use.
2.2.1 Features
• Memory support
  • Support for 4-way interleaved conventional DRAM
  • Support for 2-way interleaved conventional DRAM
  • Support for 1-way non-interleaved conventional DRAM
  • Support for partial reads and partial writes
  • Support for part-line reads and writes
  • Any one of 4 interleaves can be populated
  • Programmable read rate
  • 3 Volt and 5 Volt DIMMs are supported
  • Standard 36-bit DIMMs are supported
• Device features
  • 144-pin PQFP
  • 0.5 um, 3.3 V CMOS gate array
  • Maximum power dissipation of TBD (< 1.5 W)
  • On-chip PLL
  • JTAG support
2.2.2 S82451GX Pin Diagram
Figure 2-4  S82451GX Pin Diagram
2.2.3 S82451GX Signal Descriptions
Table 2-3  S82451GX Signal Descriptions
MIC Control Interface Signals

MICCMD#[6:0] (pins 03-09; I): MIC Command. Used to receive a command from
the 82453GX to read data or write data, or to write MIC configuration data.

MICMWC# (pin 10; I): MIC Write Command. Command from the 82453GX to write
data held in the MIC to memory.

MIC Data Path Interface Signals

MDE[17:10], MDE[09:02], MDE[01:00] (pins 12-19, 22-29, 32-33; I/O): Memory
Data and ECC. ECC is computed over 64-bit data words. MDE[17:0] is one
fourth of a Quad-Word.

MD_RDY# (pin 11; I): Memory Data Ready. Asserted when input data on the
memory data bus is valid.

Memory Interface Signals

I0_D[17:16], I0_D[15:11], I0_D[10:07], I0_D[06:01] (pins 114-115, 117-121,
124-127, 129-134; I/O): Memory Data and ECC. ECC is computed over 64-bit
data words. I0_D[17:0] is one fourth of a Quad-Word that is connected to
interleave zero of the memory.

I1_D[17:13], I1_D[12:07], I1_D[06:03], I1_D[02:00] (pins 90-94, 96-101,
104-107, 111-113; I/O): Memory Data and ECC. ECC is computed over 64-bit
data words. I1_D[17:0] is one fourth of a Quad-Word that is connected to
interleave one of the memory.

I2_D[17:13], I2_D[12:07], I2_D[06:01], I2_D[00] (pins 66-70, 74-79, 81-86,
89; I/O): Memory Data and ECC. ECC is computed over 64-bit data words.
I2_D[17:0] is one fourth of a Quad-Word that is connected to interleave two
of the memory.

I3_D[17:12], I3_D[11:06], I3_D[05:00] (pins 43-48, 51-56, 58-63; I/O):
Memory Data and ECC. ECC is computed over 64-bit data words. I3_D[17:0] is
one fourth of a Quad-Word that is connected to interleave three of the
memory.

Clock Support Signals

BCLK (pin 39; I): PLL Reference Clock. This is the input to the device.

System Support Signals

MI_RST# (pin 2; I): System Reset Control.

Test Interface

TCLK (pin 142; I): JTAG Test Clock.
TDI (pin 137; I): JTAG Test Data In.
TDO (pin 139; O): JTAG Test Data Out.
TMS (pin 135; I): JTAG Test Mode Select.
TRST# (pin 141; I): JTAG Test Reset.
2.3 Data Path Chipset (S82452GX)
The S82452GX, together with S82453GX, provides a high-performance, low-cost memory
subsystem solution for P6.0-based systems by combining high-integration, high-performance
technology with an architecture that is capable of low-latency response and high throughput.
The S82452GX can connect directly to a P6.0 system bus with no external glue components. A
typical P6.0 system may be composed of one to four P6.0 processors and a PCI bridge. The
system bus is designed to support eight physical loads at 66.67 MHz; an
additional bridge, memory controller, or other custom attachments may be
connected to the system bus. Additional loads may be supported at lower bus
frequencies.
The S82452GX and S82453GX act as the interface between the P6.0 bus and the
system memory. The system supports three basic types of memory: a 4:1
interleaved DRAM system, a 2:1 interleaved DRAM system, and a
non-interleaved DRAM system. The 4:1 interleaved DRAM system supports a
maximum memory size of 4 GB using 64-Mbit technology. For the 2:1
interleaved and non-interleaved DRAM systems, the maximum memory sizes are
2 GB and 1 GB, respectively.
The S82452GX and S82453GX also have data integrity features that include ECC in the memory
array, support for memory scrubbing, and parity (control) and ECC (data) on the system bus.
These features, as well as a set of error reporting mechanisms, can be selected by configuring the
S82452GX and S82453GX.
2.3.1 Features
• Processor support
  • Full support for the 64-bit P6.0 bus operating at 50.0 to 66.67 MHz (15 ns).
  • ECC protection for the P6.0 data bus.
  • Parity protection for the P6.0 control bus.
  • Support for 36-bit addresses.
  • 8- or 1-deep in-order queue; 4-deep request queue.
  • Four cache-line read buffers, four cache-line write buffers.
  • Multiprocessor support (snarfing).
  • Support for third-party defer of transactions.
  • GTL+ bus driver technology.
• Memory support
  • Support for up to 4 GB of 4-way interleaved conventional DRAM, per controller, using 64-Mbit technology DRAMs.
  • Support for 2-way interleaved and non-interleaved conventional DRAM.
  • Support for 4-Mbit, 16-Mbit, and 64-Mbit devices.