2nd Generation Intel® Core™ Processor Family Mobile
Datasheet – Volume 1

Supporting Intel® Core™ i7 Mobile Extreme Edition Processor Series and
Intel® Core™ i5 and i7 Mobile Processor Series

This is Volume 1 of 2

January 2011

Document Number: 324692-001
Legal Lines and Disclaimers
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked “reserved” or “undefined.” Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
This document contains information on products in the design phase of development.
All products, platforms, dates, and figures specified are preliminary based on current expectations, and are subject to change without notice. All dates specified are target dates, are provided for planning purposes only and are subject to change.
This document contains information on products in the design phase of development. Do not finalize a design with this information. Revised information will be published when the product is available. Verify with your local sales office that you have the latest datasheet before finalizing a design.
Enabling Execute Disable Bit functionality requires a PC with a processor with Execute Disable Bit capability and a supporting operating system. Check with your PC manufacturer on whether your system delivers Execute Disable Bit functionality.
Enhanced Intel SpeedStep® Technology; See the Processor Spec Finder or contact your Intel representative for more information.
64-bit computing on Intel architecture requires a computer system with a processor, chipset, BIOS, operating system, device drivers and applications enabled for Intel® 64 architecture. Performance will vary depending on your hardware and software configurations. Consult with your system vendor for more information.
No computer system can provide absolute security under all conditions. Intel® Trusted Execution Technology (Intel® TXT) requires a computer system with Intel® Virtualization Technology, an Intel TXT-enabled processor, chipset, BIOS, Authenticated Code Modules and an Intel TXT-compatible measured launched environment (MLE). The MLE could consist of a virtual machine monitor, an OS or an application. In addition, Intel TXT requires the system to contain a TPM v1.2, as defined by the Trusted Computing Group and specific software for some uses. For more information, see http://www.intel.com/technology/security/
Intel® Virtualization Technology requires a computer system with an enabled Intel® processor, BIOS, virtual machine monitor (VMM) and, for some uses, certain computer system software enabled for it. Functionality, performance or other benefits will vary depending on hardware and software configurations and may require a BIOS update. Software applications may not be compatible with all operating systems. Please check with your application vendor.
Intel® Active Management Technology requires the computer system to have an Intel® AMT-enabled chipset, network hardware and software, as well as connection with a power source and a corporate network connection. Setup requires configuration by the purchaser and may require scripting with the management console or further integration into existing security frameworks to enable certain functionality. It may also require modification of or implementation of new business processes. With regard to notebooks, Intel AMT may not be available or certain capabilities may be limited over a host OS-based VPN or when connecting wirelessly, on battery power, sleeping, hibernating or powered off. For more information, see http://www.intel.com/technology/platform-technology/intel-amt/
Intel® Turbo Boost Technology requires a PC with a processor with Intel Turbo Boost Technology capability. Intel Turbo Boost Technology performance varies depending on hardware, software and overall system configuration. Check with your PC manufacturer on whether your system delivers Intel Turbo Boost Technology. For more information, see http://www.intel.com/technology/turboboost.
Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. See www.intel.com/products/processor_number for details.
Code names featured are used internally within Intel to identify products that are in development and not yet publicly announced for release. Customers, licensees and other third parties are not authorized by Intel to use code names in advertising, promotion or marketing of any product or services and any such use of Intel's internal code names is at the sole risk of the user.
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2011, Intel Corporation. All rights reserved.
Contents
1 Introduction ............................................................................................................ 11
1.1 Processor Feature Details ................................................................................... 13
1.1.1 Supported Technologies .......................................................................... 13
1.2 Interfaces ........................................................................................................ 13
1.2.1 System Memory Support ......................................................................... 13
1.2.2 PCI Express* ......................................................................................... 14
1.2.3 Direct Media Interface (DMI).................................................................... 15
1.2.4 Platform Environment Control Interface (PECI)........................................... 16
1.2.5 Processor Graphics ................................................................................. 16
1.2.6 Embedded DisplayPort* (eDP*)................................................................ 17
1.2.7 Intel® Flexible Display Interface (Intel® FDI) ............................................. 17
1.3 Power Management Support ............................................................................... 17
1.3.1 Processor Core....................................................................................... 17
1.3.2 System ................................................................................................. 17
1.3.3 Memory Controller.................................................................................. 17
1.3.4 PCI Express* ......................................................................................... 17
1.3.5 DMI...................................................................................................... 17
1.3.6 Processor Graphics Controller................................................................... 18
1.4 Thermal Management Support ............................................................................ 18
1.5 Package ........................................................................................................... 18
1.6 Terminology ..................................................................................................... 18
1.7 Related Documents ........................................................................................... 21
2 Interfaces .............................................................................................................. 23
2.1 System Memory Interface .................................................................................. 23
2.1.1 System Memory Technology Supported ..................................................... 23
2.1.2 System Memory Timing Support............................................................... 24
2.1.3 System Memory Organization Modes......................................................... 24
2.1.3.1 Single-Channel Mode................................................................. 24
2.1.3.2 Dual-Channel Mode – Intel® Flex Memory Technology Mode ........... 24
2.1.4 Rules for Populating Memory Slots............................................................ 25
2.1.5 Technology Enhancements of Intel® Fast Memory Access (Intel® FMA).......... 26
2.1.5.1 Just-in-Time Command Scheduling.............................................. 26
2.1.5.2 Command Overlap .................................................................... 26
2.1.5.3 Out-of-Order Scheduling............................................................ 26
2.1.6 Memory Type Range Registers (MTRRs) Enhancement................................. 26
2.1.7 Data Scrambling .................................................................................... 26
2.1.8 DRAM Clock Generation........................................................................... 26
2.2 PCI Express* Interface....................................................................................... 27
2.2.1 PCI Express* Architecture ....................................................................... 27
2.2.1.1 Transaction Layer ..................................................................... 28
2.2.1.2 Data Link Layer ........................................................................ 28
2.2.1.3 Physical Layer .......................................................................... 28
2.2.2 PCI Express* Configuration Mechanism ..................................................... 29
2.2.3 PCI Express Graphics.............................................................................. 29
2.2.4 PCI Express Lanes Connection.................................................................. 30
2.3 Direct Media Interface (DMI)............................................................................... 30
2.3.1 DMI Error Flow....................................................................................... 30
2.3.2 Processor/PCH Compatibility Assumptions.................................................. 30
2.3.3 DMI Link Down ...................................................................................... 31
2.4 Processor Graphics Controller (GT) ...................................................................... 31
2.4.1 3D and Video Engines for Graphics Processing ............................................32
2.4.1.1 3D Engine Execution Units..........................................................32
2.4.1.2 3D Pipeline...............................................................................32
2.4.1.3 Video Engine ............................................................................33
2.4.1.4 2D Engine ................................................................................33
2.4.2 Processor Graphics Display ......................................................................34
2.4.2.1 Display Planes ..........................................................................34
2.4.2.2 Display Pipes ............................................................................35
2.4.2.3 Display Ports ............................................................................35
2.4.2.4 Embedded DisplayPort (eDP) ......................................................35
2.4.3 Intel Flexible Display Interface..................................................................35
2.4.4 Multi-Graphics Controller Multi-Monitor Support ..........................................36
2.5 Platform Environment Control Interface (PECI) ......................................................36
2.6 Interface Clocking..............................................................................................36
2.6.1 Internal Clocking Requirements ................................................................36
3 Technologies............................................................................................................37
3.1 Intel® Virtualization Technology ..........................................................................37
3.1.1 Intel® VT-x Objectives ............................................................................37
3.1.2 Intel® VT-x Features...............................................................................38
3.1.3 Intel® VT-d Objectives ............................................................................38
3.1.4 Intel® VT-d Features...............................................................................39
3.1.5 Intel® VT-d Features Not Supported..........................................................39
3.2 Intel® Trusted Execution Technology (Intel® TXT) .................................................40
3.3 Intel® Hyper-Threading Technology .....................................................................40
3.4 Intel® Turbo Boost Technology ............................................................................41
3.4.1 Intel® Turbo Boost Technology Frequency...................................................41
3.4.2 Intel® Turbo Boost Technology Graphics Frequency.....................................42
3.5 Intel® Advanced Vector Extensions (AVX) .............................................................42
3.6 Advanced Encryption Standard New Instructions (AES-NI) ......................................42
3.6.1 PCLMULQDQ Instruction ..........................................................................43
3.7 Intel® 64 Architecture x2APIC .............................................................................43
4 Power Management .................................................................................................45
4.1 ACPI States Supported .......................................................................................45
4.1.1 System States........................................................................................45
4.1.2 Processor Core/Package Idle States...........................................................45
4.1.3 Integrated Memory Controller States.........................................................46
4.1.4 PCIe Link States .....................................................................................46
4.1.5 DMI States ............................................................................................46
4.1.6 Processor Graphics Controller States .........................................................46
4.1.7 Interface State Combinations ...................................................................47
4.2 Processor Core Power Management......................................................................48
4.2.1 Enhanced Intel® SpeedStep® Technology ..................................................48
4.2.2 Low-Power Idle States.............................................................................48
4.2.3 Requesting Low-Power Idle States ............................................................50
4.2.4 Core C-states .........................................................................................51
4.2.4.1 Core C0 State ...........................................................................51
4.2.4.2 Core C1/C1E State ....................................................................51
4.2.4.3 Core C3 State ...........................................................................51
4.2.4.4 Core C6 State ...........................................................................51
4.2.4.5 Core C7 State ...........................................................................51
4.2.4.6 C-State Auto-Demotion..............................................................52
4.2.5 Package C-States ...................................................................................52
4.2.5.1 Package C0 ..............................................................................53
4.2.5.2 Package C1/C1E........................................................................54
4.2.5.3 Package C3 State ......................................................................54
4.2.5.4 Package C6 State...................................................................... 54
4.2.5.5 Package C7 State...................................................................... 55
4.2.5.6 Dynamic L3 Cache Sizing ........................................................... 55
4.3 IMC Power Management..................................................................................... 55
4.3.1 Disabling Unused System Memory Outputs ................................................ 55
4.3.2 DRAM Power Management and Initialization............................................... 56
4.3.2.1 Initialization Role of CKE............................................................ 57
4.3.2.2 Conditional Self-Refresh ............................................................ 57
4.3.2.3 Dynamic Power-down Operation ................................................. 58
4.3.2.4 DRAM I/O Power Management.................................................... 58
4.4 PCIe* Power Management .................................................................................. 58
4.5 DMI Power Management..................................................................................... 58
4.6 Graphics Power Management .............................................................................. 59
4.6.1 Intel® Rapid Memory Power Management (RMPM) (also known as CxSR)....... 59
4.6.2 Intel® Graphics Performance Modulation Technology (GPMT) ........................ 59
4.6.3 Graphics Render C-State ......................................................................... 59
4.6.4 Intel® Smart 2D Display Technology (Intel® S2DDT) .................................. 59
4.6.5 Intel® Graphics Dynamic Frequency.......................................................... 60
4.6.6 Display Power Savings Technology 6.0 (DPST) ........................................... 60
4.6.7 Automatic Display Brightness (ADB) ......................................................... 60
4.6.8 Seamless Display Refresh Rate Switching Technology (SDRRST)................... 61
4.7 Thermal Power Management............................................................................... 61
5 Thermal Management.............................................................................................. 63
5.1 Thermal Design Power (TDP) and Junction Temperature (Tj) ................................... 63
5.2 Thermal Considerations...................................................................................... 63
5.2.1 Intel® Turbo Boost Technology Power Control and Reporting........................ 64
5.2.2 Package Power Control............................................................................ 65
5.2.3 Power Plane Control................................................................................ 65
5.2.4 Turbo Time Parameter ............................................................................ 65
5.3 Thermal and Power Specifications........................................................................ 66
5.4 Thermal Management Features ........................................................................... 70
5.4.1 Processor Package Thermal Features......................................................... 70
5.4.1.1 Adaptive Thermal Monitor .......................................................... 70
5.4.1.2 Digital Thermal Sensor .............................................................. 72
5.4.1.3 PROCHOT# Signal..................................................................... 73
5.4.2 Processor Core Specific Thermal Features.................................................. 75
5.4.2.1 On-Demand Mode..................................................................... 75
5.4.3 Memory Controller Specific Thermal Features ............................................. 76
5.4.3.1 Programmable Trip Points .......................................................... 76
5.4.4 Platform Environment Control Interface (PECI)........................................... 76
5.4.4.1 Fan Speed Control with Digital Thermal Sensor ............................. 76
6 Signal Description ................................................................................................... 77
6.1 System Memory Interface .................................................................................. 78
6.2 Memory Reference and Compensation.................................................................. 79
6.3 Reset and Miscellaneous Signals.......................................................................... 80
6.4 PCI Express* Based Interface Signals................................................................... 80
6.5 Embedded DisplayPort (eDP) .............................................................................. 81
6.6 Intel® Flexible Display Interface Signals ............................................................... 81
6.7 DMI................................................................................................................. 81
6.8 PLL Signals....................................................................................................... 82
6.9 TAP Signals ...................................................................................................... 82
6.10 Error and Thermal Protection .............................................................................. 83
6.11 Power Sequencing ............................................................................................. 83
6.12 Processor Power Signals..................................................................................... 84
6.13 Sense Pins........................................................................................................84
6.14 Ground and NCTF ..............................................................................................85
6.15 Future Compatibility...........................................................................................85
6.16 Processor Internal Pull Up/Pull Down ....................................................................85
7 Electrical Specifications ...........................................................................................87
7.1 Power and Ground Pins.......................................................................................87
7.2 Decoupling Guidelines ........................................................................................87
7.2.1 Voltage Rail Decoupling ...........................................................................87
7.2.2 PLL Power Supply ...................................................................................87
7.3 Voltage Identification (VID).................................................................................88
7.4 System Agent (SA) VCC VID ................................................................................92
7.5 Reserved or Unused Signals ................................................................................92
7.6 Signal Groups ...................................................................................................93
7.7 Test Access Port (TAP) Connection .......................................................................95
7.8 Storage Condition Specifications ..........................................................................95
7.9 DC Specifications...............................................................................................96
7.9.1 Voltage and Current Specifications ............................................................97
7.10 Platform Environmental Control Interface (PECI) DC Specifications ......................... 103
7.10.1 PECI Bus Architecture............................................................................103
7.10.2 PECI DC Characteristics ......................................................................... 104
7.10.3 Input Device Hysteresis ......................................................................... 105
8 Processor Pin and Signal Information .................................................................... 107
8.1 Processor Pin Assignments ............................................................................... 107
8.2 Package Mechanical Information ........................................................................ 157
9 DDR Data Swizzling................................................................................................169
Figures
1-1 2nd Generation Intel® Core™ Extreme Edition Processor Family Mobile
Platform ................................................................................................................ 12
2-1 Intel® Flex Memory Technology Operation ............................................................. 25
2-2 PCI Express* Layering Diagram ................................................................................ 27
2-3 Packet Flow through the Layers ................................................................................ 28
2-4 PCI Express* Related Register Structures in the Processor............................................ 29
2-5 PCIe Typical Operation 16 lanes Mapping ................................................................... 30
2-6 Processor Graphics Controller Unit Block Diagram ....................................................... 31
2-7 Processor Display Block Diagram............................................................................... 34
4-1 Idle Power Management Breakdown of the Processor Cores .......................................... 49
4-2 Thread and Core C-State Entry and Exit..................................................................... 49
4-3 Package C-State Entry and Exit ................................................................................ 53
5-1 Package Power Control ............................................................................................ 65
5-2 Frequency and Voltage Ordering ............................................................................... 71
7-1 Example for PECI Host-clients Connection ................................................................ 104
7-2 Input Device Hysteresis ......................................................................................... 105
8-1 rPGA988B (Socket-G2) Pinmap (Top View, Upper-Left Quadrant) ................................ 108
8-2 rPGA988B (Socket-G2) Pinmap (Top View, Upper-Right Quadrant) .............................. 109
8-3 rPGA988B (Socket-G2) Pinmap (Top View, Lower-Left Quadrant) ................................ 110
8-4 rPGA988B (Socket-G2) Pinmap (Top View, Lower-Right Quadrant) .............................. 111
8-5 BGA1224 Ballmap (Top View, Upper-Left Quadrant) .................................................. 123
8-6 BGA1224 Ballmap (Top View, Upper-Right Quadrant) ................................................ 124
8-7 BGA1224 Ballmap (Top View, Lower-Left Quadrant) .................................................. 125
8-8 BGA1224 Ballmap (Top View, Lower-Right Quadrant) ................................................ 126
8-9 BGA1023 Ballmap (Top View, Upper-Left Quadrant) .................................................. 142
8-10 BGA1023 Ballmap (Top View, Upper-Right Quadrant) ................................................ 143
8-11 BGA1023 Ballmap (Top View, Lower-Left Quadrant) .................................................. 144
8-12 BGA1023 Ballmap (Top View, Lower-Right Quadrant) ................................................ 145
8-13 Processor rPGA988B 2C (GT2) Mechanical Package (Sheet 1 of 2) ............................... 157
8-14 Processor rPGA988B 2C (GT2) Mechanical Package (Sheet 2 of 2) ............................... 158
8-15 Processor rPGA988B 4C (GT2) Mechanical Package (Sheet 1 of 2) ............................... 159
8-16 Processor rPGA988B 4C (GT2) Mechanical Package (Sheet 2 of 2) ............................... 160
8-17 Processor BGA1023 2C (GT2) Mechanical Package (Sheet 1 of 2) ................................ 161
8-18 Processor BGA1023 2C (GT2) Mechanical Package (Sheet 2 of 2) ................................ 162
8-19 Processor BGA1224 4C (GT2) Mechanical Package (Sheet 1 of 2) ................................ 163
8-20 Processor BGA1224 4C (GT2) Mechanical Package (Sheet 2 of 2) ................................ 164
8-21 Processor rPGA988B 2C (GT1) Mechanical Package (Sheet 1 of 2) ............................... 165
8-22 Processor rPGA988B 2C (GT1) Mechanical Package (Sheet 2 of 2) ............................... 166
8-23 Processor BGA1023 2C (GT1) Mechanical Package (Sheet 1 of 2) ................................ 167
8-24 Processor BGA1023 2C (GT1) Mechanical Package (Sheet 2 of 2) ................................ 168
Tables
1-1 PCIe Supported Configurations in Mobile Products.......................................................14
1-2 Related Documents.................................................................................................21
2-1 Supported SO-DIMM Module Configurations 1,2 ..........................................................23
2-2 DDR3 System Memory Timing Support ......................................................................24
2-3 Reference Clock......................................................................................................36
4-1 System States........................................................................................................45
4-2 Processor Core/Package State Support ......................................................................45
4-3 Integrated Memory Controller States.........................................................................46
4-4 PCIe Link States .....................................................................................................46
4-5 DMI States ............................................................................................................46
4-6 Processor Graphics Controller States .........................................................................46
4-7 G, S, and C State Combinations................................................................................47
4-8 D, S, and C State Combination .................................................................................47
4-9 Coordination of Thread Power States at the Core Level ................................................50
4-10 P_LVLx to MWAIT Conversion...................................................................................50
4-11 Coordination of Core Power States at the Package Level ..............................................53
4-12 Targeted Memory State Conditions............................................................................58
5-1 TDP Specifications ..................................................................................................67
5-2 Junction Temperature Specification ...........................................................................67
5-3 Package Turbo Parameters.......................................................................................68
5-4 Idle Power Specifications .........................................................................................69
6-1 Signal Description Buffer Types ................................................................................77
6-2 Memory Channel A..................................................................................................78
6-3 Memory Channel B..................................................................................................79
6-4 Memory Reference and Compensation .......................................................................79
6-5 Reset and Miscellaneous Signals ...............................................................................80
6-6 PCI Express* Graphics Interface Signals ....................................................................80
6-7 Embedded Display Port Signals.................................................................................81
6-8 Intel® Flexible Display Interface...............................................................................81
6-9 DMI - Processor to PCH Serial Interface .....................................................................81
6-10 PLL Signals ............................................................................................................82
6-11 TAP Signals............................................................................................................82
6-12 Error and Thermal Protection....................................................................................83
6-13 Power Sequencing ..................................................................................................83
6-14 Processor Power Signals ..........................................................................................84
6-15 Sense Pins.............................................................................................................84
6-16 Ground and NCTF ...................................................................................................85
6-17 Future Compatibility................................................................................................85
6-18 Processor Internal Pull Up/Pull Down .........................................................................85
7-1 IMVP7 Voltage Identification Definition ......................................................................89
7-2 VCCSA_VID configuration ........................................................................................92
7-3 Signal Groups1.......................................................................................................93
7-4 Storage Condition Ratings........................................................................................96
7-5 Processor Core (VCC) Active and Idle Mode DC Voltage and Current Specifications ..........97
7-6 Processor Uncore (VCCIO) Supply DC Voltage and Current Specifications .......................98
7-7 Memory Controller (VDDQ) Supply DC Voltage and Current Specifications ......................99
7-8 System Agent (VCCSA) Supply DC Voltage and Current Specifications ...........................99
7-9 Processor PLL (VCCPLL) Supply DC Voltage and Current Specifications...........................99
7-10 Processor Graphics (VAXG) Supply DC Voltage and Current Specifications .................... 100
7-11 DDR3 Signal Group DC Specifications ......................................................................101
7-12 Control Sideband and TAP Signal Group DC Specifications ..........................................102
7-13 PCI Express DC Specifications ................................................................................102
7-14 eDP DC Specifications ...........................................................................................103
7-15 PECI DC Electrical Limits ....................................................................................... 104
8-1 rPGA988B Processor Pin List by Pin Name................................................................ 112
8-2 BGA1224 Processor Ball List by Ball Name ............................................................... 127
8-3 BGA1023 Processor Ball List by Ball Name ............................................................... 146
9-1 DDR Data Swizzling Table – Channel A.................................................................... 170
9-2 DDR Data Swizzling Table – Channel B.................................................................... 171
Revision History
Revision Number    Description          Date
001                • Initial Release    January 2011
§ §

1 Introduction

The 2nd Generation Intel® Core™ processor family mobile is the next generation of 64-bit, multi-core mobile processors built on 32-nanometer process technology. Based on a new micro-architecture, the processor is designed for a two-chip platform consisting of a processor and a Platform Controller Hub (PCH). The platform enables higher performance, lower cost, easier validation, and improved x-y footprint. The processor includes an Integrated Display Engine, Processor Graphics, and an Integrated Memory Controller, and is designed for mobile platforms. The processor comes with either 6 or 12 Processor Graphics execution units (EUs). The processor may be offered in an rPGA988B, BGA1224, or BGA1023 package. Figure 1-1 shows an example platform block diagram.
This document provides DC electrical specifications, signal integrity, differential signaling specifications, pinout and signal definitions, interface functional descriptions, thermal specifications, and additional feature information pertinent to the implementation and operation of the processor on its respective platform.
Note: Throughout this document, the 2nd Generation Intel® Core™ processor family mobile may be referred to simply as “processor”.
Note: Throughout this document, the Intel® Core™ i7 Extreme Edition mobile processor series refers to the Intel® Core™ i7-2920XM processor.
Note: Throughout this document, the Intel® Core™ i7 mobile processor series refers to the Intel® Core™ i7-2820QM, i7-2720QM, and i7-2620M processors.
Note: Throughout this document, the Intel® Core™ i5 mobile processor series refers to the Intel® Core™ i5-2540M and i5-2520M processors.
Note: Throughout this document, the Intel® 6 Series Chipset Platform Controller Hub may also be referred to as “PCH”.
Note: Throughout this document, the 2nd Generation Intel® Core™ processor family desktop may be referred to as simply the processor.
Note: Some processor features are not available on all platforms. Refer to the processor
specification update for details.
Figure 1-1. 2nd Generation Intel® Core™ Extreme Edition Processor Family Mobile Platform

[Figure: platform block diagram. The processor connects to the Platform Controller Hub (PCH) through DMI2 x4, the Intel® Flexible Display Interface, and PECI. Processor interfaces: DDR3, PCI Express* 2.0 (1 x16 or 2 x8) for discrete graphics (PEG), and Embedded DisplayPort. PCH interfaces: 8 PCI Express* 2.0 x1 ports (5 GT/s), Serial ATA, USB 2.0, Intel® HD Audio, Gigabit network connection, analog CRT, digital display x3, LVDS flat panel, SPI flash x2, FWH, LPC/Super I/O, SMBus 2.0, GPIO, WiFi/WiMax (Controller Link 1), and the Intel® Management Engine.]

1.1 Processor Feature Details

• Four or two execution cores
• A 32-KB instruction and 32-KB data first-level cache (L1) for each core
• A 256-KB shared instruction/data second-level cache (L2) for each core
• Up to 8-MB shared instruction/data third-level cache (L3), shared among all cores

1.1.1 Supported Technologies

• Intel® Virtualization Technology for Directed I/O (Intel® VT-d)
• Intel® Virtualization Technology (Intel® VT-x)
• Intel® Active Management Technology 7.0 (Intel® AMT 7.0)
• Intel® Trusted Execution Technology (Intel® TXT)
• Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
• Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2)
• Intel® Hyper-Threading Technology
• Intel® 64 Architecture
• Execute Disable Bit
• Intel® Turbo Boost Technology
• Intel® Advanced Vector Extensions (Intel® AVX)
• Advanced Encryption Standard New Instructions (AES-NI)
• PCLMULQDQ Instruction

1.2 Interfaces

1.2.1 System Memory Support

• Two channels of DDR3 memory with a maximum of one SO-DIMM per channel
• Single-channel and dual-channel memory organization modes
• Data burst length of eight for all memory organization modes
• Memory DDR3 data transfer rates of 1066 MT/s, 1333 MT/s, and 1600 MT/s
• 64-bit wide channels
• DDR3 I/O Voltage of 1.5 V
• Non-ECC, unbuffered DDR3 SO-DIMMs only
• Theoretical maximum memory bandwidth (see the worked example following this feature list) of:
— 17.1 GB/s in dual-channel mode assuming DDR3 1066 MT/s
— 21.3 GB/s in dual-channel mode assuming DDR3 1333 MT/s
— 25.6 GB/s in dual-channel mode assuming DDR3 1600 MT/s
• 1Gb, 2Gb, and 4Gb DDR3 DRAM technologies are supported for x8 and x16
devices.
— Using 4Gb device technologies, the largest memory capacity possible is 16 GB,
assuming dual-channel mode with two x8, dual-ranked, un-buffered, non-ECC, SO-DIMM memory configuration.
• Up to 32 simultaneous open pages, 16 per channel (assuming 4 Ranks of 8 Bank Devices)
• Memory organizations
— Single-channel modes
— Dual-channel modes - Intel® Flex Memory Technology:
- Dual-channel symmetric (Interleaved)
• Command launch modes of 1n/2n
• On-Die Termination (ODT)
• Asynchronous ODT
•Intel® Fast Memory Access (Intel® FMA)
— Just-in-Time Command Scheduling
—Command Overlap
— Out-of-Order Scheduling
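
Note: The following example is illustrative and is not part of this specification. It is a minimal C sketch showing how the dual-channel bandwidth figures listed above follow directly from the 64-bit channel width and the DDR3 transfer rate; the function name and structure are assumptions for illustration only.

```c
#include <stdio.h>

/* Peak DDR3 bandwidth in GB/s: each channel is 64 bits (8 bytes) wide and
 * completes one transfer per MT/s. GB follows the datasheet convention of
 * 10^9 bytes. */
static double ddr3_peak_gbs(unsigned transfer_rate_mts, unsigned channels)
{
    const unsigned bytes_per_transfer = 8;   /* 64-bit channel */
    return (double)transfer_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9;
}

int main(void)
{
    const unsigned rates[] = { 1066, 1333, 1600 };
    for (unsigned i = 0; i < 3; i++)
        printf("DDR3-%u dual-channel: %.1f GB/s\n",
               rates[i], ddr3_peak_gbs(rates[i], 2));
    return 0;   /* prints 17.1, 21.3, and 25.6 GB/s respectively */
}
```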

1.2.2 PCI Express*

• The PCI Express* port(s) are fully-compliant to the PCI Express Base Specification, Revision 2.0.
• Processor with mobile PCH supported configurations (see Table 1-1)

Table 1-1. PCIe Supported Configurations in Mobile Products

Configuration    Mobile
1x8 2x4          GFX, I/O
2x8              GFX, I/O
1x16             GFX, I/O
• The port may negotiate down to narrower widths
— Support for x16/x8/x4/x1 widths for a single PCI Express mode
• 2.5 GT/s and 5.0 GT/s PCI Express* frequencies are supported
• Gen1 Raw bit-rate on the data pins of 2.5 GT/s, resulting in a real bandwidth per pair of 250 MB/s given the 8b/10b encoding used to transmit data across this interface. This also does not account for packet overhead and link maintenance.
• Maximum theoretical bandwidth on the interface of 4 GB/s in each direction simultaneously, for an aggregate of 8 GB/s when x16 Gen 1
• Gen 2 Raw bit-rate on the data pins of 5.0 GT/s, resulting in a real bandwidth per pair of 500 MB/s given the 8b/10b encoding used to transmit data across this interface. This also does not account for packet overhead and link maintenance.
• Maximum theoretical bandwidth on the interface of 8 GB/s in each direction simultaneously, for an aggregate of 16 GB/s when x16 Gen 2 (a worked calculation follows this feature list)
• Hierarchical PCI-compliant configuration mechanism for downstream devices
• Traditional PCI style traffic (asynchronous snooped, PCI ordering)
• PCI Express* extended configuration space. The first 256 bytes of configuration space aliases directly to the PCI Compatibility configuration space. The remaining portion of the fixed 4-KB block of memory-mapped space above that (starting at 100h) is known as extended configuration space.
• PCI Express* Enhanced Access Mechanism; accessing the device configuration space in a flat memory mapped fashion
• Automatic discovery, negotiation, and training of link out of reset
• Traditional AGP style traffic (asynchronous non-snooped, PCI-X Relaxed ordering)
• Peer segment destination posted write traffic (no peer-to-peer read traffic) in Virtual Channel 0
— DMI -> PCI Express* Port 0
— DMI -> PCI Express* Port 1
— PCI Express* Port 0 -> DMI
— PCI Express* Port 1 -> DMI
• 64-bit downstream address format, but the processor never generates an address above 64 GB (Bits 63:36 will always be zeros)
• 64-bit upstream address format, but the processor responds to upstream read transactions to addresses above 64 GB (addresses where any of Bits 63:36 are nonzero) with an Unsupported Request response. Upstream write transactions to addresses above 64 GB will be dropped.
• Re-issues Configuration cycles that have been previously completed with the Configuration Retry status
• PCI Express* reference clock is 100-MHz differential clock
• Power Management Event (PME) functions
• Dynamic width capability
• Message Signaled Interrupt (MSI and MSI-X) messages
• Polarity inversion
• Static lane numbering reversal
— Dynamic lane numbering reversal, as defined (optional) by the PCI Express Base Specification, is not supported
• Supports Half Swing “low-power/low-voltage” mode.
Note: The processor does not support PCI Express* Hot-Plug.
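
Note: The following example is illustrative and is not part of this specification. It shows how the Gen1 and Gen2 per-direction bandwidth figures above result from the 8b/10b encoding (10 bits on the wire per data byte); the same arithmetic applies to the DMI x4 link described in the next section. Names are assumptions for illustration only.

```c
#include <stdio.h>

/* Effective PCIe bandwidth per direction after 8b/10b encoding: every data
 * byte is carried as a 10-bit symbol, so a 2.5 GT/s lane moves 250 MB/s and
 * a 5.0 GT/s lane moves 500 MB/s (packet overhead and link maintenance are
 * ignored, as in the feature list above). */
static double pcie_gbs_per_direction(double gt_per_s, unsigned lanes)
{
    return gt_per_s / 10.0 * lanes;   /* GB/s; 10 wire bits per byte */
}

int main(void)
{
    printf("Gen1 x16: %.0f GB/s per direction\n", pcie_gbs_per_direction(2.5, 16)); /* 4 */
    printf("Gen2 x16: %.0f GB/s per direction\n", pcie_gbs_per_direction(5.0, 16)); /* 8 */
    printf("DMI  x4 : %.0f GB/s per direction\n", pcie_gbs_per_direction(5.0, 4));  /* 2 */
    return 0;
}
```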

1.2.3 Direct Media Interface (DMI)

• DMI 2.0 support
• Four lanes in each direction
• 5 GT/s point-to-point DMI interface to PCH is supported
• Raw bit-rate on the data pins of 5.0 GT/s, resulting in a real bandwidth per pair of 500 MB/s given the 8b/10b encoding used to transmit data across this interface. Does not account for packet overhead and link maintenance.
• Maximum theoretical bandwidth on interface of 2 GB/s in each direction simultaneously, for an aggregate of 4 GB/s when DMI x4
• Shares 100-MHz PCI Express* reference clock
• 64-bit downstream address format, but the processor never generates an address above 64 GB (Bits 63:36 will always be zeros)
• 64-bit upstream address format, but the processor responds to upstream read transactions to addresses above 64 GB (addresses where any of Bits 63:36 are nonzero) with an Unsupported Request response. Upstream write transactions to addresses above 64 GB will be dropped.
• Supports the following traffic types to or from the PCH
— DMI -> DRAM
— DMI -> processor core (Virtual Legacy Wires (VLWs), Resetwarn, or MSIs only)
— Processor core -> DMI
• APIC and MSI interrupt messaging support
— Message Signaled Interrupt (MSI and MSI-X) messages
• Downstream SMI, SCI and SERR error indication
• Legacy support for ISA regime protocol (PHOLD/PHOLDA) required for parallel port DMA, floppy drive, and LPC bus masters
• DC coupling – no capacitors between the processor and the PCH
• Polarity inversion
• PCH end-to-end lane reversal across the link
• Supports Half Swing “low-power/low-voltage”

1.2.4 Platform Environment Control Interface (PECI)

The PECI is a one-wire interface that provides a communication channel between a PECI client (the processor) and a PECI master. The processors support the PECI 3.0 Specification.
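
Note: The following example is illustrative and is not part of this specification. PECI temperature readings are commonly reported as a negative, signed fixed-point offset (1/64 °C units) below the TCC activation temperature; assuming that format, a host-side conversion might look like the sketch below. Refer to the PECI 3.0 specification and Chapter 5 for the authoritative data formats.

```c
#include <stdint.h>
#include <stdio.h>

/* Hedged sketch: a PECI GetTemp()-style reading is assumed here to be a
 * 16-bit two's complement value in 1/64 degree C units, expressed as a
 * negative offset below the TCC activation temperature (roughly Tjmax).
 * Under that assumption, the junction temperature can be estimated. */
static double peci_temp_celsius(uint16_t raw, double tcc_activation_c)
{
    int16_t offset_64ths = (int16_t)raw;        /* reinterpret as signed */
    return tcc_activation_c + offset_64ths / 64.0;
}

int main(void)
{
    uint16_t raw = 0xF380;                      /* -3200/64 = -50.0 degrees C */
    printf("Junction temperature ~ %.1f C\n", peci_temp_celsius(raw, 100.0));
    return 0;                                   /* prints 50.0 with a 100 C limit */
}
```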

1.2.5 Processor Graphics

• The Processor Graphics contains a refresh of the sixth generation graphics core enabling substantial gains in performance and lower power consumption. Up to 12 EU Support.
• Next Generation Intel Clear Video Technology HD support is a collection of video playback and enhancement features that improve the end user’s viewing experience.
— Encode/transcode HD content
— Playback of high definition content including Blu-ray Disc*
— Superior image quality with sharper, more colorful images
— Playback of Blu-ray disc S3D content using HDMI (V.1.4 with 3D)
• DirectX* Video Acceleration (DXVA) support for accelerating video processing
— Full AVC/VC1/MPEG2 HW Decode
• Advanced Scheduler 2.0, 1.0, XPDM support
• Windows* 7, XP, Windows Vista*, OSX, Linux OS Support
• DX10.1, DX10, DX9 support
•OGL 3.0 support

1.2.6 Embedded DisplayPort* (eDP*)

• Stand alone dedicated port (unlike previous generation processor that shared pins with PCIe interface)

1.2.7 Intel® Flexible Display Interface (Intel® FDI)

• For SKUs with graphics, carries display traffic from the Processor Graphics in the processor to the legacy display connectors in the PCH
• Based on DisplayPort standard
• Two independent links – one for each display pipe
• Four unidirectional downstream differential transmitter pairs
— Scalable down to 3X, 2X, or 1X based on actual display bandwidth
requirements
— Fixed frequency 2.7 GT/s data rate
• Two sideband signals for Display synchronization
— FDI_FSYNC and FDI_LSYNC (Frame and Line Synchronization)
• One Interrupt signal used for various interrupts from the PCH
— FDI_INT signal shared by both Intel FDI Links
• PCH supports end-to-end lane reversal across both links
• Common 100-MHz reference clock

1.3 Power Management Support

1.3.1 Processor Core

• Full support of ACPI C-states as implemented by the following processor C-states
— C0, C1, C1E, C3, C6, C7
• Enhanced Intel SpeedStep® Technology

1.3.2 System

• S0, S3, S4, S5

1.3.3 Memory Controller

• Conditional self-refresh (Intel® Rapid Memory Power Management (Intel® RMPM))
• Dynamic power-down

1.3.4 PCI Express*

• L0s and L1 ASPM power management capability

1.3.5 DMI

• L0s and L1 ASPM power management capability

1.3.6 Processor Graphics Controller

• Rapid Memory Power Management RMPM – CxSR
• Graphics Performance Modulation Technology (GPMT)
• Intel Smart 2D Display Technology (Intel S2DDT)
• Graphics Render C-State (RC6)
•Intel® Seamless Display Refresh Rate Switching with eDP port

1.4 Thermal Management Support

• Digital Thermal Sensor
•Intel® Adaptive Thermal Monitor
• THERMTRIP# and PROCHOT# support
• On-Demand Mode
• Open and Closed Loop Throttling
• Memory Thermal Throttling
• External Thermal Sensor (TS-on-DIMM and TS-on-Board)
• Render Thermal Throttling
• Fan speed control with DTS

1.5 Package

• The processor is available on two packages:
— A 37.5 x 37.5 mm rPGA package (rPGA988B)
— A 31 x 24 mm BGA package (BGA1023 or BGA1224)

1.6 Terminology

Term                                   Description
BLT                                    Block Level Transfer
CRT                                    Cathode Ray Tube
DDR3                                   Third-generation Double Data Rate SDRAM memory technology
DMA                                    Direct Memory Access
DMI                                    Direct Media Interface
DP                                     DisplayPort*
DTS                                    Digital Thermal Sensor
eDP*                                   Embedded DisplayPort*
Enhanced Intel SpeedStep® Technology   Technology that provides power management capabilities to laptops.
Execute Disable Bit                    The Execute Disable bit allows memory to be marked as executable or non-executable, when combined with a supporting operating system. If code attempts to run in non-executable memory, the processor raises an error to the operating system. This feature can prevent some classes of viruses or worms that exploit buffer overrun vulnerabilities and can thus help improve the overall security of the system. See the Intel® 64 and IA-32 Architectures Software Developer's Manuals for more detailed information.
IMC                                    Integrated Memory Controller
Intel® 64 Technology                   64-bit memory extensions to the IA-32 architecture
Intel® DPST                            Intel® Display Power Saving Technology
Intel® FDI                             Intel® Flexible Display Interface
Intel® TXT                             Intel® Trusted Execution Technology
Intel® Virtualization Technology       Processor virtualization which, when used in conjunction with Virtual Machine Monitor software, enables multiple, robust independent software environments inside a single platform.
Intel® VT-d                            Intel® Virtualization Technology (Intel® VT) for Directed I/O. Intel VT-d is a hardware assist, under system software (Virtual Machine Manager or OS) control, for enabling I/O device virtualization. Intel VT-d also brings robust security by providing protection from errant DMAs by using DMA remapping, a key feature of Intel VT-d.
IOV                                    I/O Virtualization
ITPM                                   Integrated Trusted Platform Module
LCD                                    Liquid Crystal Display
LVDS                                   Low Voltage Differential Signaling. A high speed, low power data transmission standard used for display connections to LCD panels.
NCTF                                   Non-Critical to Function. NCTF locations are typically redundant ground or non-critical reserved, so the loss of the solder joint continuity at end of life conditions will not affect the overall product functionality.
PCH                                    Platform Controller Hub. The new, 2009 chipset with centralized platform capabilities including the main I/O interfaces along with display connectivity, audio features, power management, manageability, security and storage features.
PECI                                   Platform Environment Control Interface
PEG                                    PCI Express* Graphics. External Graphics using PCI Express* Architecture. A high-speed serial interface whose configuration is software compatible with the existing PCI specifications.
Processor                              The 64-bit, single-core or multi-core component (package).
Processor Core                         The term “processor core” refers to the Si die itself, which can contain multiple execution cores. Each execution core has an instruction cache, data cache, and 256-KB L2 cache. All execution cores share the L3 cache.
Processor Graphics                     Intel® Processor Graphics
Rank                                   A unit of DRAM corresponding to four to eight devices in parallel, ignoring ECC. These devices are usually, but not always, mounted on a single side of a SO-DIMM.
SCI                                    System Control Interrupt. Used in ACPI protocol.
Storage Conditions                     A non-operational state. The processor may be installed in a platform, in a tray, or loose. Processors may be sealed in packaging or exposed to free air. Under these conditions, processor landings should not be connected to any supply voltages, have any I/Os biased, or receive any clocks. Upon exposure to “free air” (that is, unsealed packaging or a device removed from packaging material), the processor must be handled in accordance with moisture sensitivity labeling (MSL) as indicated on the packaging material.
TAC                                    Thermal Averaging Constant.
TDP                                    Thermal Design Power.
VAXG                                   Graphics core power supply.
VCC                                    Processor core power supply.
VCCIO                                  High Frequency I/O logic power supply.
VCCPLL                                 PLL power supply.
VCCSA                                  System Agent (memory controller, DMI, PCIe controllers, and display engine) power supply.
VDDQ                                   DDR3 power supply.
VLD                                    Variable Length Decoding.
VSS                                    Processor ground.
x1                                     Refers to a Link or Port with one Physical Lane.
x16                                    Refers to a Link or Port with sixteen Physical Lanes.
x4                                     Refers to a Link or Port with four Physical Lanes.
x8                                     Refers to a Link or Port with eight Physical Lanes.

1.7 Related Documents

Refer to Table 1-2 for additional information.
Table 1-2. Related Documents

Document                                                                            Document Number / Location
2nd Generation Intel® Core™ Processor Family Mobile Datasheet, Volume 2             www.intel.com/Assets/PDF/datasheet/324803.pdf
2nd Generation Intel® Core™ Processor Family Mobile Specification Update            www.intel.com/Assets/PDF/specupdate/324693.pdf
Intel® 6 Series Chipset Datasheet                                                   www.intel.com/Assets/PDF/datasheet/324645.pdf
Intel® 6 Series Chipset Thermal Mechanical Specifications and Design Guidelines     www.intel.com/Assets/PDF/designguide/324647.pdf
Advanced Configuration and Power Interface Specification 3.0                        http://www.acpi.info/
PCI Local Bus Specification 3.0                                                     http://www.pcisig.com/specifications
PCI Express* Base Specification 2.0                                                 http://www.pcisig.com
DDR3 SDRAM Specification                                                            http://www.jedec.org
DisplayPort* Specification                                                          http://www.vesa.org
Intel® 64 and IA-32 Architectures Software Developer's Manuals                      http://www.intel.com/products/processor/manuals/index.htm
  Volume 1: Basic Architecture                                                      253665
  Volume 2A: Instruction Set Reference, A-M                                         253666
  Volume 2B: Instruction Set Reference, N-Z                                         253667
  Volume 3A: System Programming Guide                                               253668
  Volume 3B: System Programming Guide                                               253669
§ §

2 Interfaces

This chapter describes the interfaces supported by the processor.

2.1 System Memory Interface

2.1.1 System Memory Technology Supported

The Integrated Memory Controller (IMC) supports DDR3 protocols with two independent, 64-bit wide channels each accessing one DIMM. It supports a maximum of one unbuffered non-ECC DDR3 DIMM per-channel; thus, allowing up to two device ranks per-channel.
• DDR3 Data Transfer Rates
— 1066 MT/s (PC3-8500), 1333 MT/s (PC3-10600), 1600 MT/s (PC3-12800)
• DDR3 SO-DIMM Modules
— Raw Card A – Dual Ranked x16 unbuffered non-ECC
— Raw Card B – Single Ranked x8 unbuffered non-ECC
— Raw Card C – Single Ranked x16 unbuffered non-ECC
— Raw Card F – Dual Ranked x8 (planar) unbuffered non-ECC
• DDR3 DRAM Device Technology
Standard 1-Gb, 2-Gb, and 4-Gb technologies and addressing are supported for x16 and x8 devices. There is no support for memory modules with different technologies or capacities on opposite sides of the same memory module. If one side of a memory module is populated, the other side is either identical or empty.
Table 2-1. Supported SO-DIMM Module Configurations (Notes 1, 2)

Raw Card Version | DIMM Capacity | DRAM Device Technology | DRAM Organization | # of DRAM Devices | # of Physical Device Ranks | # of Row/Col Address Bits | # of Banks Inside DRAM | Page Size
A | 1 GB | 1 Gb | 64 M x 16 | 8 | 2 | 13/10 | 8 | 8K
A | 2 GB | 2 Gb | 128 M x 16 | 8 | 2 | 14/10 | 8 | 8K
B | 1 GB | 1 Gb | 128 M x 8 | 8 | 1 | 14/10 | 8 | 8K
B | 2 GB | 2 Gb | 256 M x 8 | 8 | 1 | 15/10 | 8 | 8K
C | 512 MB | 1 Gb | 64 M x 16 | 4 | 1 | 13/10 | 8 | 8K
C | 1 GB | 2 Gb | 128 M x 16 | 4 | 1 | 14/10 | 8 | 8K
F | 2 GB | 1 Gb | 128 M x 8 | 16 | 2 | 14/10 | 8 | 8K
F | 4 GB | 2 Gb | 256 M x 8 | 16 | 2 | 15/10 | 8 | 8K
F | 8 GB | 4 Gb | 512 M x 8 | 16 | 2 | 16/10 | 8 | 8K

Notes:
1. System memory configurations are based on availability and are subject to change.
2. Interface does not support ULV/LV memory modules or ULV/LV DIMMs.

2.1.2 System Memory Timing Support

The IMC supports the following DDR3 Speed Bin, CAS Write Latency (CWL), and command signal mode timings on the main memory interface:
• tCL = CAS Latency
• tRCD = Activate Command to READ or WRITE Command delay
• tRP = PRECHARGE Command Period
• CWL = CAS Write Latency
• Command Signal modes: 1n indicates a new command may be issued every clock, and 2n indicates a new command may be issued every 2 clocks. Command launch mode programming depends on the transfer rate and memory configuration.

Table 2-2. DDR3 System Memory Timing Support (Note 1)

Segment | Transfer Rate (MT/s) | tCL (tCK) | tRCD (tCK) | tRP (tCK) | CWL (tCK) | CMD Mode
Extreme Edition (XE) and Quad Core SV | 1066 | 7 | 7 | 7 | 6 | 1n/2n
Extreme Edition (XE) and Quad Core SV | 1333 | 9 | 9 | 9 | 7 | 1n/2n
Extreme Edition (XE) and Quad Core SV | 1600 | 11 | 11 | 11 | 8 | 1n/2n
Dual Core SV, Low Voltage, and Ultra Low Voltage | 1066 | 7 | 7 | 7 | 6 | 1n/2n
Dual Core SV, Low Voltage, and Ultra Low Voltage | 1066 | 8 | 8 | 8 | 6 | 1n/2n
Dual Core SV, Low Voltage, and Ultra Low Voltage | 1333 | 9 | 9 | 9 | 7 | 1n/2n

Notes:
1. System memory timing support is based on availability and is subject to change.

2.1.3 System Memory Organization Modes

The IMC supports two memory organization modes—single-channel and dual-channel. Depending upon how the DIMM Modules are populated in each memory channel, a number of different configurations can exist.
2.1.3.1 Single-Channel Mode
In this mode, all memory cycles are directed to a single-channel. Single-channel mode is used when either Channel A or Channel B DIMM connectors are populated in any order, but not both.
2.1.3.2 Dual-Channel Mode – Intel® Flex Memory Technology Mode
The IMC supports Intel Flex Memory Technology Mode. Memory is divided into a symmetric and an asymmetric zone. The symmetric zone starts at the lowest address in each channel and is contiguous until the asymmetric zone begins or until the top address of the channel with the smaller capacity is reached. In this mode, the system runs with one zone of dual-channel mode and one zone of single-channel mode, simultaneously, across the whole memory array.
Note: Channels A and B can be mapped to physical channels 0 and 1 respectively, or vice versa; however, channel A size must be greater than or equal to channel B size.
Figure 2-1. Intel® Flex Memory Technology Operation
(The figure shows channels A and B up to the top of memory (TOM): B is the largest physical memory amount of the smaller size memory module and is accessed with dual-channel interleaving across both channels; C is the remaining physical memory amount of the larger size memory module and is accessed non-interleaved.)
2.1.3.2.1 Dual-Channel Symmetric Mode
Dual-Channel Symmetric mode, also known as interleaved mode, provides maximum performance on real world applications. Addresses are ping-ponged between the channels after each cache line (64-byte boundary). If there are two requests, and the second request is to an address on the opposite channel from the first, that request can be sent before data from the first request has returned. If two consecutive cache lines are requested, both may be retrieved simultaneously since they are ensured to be on opposite channels. Use Dual-Channel Symmetric mode when both Channel A and Channel B DIMM connectors are populated in any order, with the total amount of memory in each channel being the same.
When both channels are populated with the same memory capacity and the boundary between the dual-channel zone and the single-channel zone is the top of memory, the IMC operates completely in Dual-Channel Symmetric mode.
Note: The DRAM device technology and width may vary from one channel to the other.
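The following is a minimal, purely illustrative model of the cache-line interleave described above. The function name is hypothetical, and the real IMC mapping (including the symmetric/asymmetric zone split) is not published; the sketch only shows the 64-byte ping-pong concept.

#include <stdint.h>

/* Illustrative model only: in dual-channel symmetric (interleaved) mode,
 * consecutive 64-byte cache lines alternate between channels, so bit 6 of
 * the physical address selects the channel in this simplified view. */
static unsigned channel_for_address(uint64_t phys_addr)
{
    return (unsigned)((phys_addr >> 6) & 1u);   /* 0 = channel A, 1 = channel B */
}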

2.1.4 Rules for Populating Memory Slots

In all modes, the frequency of system memory is the lowest frequency of all memory modules placed in the system, as determined through the SPD registers on the memory modules. The system memory controller supports only one DIMM connector per channel. The use of DIMM modules with different latencies is allowed. For dual-channel modes, both channels must have a DIMM connector populated. For single-channel mode, only one channel can have a DIMM connector populated.

2.1.5 Technology Enhancements of Intel® Fast Memory Access (Intel® FMA)

The following sections describe the Just-in-Time Scheduling, Command Overlap, and Out-of-Order Scheduling Intel FMA technology enhancements.
2.1.5.1 Just-in-Time Command Scheduling
The memory controller has an advanced command scheduler in which all pending requests are examined simultaneously to determine the most efficient request to issue next. The most efficient request is picked from all pending requests and issued to system memory just in time to make optimal use of Command Overlapping. Thus, instead of having all memory access requests go individually through an arbitration mechanism that forces requests to execute one at a time, requests can be started without interfering with the current request, allowing concurrent issuing of requests. This allows for optimized bandwidth and reduced latency while maintaining appropriate command spacing to meet the system memory protocol.
2.1.5.2 Command Overlap
Command Overlap allows the insertion of the DRAM commands between the Activate, Precharge, and Read/Write commands normally used, as long as the inserted commands do not affect the currently executing command. Multiple commands can be issued in an overlapping manner, increasing the efficiency of system memory protocol.
2.1.5.3 Out-of-Order Scheduling
While leveraging the Just-in-Time Scheduling and Command Overlap enhancements, the IMC continuously monitors pending requests to system memory for the best use of bandwidth and reduction of latency. If there are multiple requests to the same open page, these requests would be launched in a back to back manner to make optimum use of the open memory page. This ability to reorder requests on the fly allows the IMC to further reduce latency and increase bandwidth efficiency.

2.1.6 Memory Type Range Registers (MTRRs) Enhancement

The processor has two additional MTRRs (ten total). These additional MTRRs are especially important in supporting system memory larger than 4 GB.
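A minimal ring-0 sketch of how software can confirm the implemented variable-range MTRR count through the IA32_MTRRCAP MSR; rdmsr64() is a hypothetical helper standing in for whatever MSR-access path the environment provides.

#include <stdint.h>

extern uint64_t rdmsr64(uint32_t msr);   /* hypothetical ring-0 helper */

#define IA32_MTRRCAP 0xFE                /* architectural MTRR capability MSR */

static unsigned variable_mtrr_count(void)
{
    return (unsigned)(rdmsr64(IA32_MTRRCAP) & 0xFF);   /* VCNT, bits 7:0 */
}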

2.1.7 Data Scrambling

The memory controller incorporates a DDR3 Data Scrambling feature to minimize the impact of excessive di/dt on the platform DDR3 VRs due to successive 1s and 0s on the data bus. Past experience has demonstrated that traffic on the data bus is not random and can have energy concentrated at specific spectral harmonics creating high di/dt that is generally limited by data patterns that excite resonance between the package inductance and on-die capacitances. As a result, the memory controller uses a data scrambling feature to create pseudo-random patterns on the DDR3 data bus to reduce the impact of any excessive di/dt.

2.1.8 DRAM Clock Generation

Every supported DIMM has two differential clock pairs. There are a total of four clock pairs driven directly by the processor to two DIMMs.

2.2 PCI Express* Interface

This section describes the PCI Express interface capabilities of the processor. See the
PCI Express Base Specification for details of PCI Express. The processor has one PCI Express controller that can support one external x16 PCI
Express Graphics Device. The primary PCI Express Graphics port is referred to as PEG 0.

2.2.1 PCI Express* Architecture

Compatibility with the PCI addressing model is maintained to ensure that all existing applications and drivers operate unchanged.
The PCI Express configuration uses standard mechanisms as defined in the PCI Plug-and-Play specification. The initial recovered clock speed of 1.25 GHz results in 2.5 Gb/s per direction, which provides a 250 MB/s communications channel in each direction (500 MB/s total), close to twice the data rate of classic PCI. The usable rate is 250 MB/s rather than the 312.5 MB/s a raw division by 8 would suggest because the link uses 8b/10b encoding. The external graphics ports support Gen2 speed as well. At 5.0 GT/s, Gen 2 operation results in twice as much bandwidth per lane as compared to Gen 1 operation. When operating with two PCIe controllers, each controller can operate at either 2.5 GT/s or 5.0 GT/s.
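As a worked example of the per-lane rates quoted above (an illustrative calculation, not additional specification data):

\[
\text{Gen1 per lane: } \frac{2.5\ \text{Gb/s} \times \tfrac{8}{10}}{8\ \text{bits/byte}} = 250\ \text{MB/s},\qquad
\text{Gen2 per lane: } \frac{5.0\ \text{Gb/s} \times \tfrac{8}{10}}{8\ \text{bits/byte}} = 500\ \text{MB/s}
\]

so a x16 Gen2 link carries roughly 16 x 500 MB/s = 8 GB/s in each direction.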
The PCI Express architecture is specified in three layers—Transaction Layer, Data Link Layer, and Physical Layer. The partitioning in the component is not necessarily along these same boundaries. Refer to Figure 2-2 for the PCI Express Layering Diagram.
Figure 2-2. PCI Express* Layering Diagram
PCI Express uses packets to communicate information between components. Packets are formed in the Transaction and Data Link Layers to carry the information from the transmitting component to the receiving component. As the transmitted packets flow through the other layers, they are extended with additional information necessary to handle packets at those layers. At the receiving side, the reverse process occurs and
packets get transformed from their Physical Layer representation to the Data Link Layer representation and finally (for Transaction Layer Packets) to the form that can be processed by the Transaction Layer of the receiving device.
Figure 2-3. Packet Flow through the Layers
2.2.1.1 Transaction Layer
The upper layer of the PCI Express architecture is the Transaction Layer. The Transaction Layer's primary responsibility is the assembly and disassembly of Transaction Layer Packets (TLPs). TLPs are used to communicate transactions, such as read and write, as well as certain types of events. The Transaction Layer also manages flow control of TLPs.
2.2.1.2 Data Link Layer
The middle layer in the PCI Express stack, the Data Link Layer, serves as an intermediate stage between the Transaction Layer and the Physical Layer. Responsibilities of Data Link Layer include link management, error detection, and error correction.
The transmission side of the Data Link Layer accepts TLPs assembled by the Transaction Layer, calculates and applies data protection code and TLP sequence number, and submits them to Physical Layer for transmission across the Link. The receiving Data Link Layer is responsible for checking the integrity of received TLPs and for submitting them to the Transaction Layer for further processing. On detection of TLP error(s), this layer is responsible for requesting retransmission of TLPs until information is correctly received, or the Link is determined to have failed. The Data Link Layer also generates and consumes packets that are used for Link management functions.
2.2.1.3 Physical Layer
The Physical Layer includes all circuitry for interface operation, including driver and input buffers, parallel-to-serial and serial-to-parallel conversion, PLL(s), and impedance matching circuitry. It also includes logical functions related to interface initialization and maintenance. The Physical Layer exchanges data with the Data Link Layer in an implementation-specific format, and is responsible for converting this to an appropriate serialized format and transmitting it across the PCI Express Link at a frequency and width compatible with the remote device.

2.2.2 PCI Express* Configuration Mechanism

The PCI Express (external graphics) link is mapped through a PCI-to-PCI bridge structure.
Figure 2-4. PCI Express* Related Register Structures in the Processor
(The figure shows the PCI Compatible Host Bridge device (Device 0), the PCI-PCI bridges representing the root PCI Express ports (Device 1 and Device 6), the PEG0 PCI Express device, and DMI.)
PCI Express extends the configuration space to 4096 bytes per-device/function, as compared to 256 bytes allowed by the Conventional PCI Specification. PCI Express configuration space is divided into a PCI-compatible region (that consists of the first 256 bytes of a logical device's configuration space) and an extended PCI Express region (that consists of the remaining configuration space). The PCI-compatible region can be accessed using either the mechanisms defined in the PCI specification or using the enhanced PCI Express configuration access mechanism described in the PCI Express Enhanced Configuration Mechanism section.
The PCI Express Host Bridge is required to translate the memory-mapped PCI Express configuration space accesses from the host processor to PCI Express configuration cycles. To maintain compatibility with PCI configuration addressing mechanisms, it is recommended that system software access the enhanced configuration space using
32-bit operations (32-bit aligned) only. See the PCI Express Base Specification for
details of both the PCI-compatible and PCI Express Enhanced configuration mechanisms and transaction rules.
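A minimal sketch of the address computation used by the enhanced (memory-mapped) configuration mechanism. The helper name and ecam_base are assumptions; the base address itself is platform-specific and is typically reported to software through the ACPI MCFG table.

#include <stdint.h>

/* Each function gets a 4-KB configuration window; bus/device/function select
 * the window and the offset selects the register, per the PCI Express Base
 * Specification's enhanced configuration access mechanism. */
static volatile uint32_t *pcie_cfg_addr(uintptr_t ecam_base,
                                        unsigned bus, unsigned dev,
                                        unsigned func, unsigned offset)
{
    uintptr_t addr = ecam_base
                   + ((uintptr_t)bus  << 20)
                   + ((uintptr_t)dev  << 15)
                   + ((uintptr_t)func << 12)
                   + (offset & 0xFFCu);           /* 32-bit aligned access */
    return (volatile uint32_t *)addr;
}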

2.2.3 PCI Express Graphics

The external graphics attach (PEG) on the processor is a single, 16-lane (x16) port. The
PEG port is designed to be compliant with the PCI Express Base Specification,
Revision 2.0.

2.2.4 PCI Express Lanes Connection

Figure 2-5 demonstrates the PCIe lane mapping.

Figure 2-5. PCIe Typical Operation 16 Lanes Mapping
(The figure shows the 16 physical lanes mapped to the single 1 x16 controller, together with the lane numbering used when the port operates as a 1 x8 controller (lanes 0-7) or a 1 x4 controller (lanes 0-3).)

2.3 Direct Media Interface (DMI)

Direct Media Interface (DMI) connects the processor and the PCH. Next generation DMI2 is supported. DMI is similar to a four-lane PCI Express link, supporting up to 1 GB/s of bandwidth in each direction.
Note: Only DMI x4 configuration is supported.

2.3.1 DMI Error Flow

DMI can only generate SERR in response to errors, never SCI, SMI, MSI, PCI INT, or GPE. Any DMI related SERR activity is associated with Device 0.

2.3.2 Processor/PCH Compatibility Assumptions

The processor is compatible with the Intel® 6 Series Chipset PCH. The processor is not compatible with any previous PCH products.

2.3.3 DMI Link Down

The DMI link going down is a fatal, unrecoverable error. If the DMI data link goes down after having been up, the DMI link hangs the system by not allowing the link to retrain, preventing data corruption. This link behavior is controlled by the PCH.
Downstream transactions that had been successfully transmitted across the link prior to the link going down may be processed as normal. No completions from downstream, non-posted transactions are returned upstream over the DMI link after a link down event.

2.4 Processor Graphics Controller (GT)

The new Graphics Engine Architecture includes 3D compute elements, a multi-format hardware-assisted decode/encode pipeline, and a Mid-Level Cache (MLC) for superior high-definition playback, video quality, and improved 3D and media performance.
The Display Engine in the Uncore handles delivering the pixels to the screen. GSA (Graphics in System Agent) is the primary channel interface for display memory accesses and “PCI-like” traffic in and out.
Figure 2-6. Processor Graphics Controller Unit Block Diagram

2.4.1 3D and Video Engines for Graphics Processing

The 3D graphics pipeline architecture simultaneously operates on different primitives or on different portions of the same primitive. All the cores are fully programmable, increasing the versatility of the 3D Engine. The Gen 6.0 3D engine provides the following performance and power-management enhancements:
• Up to 12 Execution units (EUs)
• Hierarchal-Z
• Video quality enhancements
2.4.1.1 3D Engine Execution Units
• Supports up to 12 EUs. The EUs perform 128-bit wide execution per clock.
• Supports SIMD8 instructions for vertex processing and SIMD16 instructions for pixel processing.
2.4.1.2 3D Pipeline
2.4.1.2.1 Vertex Fetch (VF) Stage
The VF stage executes 3DPRIMITIVE commands. Some enhancements have been included to better support legacy D3D APIs as well as SGI OpenGL*.
2.4.1.2.2 Vertex Shader (VS) Stage
The VS stage performs shading of vertices output by the VF function. The VS unit produces an output vertex reference for every input vertex reference received from the VF unit, in the order received.
2.4.1.2.3 Geometry Shader (GS) Stage
The GS stage receives inputs from the VS stage and executes compiled application-provided GS programs, which specify an algorithm to convert the vertices of an input object into some output primitives. For example, a GS shader may convert lines of a line strip into polygons representing a corresponding segment of a blade of grass centered on the line, or it could use adjacency information to detect silhouette edges of triangles and output polygons extruding out from the edges.
2.4.1.2.4 Clip Stage
The Clip stage performs general processing on incoming 3D objects. However, it also includes specialized logic to perform a Clip Test function on incoming objects. The Clip Test optimizes generalized 3D Clipping. The Clip unit examines the position of incoming vertices, and accepts/rejects 3D objects based on its Clip algorithm.
2.4.1.2.5 Strips and Fans (SF) Stage
The SF stage performs setup operations required to rasterize 3D objects. The outputs from the SF stage to the Windower stage contain implementation-specific information required for the rasterization of objects and also supports clipping of primitives to some extent.
2.4.1.2.6 Windower/IZ (WIZ) Stage
The WIZ unit performs an early depth test, which removes failing pixels and eliminates unnecessary processing overhead.
The Windower uses the parameters provided by the SF unit in the object-specific rasterization algorithms. The WIZ unit rasterizes objects into the corresponding set of pixels. The Windower is also capable of performing dithering, whereby the illusion of a higher resolution when using low-bpp channels in color buffers is possible. Color dithering diffuses the sharp color bands seen on smooth-shaded objects.
2.4.1.3 Video Engine
The Video Engine handles the non-3D (media/video) applications. It includes support for VLD and MPEG2 decode in hardware.
2.4.1.4 2D Engine
The 2D Engine contains BLT (Block Level Transfer) functionality and an extensive set of 2D instructions. To take advantage of the 3D engine's functionality, some BLT functions make use of the 3D renderer.
2.4.1.4.1 Processor Graphics VGA Registers
The 2D registers consist of the original VGA registers and others that support graphics modes with color depths, resolutions, and hardware acceleration features beyond the original VGA standard.
2.4.1.4.2 Logical 128-Bit Fixed BLT and 256 Fill Engine
This BLT engine accelerates the GUI of Microsoft Windows* operating systems. The 128-bit BLT engine provides hardware acceleration of block transfers of pixel data for many common Windows operations. The BLT engine can be used for the following:
• Move rectangular blocks of data between memory locations
• Data alignment
• To perform logical operations (raster ops)
The rectangular block of data does not change, as it is transferred between memory locations. The allowable memory transfers are between: cacheable system memory and frame buffer memory, frame buffer memory and frame buffer memory, and within system memory. Data to be transferred can consist of regions of memory, patterns, or solid color fills. A pattern is always 8 x 8 pixels wide and may be 8, 16, or 32 bits per pixel.
The BLT engine expands monochrome data into a color depth of 8, 16, or 32 bits. BLTs can be either opaque or transparent. Opaque transfers move the data specified to the destination. Transparent transfers compare destination color to source color and write according to the mode of transparency selected.
Data is horizontally and vertically aligned at the destination. If the destination for the BLT overlaps with the source memory location, the BLT engine specifies which area in memory to begin the BLT transfer. Hardware is included for all 256 raster operations (source, pattern, and destination) defined by Microsoft, including transparent BLT.
The BLT engine has instructions to invoke BLT and stretch BLT operations, permitting software to set up instruction buffers and use batch processing. The BLT engine can perform hardware clipping during BLTs.
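The following is a purely illustrative software model of a raster-op block transfer on a 32-bpp surface. It shows only the source-copy and XOR concepts; it is not the hardware programming interface, and all names are hypothetical.

#include <stdint.h>
#include <stddef.h>

typedef enum { ROP_SRCCOPY, ROP_SRCXOR } rop_t;

/* Copy (or XOR) a width x height rectangle of 32-bit pixels from src to dst;
 * strides are given in pixels. The hardware BLT engine supports all 256
 * Microsoft-defined raster operations combining source, pattern, and destination. */
static void blt32(uint32_t *dst, size_t dst_stride,
                  const uint32_t *src, size_t src_stride,
                  size_t width, size_t height, rop_t rop)
{
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            uint32_t s = src[y * src_stride + x];
            uint32_t *d = &dst[y * dst_stride + x];
            *d = (rop == ROP_SRCCOPY) ? s : (*d ^ s);
        }
    }
}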

2.4.2 Processor Graphics Display

The Processor Graphics controller display pipe can be broken down into three components:
• Display Planes
• Display Pipes
• Embedded DisplayPort* and Intel® FDI

Figure 2-7. Processor Display Block Diagram
(The figure shows the display planes and VGA, display pipes A and B, and display port controls A and B, fed from the memory host interface through the display arbiter, with output to the Intel FDI transmit side toward the PCH display engine over DMI and to the eDP port.)
2.4.2.1 Display Planes
A display plane is a single displayed surface in memory and contains one image (desktop, cursor, overlay). It is the portion of the display hardware logic that defines the format and location of a rectangular region of memory that can be displayed on display output device and delivers that data to a display pipe. This is clocked by the Core Display Clock.
2.4.2.1.1 Planes A and B
Planes A and B are the main display planes and are associated with Pipes A and B respectively. The two display pipes are independent, allowing for support of two independent display streams. They are both double-buffered, which minimizes latency and improves visual quality.
2.4.2.1.2 Sprite A and B
Sprite A and Sprite B are planes optimized for video decode, and are associated with Planes A and B respectively. Sprite A and B are also double-buffered.
2.4.2.1.3 Cursors A and B
Cursors A and B are small, fixed-sized planes dedicated for mouse cursor acceleration, and are associated with Planes A and B respectively. These planes support resolutions up to 256 x 256 each.
2.4.2.1.4 VGA
VGA is used for boot, safe mode, legacy games, etc. It can be changed by an application without OS/driver notification, due to legacy requirements.
2.4.2.2 Display Pipes
The display pipe blends and synchronizes pixel data received from one or more display planes and adds the timing of the display output device upon which the image is displayed. This is clocked by the Display Reference clock inputs.
The display pipes A and B operate independently of each other at the rate of one pixel per clock. They can attach to any of the display ports. Each pipe sends display data to the PCH over the Intel® Flexible Display Interface (Intel® FDI).
2.4.2.3 Display Ports
The display ports consist of output logic and pins that transmit the display data to the associated encoding logic and send the data to the display device (that is, LVDS, HDMI*, DVI, SDVO, etc.). All display interfaces connecting external displays are now repartitioned and driven from the PCH, with the exception of the eDP DisplayPort.
2.4.2.4 Embedded DisplayPort (eDP)
The Processor Graphics supports the Embedded Display Port (eDP) interface, intended for display devices that are integrated into the system (such as laptop LCD panel).
The DisplayPort (abbreviated DP) is different from the generic term display port. The DisplayPort specification is a VESA standard. DisplayPort consolidates internal and external connection methods to reduce device complexity, support cross-industry applications, and provide performance scalability. The eDP interface supports link speeds of 1.62 Gbps and 2.7 Gbps on 1, 2, or 4 data lanes. The eDP supports -0.5% SSC and non-SSC clock settings.

2.4.3 Intel Flexible Display Interface

The Intel Flexible Display Interface (Intel® FDI) is a proprietary link for carrying display traffic from the Processor Graphics controller to the PCH display I/Os. Intel® FDI supports two independent channels—one for pipe A and one for pipe B.
• Each channel has four transmit (Tx) differential pairs used for transporting pixel and framing data from the display engine.
• Each channel has one single-ended LineSync and one FrameSync input (1-V CMOS signaling).
• One display interrupt line input (1-V CMOS signaling).
• Intel® FDI may dynamically scale down to 2X or 1X based on actual display bandwidth requirements.
• Common 100-MHz reference clock.
• Each channel transports at a rate of 2.7 Gbps.
• PCH supports end-to-end lane reversal across both channels (no reversal support required in the processor)

2.4.4 Multi-Graphics Controller Multi-Monitor Support

The processor supports simultaneous use of the Processor Graphics Controller (GT) and a x16 PCI Express Graphics (PEG) device.
The processor supports a maximum of 2 displays connected to the PEG card in parallel with up to 2 displays connected to the processor and PCH.
Note: When supporting multiple graphics controllers with multiple monitors, “drag and drop” between monitors and the 2x8 PEG is not supported.

2.5 Platform Environment Control Interface (PECI)

The PECI is a one-wire interface that provides a communication channel between a PECI client (processor) and a PECI master. The processor implements a PECI interface to:
• Allow communication of processor thermal and other information to the PECI master.
• Read averaged Digital Thermal Sensor (DTS) values for fan speed control.

2.6 Interface Clocking

2.6.1 Internal Clocking Requirements

Table 2-3. Reference Clock
Reference Input Clock Input Frequency Associated PLL
BCLK/BCLK# 100 MHz Processor/Memory/Graphics/PCIe/DMI/FDI
DPLL_REF_CLK/DPLL_REF_CLK# 120 MHz Embedded DisplayPort (eDP)

3 Technologies

This chapter provides a high-level description of Intel technologies implemented in the processor.
The implementation of the features may vary between the processor SKUs.
Details on the different technologies of Intel processors and other relevant external notes are located at the Intel technology web site: http://www.intel.com/technology/

3.1 Intel® Virtualization Technology

Intel Virtualization Technology (Intel VT) makes a single system appear as multiple independent systems to software. This allows multiple, independent operating systems to run simultaneously on a single system. Intel VT comprises technology components to support virtualization of platforms based on Intel architecture microprocessors and chipsets. Intel Virtualization Technology (Intel VT-x) added hardware support in the processor to improve the virtualization performance and robustness. Intel Virtualization Technology for Directed I/O (Intel VT-d) adds chipset hardware implementation to support and improve I/O virtualization performance and robustness.
Intel VT-x specifications and functional descriptions are included in the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 3B, and are available at:
http://www.intel.com/products/processor/manuals/index.htm
The Intel VT-d specification and other VT documents can be referenced at:
http://www.intel.com/technology/virtualization/index.htm

3.1.1 Intel® VT-x Objectives

Intel VT-x provides hardware acceleration for virtualization of IA platforms. A Virtual Machine Monitor (VMM) can use Intel VT-x features to provide an improved, more reliable virtualized platform. By using Intel VT-x, a VMM is:
Robust: VMMs no longer need to use paravirtualization or binary translation. This
means that they will be able to run off-the-shelf OSs and applications without any special steps.
Enhanced: Intel VT enables VMMs to run 64-bit guest operating systems on IA x86
processors.
More reliable: Due to the hardware support, VMMs can now be smaller, less
complex, and more efficient. This improves reliability and availability and reduces the potential for software conflicts.
More secure: The use of hardware transitions in the VMM strengthens the isolation
of VMs and further prevents corruption of one VM from affecting others on the same system.

3.1.2 Intel® VT-x Features

The processor core supports the following Intel VT-x features; a minimal software detection sketch follows the list:
• Extended Page Tables (EPT)
— EPT is hardware assisted page table virtualization
— It eliminates VM exits from guest OS to the VMM for shadow page-table
maintenance
• Virtual Processor IDs (VPID)
— Ability to assign a VM ID to tag processor core hardware structures (such as
TLBs)
— This avoids flushes on VM transitions to give a lower-cost VM transition time
and an overall reduction in virtualization overhead.
• Guest Preemption Timer
— Mechanism for a VMM to preempt the execution of a guest OS after an amount
of time specified by the VMM. The VMM sets a timer value before entering a guest
— The feature aids VMM developers in flexibility and Quality of Service (QoS)
assurances
• Descriptor-Table Exiting
— Descriptor-table exiting allows a VMM to protect a guest OS from internal
(malicious software based) attack by preventing relocation of key system data structures like IDT (interrupt descriptor table), GDT (global descriptor table), LDT (local descriptor table), and TSS (task segment selector).
— A VMM using this feature can intercept (by a VM exit) attempts to relocate
these data structures and prevent them from being tampered by malicious software.
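A minimal user-mode detection sketch, assuming GCC or Clang on an x86 processor: CPUID leaf 1, ECX bit 5 reports VMX support (the instruction-set side of Intel VT-x). Whether features such as EPT and VPID are available is reported through the IA32_VMX_* capability MSRs, which require ring 0 and are not read here.

#include <stdio.h>
#include <cpuid.h>   /* __get_cpuid, GCC/Clang helper */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        printf("VMX (Intel VT-x) reported by CPUID\n");
    else
        printf("VMX not reported by CPUID\n");
    return 0;
}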

3.1.3 Intel® VT-d Objectives

The key Intel VT-d objectives are domain-based isolation and hardware-based virtualization. A domain can be abstractly defined as an isolated environment in a platform to which a subset of host physical memory is allocated. Virtualization allows for the creation of one or more partitions on a single system. This could be multiple partitions in the same operating system, or there can be multiple operating system instances running on the same system – offering benefits such as system consolidation, legacy migration, activity partitioning, or security.

3.1.4 Intel® VT-d Features

The processor supports the following Intel VT-d features:
• Memory controller and Processor Graphics comply with Intel® VT-d 1.2 specification.
• Two VT-d DMA remap engines:
— iGFX DMA remap engine
— DMI/PEG
• Support for root entry, context entry, and default context
• 39-bit guest physical address and host physical address widths
• Support for 4K page sizes only
• Support for register-based fault recording only (for single entry only) and support for MSI interrupts for faults
• Support for both leaf and non-leaf caching
• Support for boot protection of default page table
• Support for non-caching of invalid page table entries
• Support for hardware based flushing of translated but pending writes and pending reads, on IOTLB invalidation
• Support for page-selective IOTLB invalidation
• MSI cycles (MemWr to address FEEx_xxxxh) not translated
— Translation faults result in cycle forwarding to VBIOS region (byte enables
masked for writes). Returned data may be bogus for internal agents, PEG/DMI interfaces return unsupported request status
• Interrupt Remapping is supported
• Queued invalidation is supported.
• VT-d translation bypass address range is supported (Pass Through)
Note: Intel VT-d Technology may not be available on all SKUs.

3.1.5 Intel® VT-d Features Not Supported

The following features are not supported by the processor with Intel VT-d:
• No support for PCISIG endpoint caching (ATS)
• No support for Intel VT-d read prefetching/snarfing (that is, translations within a cacheline are not stored in an internal buffer for reuse for subsequent translations).
• No support for advance fault reporting
• No support for super pages
• No support for Intel VT-d translation bypass address range (such usage models need to be resolved with VMM help in setting up the page tables correctly)

3.2 Intel® Trusted Execution Technology (Intel® TXT)

Intel Trusted Execution Technology (Intel TXT) defines platform-level enhancements that provide the building blocks for creating trusted platforms.
The Intel TXT platform helps to provide the authenticity of the controlling environment such that those wishing to rely on the platform can make an appropriate trust decision. The Intel TXT platform determines the identity of the controlling environment by accurately measuring and verifying the controlling software.
Another aspect of the trust decision is the ability of the platform to resist attempts to change the controlling environment. The Intel TXT platform will resist attempts by software processes to change the controlling environment or bypass the bounds set by the controlling environment.
Intel TXT is a set of extensions designed to provide a measured and controlled launch of system software that will then establish a protected environment for itself and any additional software that it may execute.
These extensions enhance two areas:
• The launching of the Measured Launched Environment (MLE)
• The protection of the MLE from potential corruption
The enhanced platform provides these launch and control interfaces using Safer Mode Extensions (SMX).
The SMX interface includes the following functions:
• Measured/Verified launch of the MLE
• Mechanisms to ensure the above measurement is protected and stored in a secure location
• Protection mechanisms that allow the MLE to control attempts to modify itself
For more information, refer to the Intel® TXT Measured Launched Environment Developer’s Guide in http://www.intel.com/technology/security.

3.3 Intel® Hyper-Threading Technology

The processor supports Intel® Hyper-Threading Technology (Intel® HT Technology), that allows an execution core to function as two logical processors. While some execution resources (such as caches, execution units, and buses) are shared, each logical processor has its own architectural state with its own set of general-purpose registers and control registers. This feature must be enabled using the BIOS and requires operating system support.
Intel recommends enabling Hyper-Threading Technology with Microsoft Windows 7*, Microsoft Windows Vista*, Microsoft Windows* XP Professional/Windows* XP Home, and disabling Hyper-Threading Technology using the BIOS for all previous versions of Windows operating systems. For more information on Hyper-Threading Technology, see
http://www.intel.com/technology/platform-technology/hyper-threading/.

3.4 Intel® Turbo Boost Technology

Compared with previous generation products, Intel Turbo Boost Technology will increase the ratio of application power to TDP. Thus, thermal solutions and platform cooling that are designed to less than thermal design guidance might experience thermal and performance issues since more applications will tend to run at the maximum power limit for significant periods of time.
Note: Intel Turbo Boost Technology may not be available on all SKUs.
Intel Turbo Boost Technology is a feature that allows the processor to opportunistically and automatically run faster than its rated operating core and/or render clock frequency when there is sufficient power headroom, and the product is within specified temperature and current limits. The Intel Turbo Boost Technology feature is designed to increase performance of both multi-threaded and single-threaded workloads. The processor supports a Turbo mode where the processor can use the thermal capacity associated with package and run at power levels higher than TDP power for short durations. This improves the system responsiveness for short, bursty usage conditions. The turbo feature needs to be properly enabled by BIOS for the processor to operate with maximum performance. Since the turbo feature is configurable and dependent on many platform design limits outside of the processor control, the maximum performance cannot be ensured.
Turbo Mode availability is independent of the number of active cores; however, the Turbo Mode frequency is dynamic and dependent on the instantaneous application power load, the number of active cores, user configurable settings, operating environment, and system design.

3.4.1 Intel®Turbo Boost Technology Frequency

The processor's rated frequency assumes that all execution cores are active and are at the sustained thermal design power (TDP). However, under typical operation, not all cores are active or executing a high-power workload. Therefore, most applications consume less than the TDP at the rated frequency. Intel Turbo Boost Technology takes advantage of the available TDP headroom, and active cores are able to increase their operating frequency.
To determine the highest performance frequency amongst active cores, the processor takes the following into consideration to recalculate turbo frequency during runtime:
• The number of cores operating in the C0 state.
• The estimated core current consumption.
• The estimated package prior and present power consumption.
• The package temperature.
Any of these factors can affect the maximum frequency for a given workload. If the power, current, or thermal limit is reached, the processor automatically reduces the frequency to stay within its TDP limit.
Note: Intel Turbo Boost Technology processor frequencies are only active if the operating system is
requesting the P0 state. For more information on P-states and C-states refer to
Chapter 4, “Power Management”.

3.4.2 Intel® Turbo Boost Technology Graphics Frequency

The graphics render frequency is selected dynamically based on graphics workload demand as permitted by the processor turbo control. The processor can optimize both processor and Processor Graphics performance through power sharing. The processor cores and the processor graphics core share a package power limit. If the graphics core is not consuming enough power to reach the package power limit, the cores can increase frequency to take advantage of the unused thermal power headroom. The opposite can happen when the processor cores are not consuming enough power to reach the package power limit. For the Processor Graphics, this could mean an increase in the render core frequency (above its rated frequency) and increased graphics performance. Both the processor core(s) and the graphics render core can increase frequency higher than possible without power sharing.
Note: Processor utilization of turbo graphics frequencies requires that the Intel Graphics driver
be properly installed. Turbo graphics frequencies are not dependent on the operating system processor P-state requests and may turbo while the processor is in any processor P-state.

3.5 Intel® Advanced Vector Extensions (AVX)

Intel® Advanced Vector Extensions (AVX) is the latest expansion of the Intel instruction set. It extends the Intel® Streaming SIMD Extensions (SSE) from 128-bit vectors into 256-bit vectors. Intel AVX addresses the continued need for vector floating-point performance in mainstream scientific and engineering numerical applications, visual processing, recognition, data-mining/synthesis, gaming, physics, cryptography, and other application areas. The enhancements in Intel AVX allow for improved performance due to wider vectors, new extensible syntax, and rich functionality, including the ability to better manage, rearrange, and sort data. For more information on Intel AVX, see http://www.intel.com/software/avx
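A minimal sketch using AVX compiler intrinsics: add two arrays of eight single-precision floats with one 256-bit operation. Build with an AVX-enabled compiler flag (for example, -mavx with GCC/Clang) and run only on AVX-capable hardware with an AVX-aware operating system.

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float a[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];

    __m256 va = _mm256_loadu_ps(a);              /* load 8 floats            */
    __m256 vb = _mm256_loadu_ps(b);
    _mm256_storeu_ps(c, _mm256_add_ps(va, vb));  /* 8 additions in one op    */

    for (int i = 0; i < 8; i++)
        printf("%.1f ", c[i]);
    printf("\n");
    return 0;
}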

3.6 Advanced Encryption Standard New Instructions (AES-NI)

The processor supports Advanced Encryption Standard New Instructions (AES-NI) that are a set of Single Instruction Multiple Data (SIMD) instructions that enable fast and secure data encryption and decryption based on the Advanced Encryption Standard (AES). AES-NI are valuable for a wide range of cryptographic applications; such as, applications that perform bulk encryption/decryption, authentication, random number generation, and authenticated encryption. AES is broadly accepted as the standard for both government and industry applications, and is widely deployed in various protocols.
AES-NI consists of six Intel® SSE instructions. Four instructions, AESENC, AESENCLAST, AESDEC, and AESDECLAST, facilitate high-performance AES encryption and decryption. The other two, AESIMC and AESKEYGENASSIST, support the AES key expansion procedure. Together, these instructions provide full hardware support for AES, offering security, high performance, and a great deal of flexibility.
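A minimal sketch, using the compiler intrinsics that map to these instructions, of encrypting one 16-byte block with AES-128 given already-expanded round keys; key expansion via AESKEYGENASSIST/AESIMC is omitted for brevity. Build with -maes (GCC/Clang) and run only on AES-NI capable processors.

#include <wmmintrin.h>   /* AES-NI intrinsics */

/* rk[0..10] are the 11 expanded AES-128 round keys. */
static __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
{
    block = _mm_xor_si128(block, rk[0]);               /* initial AddRoundKey */
    for (int round = 1; round < 10; round++)
        block = _mm_aesenc_si128(block, rk[round]);    /* rounds 1..9         */
    return _mm_aesenclast_si128(block, rk[10]);        /* final round         */
}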

3.6.1 PCLMULQDQ Instruction

The processor supports the carry-less multiplication instruction, PCLMULQDQ. PCLMULQDQ is a Single Instruction Multiple Data (SIMD) instruction that computes the 128-bit carry-less multiplication of two, 64-bit operands without generating and propagating carries. Carry-less multiplication is an essential processing component of several cryptographic systems and standards. Hence, accelerating carry-less multiplication can significantly contribute to achieving high speed secure computing and communication.
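A minimal sketch using the corresponding compiler intrinsic: the carry-less multiplication of two 64-bit operands into a 128-bit result, as used in GHASH and CRC computations. Build with -mpclmul (GCC/Clang).

#include <wmmintrin.h>   /* _mm_clmulepi64_si128 (PCLMULQDQ) */
#include <stdint.h>

static __m128i clmul64(uint64_t a, uint64_t b)
{
    __m128i va = _mm_set_epi64x(0, (long long)a);
    __m128i vb = _mm_set_epi64x(0, (long long)b);
    /* immediate 0x00 selects the low 64-bit halves of both operands */
    return _mm_clmulepi64_si128(va, vb, 0x00);
}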

3.7 Intel® 64 Architecture x2APIC

The x2APIC architecture extends the xAPIC architecture that provides a key mechanism for interrupt delivery. This extension is intended primarily to increase processor addressability.
Specifically, x2APIC:
• Retains all key elements of compatibility to the xAPIC architecture
— delivery modes
— interrupt and processor priorities
— interrupt sources
— interrupt destination types
• Provides extensions to scale processor addressability for both the logical and physical destination modes
• Adds new features to enhance performance of interrupt delivery
• Reduces complexity of logical destination mode interrupt delivery on link based architectures
The key enhancements provided by the x2APIC architecture over xAPIC are the following:
• Support for two modes of operation to provide backward compatibility and extensibility for future platform innovations
— In xAPIC compatibility mode, APIC registers are accessed through a memory-mapped interface to a 4-KB page, identical to the xAPIC architecture.
— In x2APIC mode, APIC registers are accessed through Model Specific Register (MSR) interfaces. In this mode, the x2APIC architecture provides significantly increased processor addressability and some enhancements on interrupt delivery.
• Increased range of processor addressability in x2APIC mode
— Physical xAPIC ID field increases from 8 bits to 32 bits, allowing for interrupt processor addressability up to 4G-1 processors in physical destination mode. A processor implementation of x2APIC architecture can support fewer than 32 bits in a software transparent fashion.
— Logical xAPIC ID field increases from 8 bits to 32 bits. The 32-bit logical x2APIC ID is partitioned into two sub-fields—a 16-bit cluster ID and a 16-bit logical ID within the cluster. Consequently, ((2^20) - 16) processors can be addressed in logical destination mode. Processor implementations can support fewer than 16 bits in the cluster ID sub-field and logical ID sub-field in a software agnostic fashion.
• More efficient MSR interface to access APIC registers
— To enhance inter-processor and self directed interrupt delivery as well as the
ability to virtualize the local APIC, the APIC register set can be accessed only through MSR based interfaces in the x2APIC mode. The Memory Mapped IO (MMIO) interface used by xAPIC is not supported in the x2APIC mode.
• The semantics for accessing APIC registers have been revised to simplify the programming of frequently-used APIC registers by system software. Specifically, the software semantics for using the Interrupt Command Register (ICR) and End Of Interrupt (EOI) registers have been modified to allow for more efficient delivery and dispatching of interrupts.
The x2APIC extensions are made available to system software by enabling the local x2APIC unit in the “x2APIC” mode. To benefit from x2APIC capabilities, a new Operating System and a new BIOS are both needed, with special support for the x2APIC mode.
The x2APIC architecture provides backward compatibility to the xAPIC architecture and forward extendibility for future Intel platform innovations.
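A minimal ring-0 sketch contrasting the two access methods; rdmsr64() and mmio_read32() are hypothetical helpers for whatever the kernel or firmware environment provides. MSR 0x802 is the architectural x2APIC ID register, and 0xFEE00020 is the xAPIC ID register at the default MMIO base.

#include <stdint.h>

extern uint64_t rdmsr64(uint32_t msr);              /* hypothetical helpers */
extern uint32_t mmio_read32(uintptr_t phys_addr);

static uint32_t read_apic_id(int x2apic_mode_enabled)
{
    if (x2apic_mode_enabled)
        return (uint32_t)rdmsr64(0x802);             /* IA32_X2APIC_APICID      */
    return mmio_read32(0xFEE00000u + 0x20) >> 24;    /* xAPIC ID in bits 31:24  */
}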
Note: Intel x2APIC technology may not be available on all processor SKUs.
For more information, refer to the Intel® 64 Architecture x2APIC Specification at
http://www.intel.com/products/processor/manuals/

4 Power Management

This chapter provides information on the following power management topics:
•ACPI States
• Processor Core
• Integrated Memory Controller (IMC)
• PCI Express*
• Direct Media Interface (DMI)
• Processor Graphics Controller

4.1 ACPI States Supported

The ACPI states supported by the processor are described in this section.

4.1.1 System States

Table 4-1. System States
State Description
G0/S0 Full On
G1/S3-Cold Suspend-to-RAM (STR). Context saved to memory (S3-Hot is not supported by the processor).
G1/S4 Suspend-to-Disk (STD). All power lost (except wakeup on PCH).
G2/S5 Soft off. All power lost (except wakeup on PCH). Total reboot.
G3 Mechanical off. All power (AC and battery) removed from system.

4.1.2 Processor Core/Package Idle States

Table 4-2. Processor Core/Package State Support
State Description
C0 Active mode, processor executing code.
C1 AutoHALT state.
C1E AutoHALT state with lowest frequency and voltage operating point.
C3 Execution cores in C3 flush their L1 instruction cache, L1 data cache, and L2 cache to the L3 shared cache. Clocks are shut off to each core.
C6 Execution cores in this state save their architectural state before removing core voltage.
C7 Execution cores in this state behave similarly to the C6 state. If all execution cores request C7, L3 cache ways are flushed until it is cleared.

4.1.3 Integrated Memory Controller States

Table 4-3. Integrated Memory Controller States
State Description
Power up  CKE asserted. Active mode.
Pre-charge Power-down  CKE de-asserted (not self-refresh) with all banks closed.
Active Power-Down  CKE de-asserted (not self-refresh) with minimum one bank active.
Self-Refresh  CKE de-asserted using device self-refresh.

4.1.4 PCIe Link States

Table 4-4. PCIe Link States
State Description
L0 Full on – Active transfer state.
L0s First Active Power Management low power state – Low exit latency.
L1 Lowest Active Power Management – Longer exit latency.
L3 Lowest power state (power-off) – Longest exit latency.

4.1.5 DMI States

Table 4-5. DMI States
State Description
L0 Full on – Active transfer state.
L0s First Active Power Management low power state – Low exit latency.
L1 Lowest Active Power Management – Longer exit latency.
L3 Lowest power state (power-off) – Longest exit latency.

4.1.6 Processor Graphics Controller States

Table 4-6. Processor Graphics Controller States
State Description
D0 Full on, display active.
D3 Cold Power-off.

4.1.7 Interface State Combinations

Table 4-7. G, S, and C State Combinations
Global (G) State | Sleep (S) State | Processor Package (C) State | Processor State | System Clocks | Description
G0 | S0 | C0 | Full On | On | Full On
G0 | S0 | C1/C1E | Auto-Halt | On | Auto-Halt
G0 | S0 | C3 | Deep Sleep | On | Deep Sleep
G0 | S0 | C6/C7 | Deep Power-down | On | Deep Power-down
G1 | S3 | N/A | Power off | Off, except RTC | Suspend to RAM
G1 | S4 | N/A | Power off | Off, except RTC | Suspend to Disk
G2 | S5 | N/A | Power off | Off, except RTC | Soft Off
G3 | NA | N/A | Power off | Power off | Hard off

Table 4-8. D, S, and C State Combination

Graphics Adapter (D) State | Sleep (S) State | Processor Package (C) State | Description
D0 | S0 | C0 | Full On, Displaying
D0 | S0 | C1/C1E | Auto-Halt, Displaying
D0 | S0 | C3 | Deep Sleep, Displaying
D0 | S0 | C6/C7 | Deep Power Down, Displaying
D3 | S0 | Any | Not displaying
D3 | S3 | N/A | Not displaying, Graphics Core is powered off
D3 | S4 | N/A | Not displaying, suspend to disk

4.2 Processor Core Power Management

While executing code, Enhanced Intel SpeedStep Technology optimizes the processor’s frequency and core voltage based on workload. Each frequency and voltage operating point is defined by ACPI as a P-state. When the processor is not executing code, it is idle. A low-power idle state is defined by ACPI as a C-state. In general, lower power C-states have longer entry and exit latencies.

4.2.1 Enhanced Intel® SpeedStep® Technology

The following are the key features of Enhanced Intel SpeedStep Technology:
• Multiple frequency and voltage points for optimal performance and power efficiency. These operating points are known as P-states.
• Frequency selection is software controlled by writing to processor MSRs; a minimal request sketch follows this list. The voltage is optimized based on the selected frequency and the number of active processor cores.
— If the target frequency is higher than the current frequency, VCC is ramped up
in steps to an optimized voltage. This voltage is signaled by the SVID bus to the voltage regulator. Once the voltage is established, the PLL locks on to the target frequency.
— If the target frequency is lower than the current frequency, the PLL locks to the
target frequency, then transitions to a lower voltage by signaling the target voltage on SVID bus.
— All active processor cores share the same frequency and voltage. In a multi-
core processor, the highest frequency P-state requested amongst all active cores is selected.
— Software-requested transitions are accepted at any time. If a previous
transition is in progress, the new transition is deferred until the previous transition is completed.
• The processor controls voltage ramp rates internally to ensure glitch-free transitions.
• Because there is low transition latency between P-states, a significant number of transitions per-second are possible.
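A minimal ring-0 sketch of the MSR-based P-state request mentioned in the list above, assuming hypothetical rdmsr64()/wrmsr64() helpers. The exact IA32_PERF_CTL field encoding is model-specific, so treat this as illustrative only.

#include <stdint.h>

extern uint64_t rdmsr64(uint32_t msr);                 /* hypothetical helpers */
extern void     wrmsr64(uint32_t msr, uint64_t value);

#define IA32_PERF_STATUS 0x198   /* currently granted performance state */
#define IA32_PERF_CTL    0x199   /* performance state request           */

static void request_pstate(uint16_t target)
{
    uint64_t ctl = rdmsr64(IA32_PERF_CTL);
    ctl = (ctl & ~0xFFFFull) | target;   /* bits 15:0 hold the requested state */
    wrmsr64(IA32_PERF_CTL, ctl);
}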

4.2.2 Low-Power Idle States

When the processor is idle, low-power idle states (C-states) are used to save power. More power savings actions are taken for numerically higher C-states. However, higher C-states have longer exit and entry latencies. Resolution of C-states occur at the thread, processor core, and processor package level. Thread-level C-states are available if Intel Hyper-Threading Technology is enabled.
Caution: Long term reliability cannot be assured unless all the Low Power Idle States are
enabled.
Figure 4-1. Idle Power Management Breakdown of the Processor Cores
(The figure shows the processor package state resolved from the core states, and each core state resolved from its two thread states. A thread requests C1/C1E with MWAIT(C1) or HLT, C3 with MWAIT(C3) or a P_LVL2 I/O read, C6 with MWAIT(C6) or a P_LVL3 I/O read, and C7 with MWAIT(C7) or a P_LVL4 I/O read.)

Entry and exit of the C-states at the thread and core level are shown in Figure 4-2.
Figure 4-2. Thread and Core C-State Entry and Exit
While individual threads can request low power C-states, power saving actions only take place once the core C-state is resolved. Core C-states are automatically resolved by the processor. For thread and core C-states, a transition to and from C0 is required before entering any other C-state.
Table 4-9. Coordination of Thread Power States at the Core Level
Resolved Processor Core C-State (rows: Thread 0 state; columns: Thread 1 state)

Thread 0 \ Thread 1 | C0 | C1 | C3 | C6 | C7
C0 | C0 | C0 | C0 | C0 | C0
C1 | C0 | C1¹ | C1¹ | C1¹ | C1¹
C3 | C0 | C1¹ | C3 | C3 | C3
C6 | C0 | C1¹ | C3 | C6 | C6
C7 | C0 | C1¹ | C3 | C6 | C7

Note:
1. If enabled, the core C-state will be C1E if all cores have resolved a core C1 state or higher.

4.2.3 Requesting Low-Power Idle States

The primary software interfaces for requesting low power idle states are through the MWAIT instruction with sub-state hints and the HLT instruction (for C1 and C1E). However, software may make C-state requests using the legacy method of I/O reads from the ACPI-defined processor clock control registers, referred to as P_LVLx. This method of requesting C-states provides legacy support for operating systems that initiate C-state transitions using I/O reads.
For legacy operating systems, P_LVLx I/O reads are converted within the processor to the equivalent MWAIT C-state request. Therefore, P_LVLx reads do not directly result in I/O reads to the system. The feature, known as I/O MWAIT redirection, must be enabled in the BIOS.
Note: The P_LVLx I/O Monitor address needs to be set up before using the P_LVLx I/O read
interface. Each P_LVLx is mapped to the supported MWAIT(Cx) instruction as shown in
Table 4-10.
Table 4-10. P_LVLx to MWAIT Conversion
P_LVLx MWAIT(Cx) Notes
P_LVL2 MWAIT(C3)
P_LVL3 MWAIT(C6) C6. No sub-states allowed.
P_LVL4 MWAIT(C7) C7. No sub-states allowed.
P_LVL5+ MWAIT(C7) C7. No sub-states allowed.
The BIOS can write to the C-state range field of the PMG_IO_CAPTURE MSR to restrict the range of I/O addresses that are trapped and emulate MWAIT like functionality. Any P_LVLx reads outside of this range does not cause an I/O redirection to MWAIT(Cx) like request. They fall through like a normal I/O instruction.
Note: When P_LVLx I/O instructions are used, MWAIT substates cannot be defined. The
MWAIT substate is always zero if I/O MWAIT redirection is used. By default, P_LVLx I/O redirections enable the MWAIT 'break on EFLAGS.IF’ feature that triggers a wakeup on an interrupt, even if interrupts are masked by EFLAGS.IF.

4.2.4 Core C-states

The following are general rules for all core C-states, unless specified otherwise:
• A core C-State is determined by the lowest numerical thread state (such as Thread 0 requests C1E while Thread 1 requests C3, resulting in a core C1E state). See
Table 4-7.
• A core transitions to C0 state when:
— An interrupt occurs — There is an access to the monitored address if the state was entered using an
MWAIT instruction
• For core C1/C1E, core C3, and core C6/C7, an interrupt directed toward a single thread wakes only that thread. However, since both threads are no longer at the same core C-state, the core resolves to C0.
• A system reset re-initializes all processor cores.
4.2.4.1 Core C0 State
The normal operating state of a core where code is being executed.
4.2.4.2 Core C1/C1E State
C1/C1E is a low power state entered when all threads within a core execute a HLT or MWAIT(C1/C1E) instruction.
A System Management Interrupt (SMI) handler returns execution to either Normal
state or the C1/C1E state. See the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 3A/3B: System Programming Guide, for more information.
While a core is in C1/C1E state, it processes bus snoops and snoops from other threads. For more information on C1E, see “Package C1/C1E”.
4.2.4.3 Core C3 State
Individual threads of a core can enter the C3 state by initiating a P_LVL2 I/O read to the P_BLK or an MWAIT(C3) instruction. A core in C3 state flushes the contents of its L1 instruction cache, L1 data cache, and L2 cache to the shared L3 cache, while maintaining its architectural state. All core clocks are stopped at this point. Because the core’s caches are flushed, the processor does not wake any core that is in the C3 state when either a snoop is detected or when another core accesses cacheable memory.
4.2.4.4 Core C6 State
Individual threads of a core can enter the C6 state by initiating a P_LVL3 I/O read or an MWAIT(C6) instruction. Before entering core C6, the core will save its architectural state to a dedicated SRAM. Once complete, a core will have its voltage reduced to zero volts. During exit, the core is powered on and its architectural state is restored.
4.2.4.5 Core C7 State
Individual threads of a core can enter the C7 state by initiating a P_LVL4 I/O read to the P_BLK or by an MWAIT(C7) instruction. The core C7 state exhibits the same behavior as the core C6 state unless the core is the last one in the package to enter the C7 state. If it is, that core is responsible for flushing L3 cache ways. The processor supports the C7s substate. When an MWAIT(C7) command is issued with a C7s sub-state hint, the entire L3 cache is flushed in one step as opposed to flushing the L3 cache in multiple steps.
4.2.4.6 C-State Auto-Demotion
In general, deeper C-states such as C6 or C7 have long latencies and higher energy entry/exit costs. The resulting performance and energy penalties become significant when the entry/exit frequency of a deeper C-state is high. Therefore, incorrect or inefficient use of deeper C-states has a negative impact on idle battery life. To increase deeper C-state residency and improve idle battery life, the processor supports C-state auto-demotion.
There are two C-State auto-demotion options:
• C7/C6 to C3
• C7/C6/C3 to C1
The decision to demote a core from C6/C7 to C3, or from C3/C6/C7 to C1, is based on each core's immediate residency history. Upon each core C6/C7 request, the core C-state is demoted to C3 or C1 until a sufficient amount of residency has been established. At that point, a core is allowed to go into C3, C6, or C7. Both options can be enabled concurrently or individually.
This feature is disabled by default. BIOS must enable it in the PMG_CST_CONFIG_CONTROL register. The auto-demotion policy is also configured by this register.
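As a rough sketch only (not a sequence given in this datasheet), ring-0 or BIOS code could enable both auto-demotion options by setting enable bits in PMG_CST_CONFIG_CONTROL. The MSR address (0E2h) and the bit positions shown below are assumptions that must be confirmed against the BIOS writer's guide for this processor.

    #include <stdint.h>

    static inline uint64_t rdmsr(uint32_t msr)
    {
        uint32_t lo, hi;
        __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
    }

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        __asm__ volatile("wrmsr" : : "c"(msr),
                         "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
    }

    #define MSR_PMG_CST_CONFIG_CONTROL 0xE2u   /* assumed MSR address */
    #define C3_AUTODEMOTE_EN (1ull << 25)      /* assumed bit: demote C6/C7 requests to C3 */
    #define C1_AUTODEMOTE_EN (1ull << 26)      /* assumed bit: demote C3/C6/C7 requests to C1 */

    /* Ring-0 only: turn on both auto-demotion options. */
    static void enable_cstate_auto_demotion(void)
    {
        uint64_t v = rdmsr(MSR_PMG_CST_CONFIG_CONTROL);
        wrmsr(MSR_PMG_CST_CONFIG_CONTROL, v | C3_AUTODEMOTE_EN | C1_AUTODEMOTE_EN);
    }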

4.2.5 Package C-States

The processor supports C0, C1/C1E, C3, C6, and C7 power states. The following is a summary of the general rules for package C-state entry. These apply to all package C-states unless specified otherwise:
• A package C-state request is determined by the lowest numerical core C-state amongst all cores.
• A package C-state is automatically resolved by the processor depending on the core idle power states and the status of the platform components.
— Each core can be at a lower idle power state than the package if the platform does not grant the processor permission to enter a requested package C-state.
— The platform may allow additional power savings to be realized in the processor.
— For package C-states, the processor is not required to enter C0 before entering any other C-state.
The processor exits a package C-state when a break event is detected. Depending on the type of break event, the processor does the following:
• If a core break event is received, the target core is activated and the break event message is forwarded to the target core.
— If the break event is not masked, the target core enters the core C0 state and the processor enters package C0.
• If the break event was due to a memory access or snoop request:
— If the platform did not request to keep the processor in a higher power package C-state, the package returns to its previous C-state.
— If the platform requests a higher power C-state, the memory access or snoop request is serviced and the package remains in the higher power C-state.
Table 4-11 shows package C-state resolution for a dual-core processor. Figure 4-3 summarizes package C-state transitions.

Table 4-11. Coordination of Core Power States at the Package Level

Package C-state resolved from the Core 0 state (rows) and the Core 1 state (columns):

             Core 1
Core 0       C0      C1       C3       C6       C7
C0           C0      C0       C0       C0       C0
C1           C0      C1 (1)   C1 (1)   C1 (1)   C1 (1)
C3           C0      C1 (1)   C3       C3       C3
C6           C0      C1 (1)   C3       C6       C6
C7           C0      C1 (1)   C3       C6       C7

Note: 1. If enabled, the package C-state will be C1E if all cores have resolved a core C1 state or higher.

Figure 4-3. Package C-State Entry and Exit (state diagram showing transitions among the package C0, C1, C3, C6, and C7 states)
4.2.5.1 Package C0
This is the normal operating state for the processor. The processor remains in the normal state when at least one of its cores is in the C0 or C1 state or when the platform has not granted permission to the processor to go into a low power state. Individual cores may be in lower power idle states while the package is in C0.
4.2.5.2 Package C1/C1E
No additional power reduction actions are taken in the package C1 state. However, if the C1E sub-state is enabled, the processor automatically transitions to the lowest supported core clock frequency, followed by a reduction in voltage.
The package enters the C1 low power state when:
• At least one core is in the C1 state.
• The other cores are in a C1 or lower power state.
The package enters the C1E state when:
• All cores have directly requested C1E using MWAIT(C1) with a C1E sub-state hint.
• All cores are in a power state lower than C1/C1E but the package low power state is limited to C1/C1E using the PMG_CST_CONFIG_CONTROL MSR.
• All cores have requested C1 using HLT or MWAIT(C1) and C1E auto-promotion is enabled in IA32_MISC_ENABLES.
No notification to the system occurs upon entry to C1/C1E.
4.2.5.3 Package C3 State
A processor enters the package C3 low power state when:
• At least one core is in the C3 state.
• The other cores are in a C3 or lower power state, and the processor has been granted permission by the platform.
• The platform has not granted a request to a package C6/C7 state but has allowed a package C3 state.
In package C3-state, the L3 shared cache is valid.
4.2.5.4 Package C6 State
A processor enters the package C6 low power state when:
• At least one core is in the C6 state.
• The other cores are in a C6 or lower power state, and the processor has been granted permission by the platform.
• The platform has not granted a package C7 request but has allowed a C6 package state.
In package C6 state, all cores have saved their architectural state and have had their core voltages reduced to zero volts. The L3 shared cache is still powered and snoopable in this state. The processor remains in package C6 state as long as any part of the L3 cache is active.
4.2.5.5 Package C7 State
The processor enters the package C7 low power state when all cores are in the C7 state and the L3 cache is completely flushed. The last core to enter the C7 state begins to shrink the L3 cache by N-ways until the entire L3 cache has been emptied. This allows further power savings.
Core break events are handled the same way as in package C3 or C6. However, snoops are not sent to the processor in package C7 state because the platform, by granting the package C7 state, has acknowledged that the processor possesses no snoopable information. This allows the processor to remain in this low power state and maximize its power savings.
Upon exit of the package C7 state, the L3 cache is not immediately re-enabled. It is re-enabled once the processor has stayed out of C6 or C7 for a preset amount of time. Power is saved since this prevents the L3 cache from being re-populated only to be immediately flushed again.
4.2.5.6 Dynamic L3 Cache Sizing
Upon entry into the package C7 state, the L3 cache is reduced by N-ways until it is completely flushed. The number of ways, N, is dynamically chosen per concurrent C7 entry. Similarly, upon exit, the L3 cache is gradually expanded based on internal heuristics.

4.3 IMC Power Management

The main memory is power managed during normal operation and in low-power ACPI Cx states.

4.3.1 Disabling Unused System Memory Outputs

Any system memory (SM) interface signal that goes to a memory module connector but is not connected to any actual memory devices (for example, when the SO-DIMM connector is unpopulated or the SO-DIMM is single-sided) is tri-stated. The benefits of disabling unused SM signals are:
• Reduced power consumption.
• Reduced possible overshoot/undershoot signal quality issues seen by the processor I/O buffer receivers caused by reflections from potentially un-terminated transmission lines.
When a given rank is not populated, the corresponding chip select and CKE signals are not driven.
At reset, all rows must be assumed to be populated, until it can be proven that they are not populated. This is due to the fact that when CKE is tristated with an SO-DIMM present, the SO-DIMM is not ensured to maintain data integrity.
SCKE tri-state should be enabled by BIOS where appropriate, since at reset all rows must be assumed to be populated.

4.3.2 DRAM Power Management and Initialization

The processor implements extensive support for power management on the SDRAM interface. There are four SDRAM operations associated with the Clock Enable (CKE) signals that the SDRAM controller supports. The processor drives four CKE pins to perform these operations.
CKE is one of the power-saving mechanisms. When CKE is off, the internal DDR clock is disabled and the DDR power is reduced. The power saving differs according to the selected mode and the DDR type used. For more information, refer to the IDD table in the DDR specification.
The DDR specification defines three levels of power-down that differ in power savings and wakeup time:
1. Active power-down (APD): This mode is entered if there are open pages when CKE is de-asserted. In this mode the open pages are retained. Power saving in this mode is the lowest. Power consumption of the DDR is defined by IDD3P. Exit from this mode is defined by tXP – a small number of cycles.
2. Precharged power-down (PPD): This mode is entered if all banks in the DDR are precharged when CKE is de-asserted. Power saving in this mode is intermediate – better than APD, but less than DLL-off. Power consumption is defined by IDD2P1. Exit from this mode is defined by tXP. The difference from APD mode is that, on wakeup, all page buffers are empty.
3. DLL-off: In this mode the data-in DLLs on the DDR are off. Power saving in this mode is the best among all power modes. Power consumption is defined by IDD2P1. Exit from this mode is defined by tXP, but also by tXPDLL (10–20 cycles, according to DDR type) until the first data transfer is allowed.
The processor supports five different types of power-down. The different modes are the power-down modes supported by DDR3 and combinations of these. The type of CKE power-down is defined by the configuration. The options are:
1. No power-down
2. APD: The rank enters power-down as soon as the idle timer expires, regardless of bank status.
3. PPD: When the idle timer expires, the MC sends PRE-all to the rank and then enters power-down.
4. DLL-off: Same as PPD, but the DDR is configured to DLL-off.
5. APD, change to PPD (APD-PPD): Begins as APD; when all page-close timers of the rank have expired, it wakes the rank, issues PRE-all, and returns to PPD power-down.
6. APD, change to DLL-off (APD_DLLoff): Begins as APD; when all page-close timers of the rank have expired, it wakes the rank, issues PRE-all, and returns to DLL-off power-down.
CKE de-assertion is determined per rank, based on the rank being inactive. Each rank has an idle counter. The idle counter starts counting as soon as the rank has no accesses; if it expires, the rank may enter power-down while no new transactions to the rank arrive in the queues. Note that the idle counter begins counting from the arrival of the last incoming transaction.
Because the power-down decision is made per rank, the MC can find many opportunities to power down ranks even while running memory-intensive applications, and the savings are significant (possibly a few watts, according to the DDR specification). The savings are larger when each channel is populated with more ranks.
Selection of power modes should be made according to the power-performance or thermal trade-offs of a given system:
• When trying to achieve maximum performance, and power or thermal considerations are not an issue: use no power-down.
• In a system that tries to minimize power consumption, try to use the deepest power-down mode possible – DLL-off or APD_DLLoff.
• In high-performance systems with dense packaging (that is, a complex thermal design), the power-down mode should be considered in order to reduce heating and avoid DDR throttling caused by the heating.
Control of the power mode through CRB-BIOS: the BIOS selects no power-down by default. There are knobs to change the selected power-down mode.
Another control is the idle timer expiration count. This is set through PM_PDWN_config bits 7:0 (MCHBAR + 4CB0h). As this timer is set to a shorter time, the MC has more opportunities to put the DDR in power-down. The minimum recommended value for this register is 15. There is no BIOS hook to set this register; customers who choose to change the value of this register can do so by changing the BIOS. For experiments, this register can be modified in real time if the BIOS did not lock the MC registers.
Note that in APD, APD-PPD, and APD-DLLoff modes there is no point in setting the idle counter in the same range as the page-close idle timer.
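A minimal sketch of how the idle-timer field described above might be programmed from ring-0 code, assuming MCHBAR has already been located through the host-bridge configuration space and mapped as uncacheable memory; the register offset (4CB0h) and field width come from the paragraph above, while the helper and its clamping policy are hypothetical.

    #include <stdint.h>

    #define PM_PDWN_CONFIG_OFFSET   0x4CB0u   /* MCHBAR-relative offset (from the text above) */
    #define PM_PDWN_IDLE_TIMER_MASK 0xFFu     /* bits 7:0 = idle-counter expiration count */
    #define PM_PDWN_IDLE_TIMER_MIN  15u       /* minimum recommended value */

    /* mchbar must point at an uncacheable mapping of the MCHBAR region (ring 0).
     * The write only takes effect if the BIOS has not locked the MC registers. */
    static void set_dram_powerdown_idle_timer(volatile uint32_t *mchbar, uint32_t count)
    {
        volatile uint32_t *reg = mchbar + (PM_PDWN_CONFIG_OFFSET / sizeof(uint32_t));
        uint32_t v = *reg;

        if (count < PM_PDWN_IDLE_TIMER_MIN)
            count = PM_PDWN_IDLE_TIMER_MIN;   /* clamp to the recommended minimum */

        *reg = (v & ~PM_PDWN_IDLE_TIMER_MASK) | (count & PM_PDWN_IDLE_TIMER_MASK);
    }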
Another option associated with CKE power-down is S_DLL-off. When this option is enabled, the SBR I/O slave DLLs are turned off when all channel ranks are in power-down. (Do not confuse this with the DLL-off mode, in which the DDR DLLs are off.) This mode requires the I/O slave DLL wakeup time to be defined.
4.3.2.1 Initialization Role of CKE
During power-up, CKE is the only input to the SDRAM that has its level recognized (other than the DDR3 reset pin) once power is applied. It must be driven LOW by the DDR controller to make sure the SDRAM components float DQ and DQS during power-up. CKE signals remain LOW (while any reset is active) until the BIOS writes to a configuration register. Using this method, CKE is ensured to remain inactive for much longer than the specified 200 microseconds after power and clocks to the SDRAM devices are stable.
4.3.2.2 Conditional Self-Refresh
Intel Rapid Memory Power Management (Intel RMPM) conditionally places memory into self-refresh in the package C3, C6, and C7 low-power states. RMPM functionality depends on the graphics/display state (relevant only when processor graphics is being used), as well as memory traffic patterns generated by other connected I/O devices. The target behavior is to enter self-refresh as long as there are no memory requests to service.
When entering the S3 – Suspend-to-RAM (STR) state or S0 conditional self-refresh, the processor core flushes pending cycles and then places all SDRAM ranks into self-refresh. The CKE signals remain LOW so the SDRAM devices perform self-refresh.
Table 4-12. Targeted Memory State Conditions

Mode          Memory State with Processor Graphics            Memory State with External Graphics
C0, C1, C1E   Dynamic memory rank power down based on         Dynamic memory rank power down based on
              idle conditions.                                idle conditions.
C3, C6, C7    If the Processor Graphics engine is idle        If there are no memory requests, then
              and there are no pending display requests,      enter self-refresh. Otherwise, use
              then enter self-refresh. Otherwise, use         dynamic memory rank power down based on
              dynamic memory rank power down based on         idle conditions.
              idle conditions.
S3            Self-Refresh Mode.                              Self-Refresh Mode.
S4            Memory power down (contents lost).              Memory power down (contents lost).

4.3.2.3 Dynamic Power-down Operation

Dynamic power-down of memory is employed during normal operation. Based on idle conditions, a given memory rank may be powered down. The IMC implements aggressive CKE control to dynamically put the DRAM devices in a power-down state.

The processor core controller can be configured to put the devices in active power-down (CKE de-assertion with open pages) or precharge power-down (CKE de-assertion with all pages closed). Precharge power-down provides greater power savings but has a bigger performance impact, since all pages will first be closed before putting the devices in power-down mode.
If dynamic power-down is enabled, all ranks are powered up before doing a refresh cycle and all ranks are powered down at the end of refresh.
4.3.2.4 DRAM I/O Power Management
Unused signals should be disabled to save power and reduce electromagnetic interference. This includes all signals associated with an unused memory channel. Clocks can be controlled on a per SO-DIMM basis. Exceptions are made for per SO-DIMM control signals such as CS#, CKE, and ODT for unpopulated SO-DIMM slots.
The I/O buffer for an unused signal should be tri-stated (output driver disabled), the input receiver (differential sense-amp) should be disabled, and any DLL circuitry related ONLY to unused signals should be disabled. The input path must be gated to prevent spurious results due to noise on the unused signals (typically handled automatically when input receiver is disabled).

4.4 PCIe* Power Management

• Active power management support using L0s, and L1 states.
• All inputs and outputs disabled in L2/L3 Ready state.

4.5 DMI Power Management

• Active power management support using L0s/L1 state.

4.6 Graphics Power Management

4.6.1 Intel® Rapid Memory Power Management (RMPM) (also known as CxSR)

The Intel® Rapid Memory Power Management puts rows of memory into self-refresh mode during C3/C6/C7 to allow the system to remain in the lower power states longer. Mobile processors routinely save power during runtime conditions by entering the C3, C6, or C7 state. Intel® RMPM is an indirect method of power saving that can have a significant effect on the system as a whole.

4.6.2 Intel® Graphics Performance Modulation Technology (GPMT)

Intel® Graphics Performance Modulation Technology (Intel GPMT) is a method for saving power in the graphics adapter while continuing to display and process data in the adapter. This method switches the render frequency and/or render voltage dynamically between higher and lower power states supported on the platform, based on render engine workload. When the system is running in battery mode, and the end user launches applications such as 3D or video, the graphics software may switch the render frequency dynamically between higher and lower power/performance states depending on the render engine workload.

In products where Intel® Graphics Dynamic Frequency (also known as Turbo Boost Technology) is supported and enabled, the functionality of Intel® GPMT will be maintained by Intel® Graphics Dynamic Frequency.

4.6.3 Graphics Render C-State

Render C-State (RC6) is a technique designed to optimize the average power to the graphics render engine during times of idleness of the render engine. Render C-state is entered when the graphics render engine, blitter engine, and video engine have no workload currently being worked on and no outstanding graphics memory transactions. When the idleness condition is met, the Processor Graphics will program the VR into a low voltage state (0 to ~0.4 V) through the SVID bus.

4.6.4 Intel® Smart 2D Display Technology (Intel® S2DDT)

Intel S2DDT reduces display refresh memory traffic by reducing the memory reads required for display refresh. Power consumption is reduced by fewer accesses to the IMC. S2DDT is only enabled in single-pipe mode.
Intel S2DDT is most effective with:
• Display images well suited to compression, such as text windows, slide shows, and so on. Poor examples are 3D games.
• Static screens such as screens with significant portions of the background showing 2D applications, processor benchmarks, and so on, or conditions when the processor is idle. Poor examples are full-screen 3D games and benchmarks that flip the display image at or near display refresh rates.

4.6.5 Intel® Graphics Dynamic Frequency

Intel® Graphics Dynamic Frequency Technology is the ability of the processor and graphics cores to opportunistically increase frequency and/or voltage above the ensured processor and graphics frequency for the given part. Intel® Graphics Dynamic Frequency Technology is a performance feature that makes use of unused package power and thermals to increase application performance. The increase in frequency is determined by how much power and thermal budget is available in the package, and the application demand for additional processor or graphics performance. The processor core control is maintained by an embedded controller. The graphics driver dynamically adjusts between P-states to maintain optimal performance, power, and thermals. The graphics driver will always place the graphics engine in its lowest possible P-state, thereby acting in the same capacity as Intel® GPMT.

4.6.6 Display Power Savings Technology 6.0 (DPST)

This is a mobile-only power management feature.
The Intel® DPST technique achieves backlight power savings while maintaining a good visual experience. This is accomplished by adaptively enhancing the displayed image while decreasing the backlight brightness simultaneously. The goal of this technique is to provide equivalent end-user-perceived image quality at a decreased backlight power level.
1. The original (input) image produced by the operating system or application is analyzed by the Intel® DPST subsystem. An interrupt to Intel® DPST software is generated whenever a meaningful change in the image attributes is detected. (A meaningful change is when the Intel® DPST software algorithm determines that enough brightness, contrast, or color change has occurred to the displaying images that the image enhancement and backlight control needs to be altered.)
2. Intel® DPST subsystem applies an image-specific enhancement to increase image contrast, brightness, and other attributes.
3. A corresponding decrease to the backlight brightness is applied simultaneously to produce an image with similar user-perceived quality (such as brightness) as the original image.
Intel® DPST 5.0 has improved software algorithms and minor hardware changes to better handle backlight phase-in, and ensures a documented and validated method to interrupt hardware phase-in.

4.6.7 Automatic Display Brightness (ADB)

This is a mobile-only power management feature.
The Intel® Automatic Display Brightness feature dynamically adjusts the backlight brightness based upon the current ambient light environment. This feature requires an additional sensor to be on the panel front. The sensor receives the changing ambient light conditions and sends interrupts to the Intel Graphics driver. As per the change in Lux (the current ambient light illuminance), the new backlight setting can be adjusted through BLC (see Section 11). The converse applies for a brightly lit environment: Intel® Automatic Display Brightness increases the backlight setting.

4.6.8 Seamless Display Refresh Rate Switching Technology (SDRRST)

This is a mobile-only power management feature.
When a Local Flat Panel (LFP) supports multiple refresh rates, the Intel® Display Refresh Rate Switching power conservation feature can be enabled. The higher refresh rate will be used on plugged-in power or when the end user has not selected/enabled this feature. The graphics software will automatically switch to a lower refresh rate for maximum battery life when the notebook is on battery power and the user has selected/enabled this feature.

There are two distinct implementations of Intel® DRRS—static and seamless. The static Intel® DRRS method uses a mode change to assign the new refresh rate. The seamless Intel® DRRS method is able to accomplish the refresh rate assignment without a mode change and therefore does not experience some of the visual artifacts associated with the mode change (SetMode) method.

4.7 Thermal Power Management

See Section 4.6 for all graphics thermal power management-related features.
§ §

5 Thermal Management

The thermal solution provides both the component-level and the system-level thermal management. To allow for the optimal operation and long-term reliability of Intel processor-based systems, the system/processor thermal solution should be designed so that the processor:
• Remains below the maximum junction temperature (Tj,max) specification at the maximum thermal design power (TDP).
• Conforms to system constraints, such as system acoustics, system skin temperatures, and exhaust-temperature requirements.
Caution: Thermal specifications given in this chapter are on the component and package level and apply specifically to the 2nd Generation Intel® Core™ processor family mobile. Operating the processor outside the specified limits may result in permanent damage to the processor and potentially other components in the system.

5.1 Thermal Design Power (TDP) and Junction Temperature (Tj)

The processor TDP is the maximum sustained power that should be used for design of the processor thermal solution. TDP represents an expected maximum sustained power from realistic applications. TDP may be exceeded for short periods of time or if running a “power virus” workload. Due to Intel Turbo Boost Technology, applications are expected to run closer to TDP more often as the processor attempts to take advantage of available headroom in the platform to maximize performance.
The processor may also exceed the TDP for short durations after a period of lower power operation due to its turbo feature. This feature is intended to take advantage of available thermal capacitance in the thermal solution for momentary high power operation. The duration and time of such operation can be limited by platform runtime configurable registers within the processor.
The processor integrates multiple processor and graphics cores on a single die. This may result in differences in the power distribution across the die and must be considered when designing the thermal solution.

5.2 Thermal Considerations

Intel Turbo Boost Technology allows processor cores and Processor Graphics cores to run faster than the baseline frequency. During a turbo event, the processor can exceed its TDP power for brief periods. Turbo is invoked opportunistically and automatically as long as the processor is conforming to its temperature, power delivery, and current specification limits. Thus, thermal solutions and platform cooling that are designed to be less than thermal design guidance may experience thermal and performance issues since more applications will tend to run at or near the maximum power limit for significant periods of time.

5.2.1 Intel® Turbo Boost Technology Power Control and Reporting

When operating in the turbo mode, the processor will monitor its own power and adjust the turbo frequency to maintain the average power within limits over a thermally significant time period. The package, processor core, and graphics core powers are estimated using architectural counters and do not rely on any input from the platform.
The behavior of turbo is dictated by the following controls that are accessible using MSR, MMIO, or PECI interfaces:
POWER_LIMIT_1: TURBO_POWER_LIMIT, MSR 610h, bits 14:0. This value sets the exponentially weighted moving average power limit over a long time period. This is normally aligned to the TDP of the part and the steady-state cooling capability of the thermal solution. This limit may be set lower than TDP, in real time, for specific needs, such as responding to a thermal event. If set lower than TDP, the processor may not be able to honor this limit for all workloads since this control only applies in the turbo frequency range; a very high powered application may exceed POWER_LIMIT_1 even at non-turbo frequencies. The default value is the TDP for the SKU.

POWER_LIMIT_1_TIME: TURBO_POWER_LIMIT, MSR 610h, bits 23:17. This value is a time parameter that adjusts the algorithm behavior. The exponentially weighted moving average turbo algorithm uses this parameter to maintain time-averaged power at or below POWER_LIMIT_1. The default value is 1 second; however, 28 seconds is recommended for most mobile applications.

POWER_LIMIT_2: TURBO_POWER_LIMIT, MSR 610h, bits 46:32. This value establishes the upper power limit of turbo operation above TDP, primarily for platform power supply considerations. Power may exceed this limit for up to 10 ms. The default for this limit is 1.25 x TDP.
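For illustration only, the sketch below packs the three fields described above into a 64-bit TURBO_POWER_LIMIT value. The field positions are those quoted in the text; the enable/lock bits and the conversion of watts and seconds into the hardware's native units are deliberately left to the caller because they are defined elsewhere and are not quoted here.

    #include <stdint.h>

    /* Field positions of TURBO_POWER_LIMIT (MSR 610h) as described above. */
    #define PL1_SHIFT       0    /* bits 14:0  - 'long duration' power limit  */
    #define PL1_TIME_SHIFT  17   /* bits 23:17 - 'long duration' time window  */
    #define PL2_SHIFT       32   /* bits 46:32 - 'short duration' power limit */

    /* pl1_raw, time_raw, and pl2_raw must already be encoded in the hardware's
     * native power/time units (unit definitions are outside this sketch). */
    static uint64_t make_turbo_power_limit(uint32_t pl1_raw, uint32_t time_raw,
                                           uint32_t pl2_raw)
    {
        uint64_t v = 0;
        v |= ((uint64_t)(pl1_raw  & 0x7FFFu)) << PL1_SHIFT;
        v |= ((uint64_t)(time_raw & 0x7Fu))   << PL1_TIME_SHIFT;
        v |= ((uint64_t)(pl2_raw  & 0x7FFFu)) << PL2_SHIFT;
        return v;   /* written to MSR 610h with WRMSR from ring 0 (enable bits not shown) */
    }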
The following considerations and limitations apply to the power monitoring feature:
• Calibration applies to the processor family and is not conducted on a part-by-part basis. Therefore, some difference between actual and reported power may be observed.
• Power monitoring is calibrated with a variety of common, realistic workloads near Tj,max. Workloads with power characteristics markedly different from those used during the calibration process, or lower temperatures, may result in increased differences between actual and estimated power.
• In the event an uncharacterized workload or power “virus” application were to result in exceeding programmed power limits, the processor Thermal Control Circuitry (TCC) will protect the processor when properly enabled. Adaptive Thermal Monitor must be enabled for the processor to remain within specification.
Illustration of Intel Turbo Boost Technology power control is shown in the following sections and figures. Multiple controls operate simultaneously allowing for customization for multiple system thermal and power limitations. These controls allow for turbo optimizations within system constraints.

5.2.2 Package Power Control

The package power control allows for customization to implement optimal turbo within platform power delivery and package thermal solution limitations.
Figure 5-1. Package Power Control (figure showing the system thermal response time and the turbo algorithm response time)

5.2.3 Power Plane Control

The processor core and graphics core power plane controls allow for customization to implement optimal turbo within voltage regulator thermal limitations. It is possible to use these power plane controls to protect the voltage regulator from overheating due to extended high currents. Power limiting per plane cannot be ensured in all usages. This function is similar to the package level long duration window control.

5.2.4 Turbo Time Parameter

'Turbo Time Parameter' is a mathematical parameter (units in seconds) that controls the processor turbo algorithm using an exponentially weighted moving average of energy usage. During a maximum power turbo event of about 1.25 x TDP, the processor could sustain Power_Limit_2 for up to approximately 1.5 times the Turbo Time Parameter. If the power value and/or 'Turbo Time Parameter' is changed during runtime, it may take a period of time (possibly up to approximately 3 to 5 times the 'Turbo Time Parameter', depending on the magnitude of the change and other factors) for the algorithm to settle at the new control limits.
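The sketch below is a conceptual model of the exponentially weighted moving average described above, not the processor's actual algorithm; the sampling interval, the settling behavior, and the comparison policy are assumptions made for illustration.

    #include <math.h>

    /* Conceptual EWMA of package power. tau_s plays the role of the 'Turbo Time
     * Parameter'; power_limit_1_w plays the role of POWER_LIMIT_1. */
    typedef struct {
        double avg_power_w;   /* exponentially weighted moving average */
        double tau_s;         /* averaging time constant, in seconds   */
    } power_ewma_t;

    static void power_ewma_update(power_ewma_t *e, double sample_w, double dt_s)
    {
        double alpha = 1.0 - exp(-dt_s / e->tau_s);   /* weight of the newest sample */
        e->avg_power_w += alpha * (sample_w - e->avg_power_w);
    }

    /* Turbo above the long-duration limit is only sustainable while the running
     * average stays at or below POWER_LIMIT_1. */
    static int turbo_budget_available(const power_ewma_t *e, double power_limit_1_w)
    {
        return e->avg_power_w <= power_limit_1_w;
    }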

5.3 Thermal and Power Specifications

The following notes apply to Table 5-1, Table 5-2, Table 5-3, and Table 5-4.
Notes   Description
1       The TDPs given are not the maximum power the processor can generate. Analysis indicates that real applications are unlikely to cause the processor to consume the theoretical maximum power dissipation for sustained periods of time.
2       TDP workload may consist of a combination of CPU-core intensive and graphics-core intensive applications.
3       The thermal solution needs to ensure that the processor temperature does not exceed the maximum junction temperature (Tj,max) limit, as measured by the DTS and the critical temperature bit.
4       The processor junction temperature is monitored by Digital Temperature Sensors (DTS). For DTS accuracy, refer to Section 5.4.1.2.1.
5       Digital Thermal Sensor (DTS) based fan speed control is required to achieve optimal thermal performance. Intel recommends full cooling capability well before the DTS reading reaches Tj,max. An example of this is Tj,max – 10 ºC.
6       The idle power specifications are not 100% tested. These power specifications are determined by characterization at higher temperatures and extrapolating the values for the junction temperature indicated.
7       At Tj of Tj,max
8       At Tj of 50 ºC
9       At Tj of 35 ºC
10      Can be modified at runtime by MSR writes, with MMIO, and with PECI commands
11      'Turbo Time Parameter' is a mathematical parameter (unit in seconds) that controls the processor turbo algorithm using a moving average of energy usage. Avoid setting the Turbo Time Parameter to a value less than 0.1 seconds. Refer to Section 5.2.4 for further information.
12      Shown limit is a time-averaged power, based upon the Turbo Time Parameter. Absolute product power may exceed the set limits for short durations or under virus or uncharacterized workloads.
13      Processor will be controlled to the specified power limit as described in Section 5.2.1. If the power value and/or 'Turbo Time Parameter' is changed during runtime, it may take a short period of time (approximately 3 to 5 times the 'Turbo Time Parameter') for the algorithm to settle at the new control limits.
14      This is a hardware default setting and not a behavioral characteristic of the part.
15      For controllable turbo workloads, the limit may be exceeded for up to 10 ms.
16      Tj,max may vary between processor SKUs.
Table 5-1. TDP Specifications

Segment               State   CPU Core Frequency       Processor Graphics Core Frequency   Thermal Design Power   Units   Notes
Extreme Edition (XE)  HFM     2.5 GHz up to 3.5 GHz    650 MHz up to 1300 MHz              55                     W       1, 2, 7
                      LFM     800 MHz                  650 MHz up to 1300 MHz              36                     W       1, 2, 7
Quad Core SV          HFM     2.2 GHz up to 3.4 GHz    650 MHz up to 1300 MHz              45                     W       1, 2, 7
                      LFM     800 MHz                  650 MHz up to 1300 MHz              33                     W       1, 2, 7
Dual Core SV          HFM     2.5 GHz up to 3.4 GHz    650 MHz up to 1300 MHz              35                     W       1, 2, 7
                      LFM     800 MHz                  650 MHz up to 1300 MHz              26                     W       1, 2, 7
Low Voltage           HFM     2.1 GHz up to 3.2 GHz    500 MHz up to 1100 MHz              25                     W       1, 2, 7
                      LFM     800 MHz                  500 MHz up to 1100 MHz              12                     W       1, 2, 7
Ultra Low Voltage     HFM     1.4 GHz up to 2.7 GHz    350 MHz up to 1000 MHz              17                     W       1, 2, 7
                      LFM     800 MHz                  350 MHz up to 1000 MHz              10                     W       1, 2, 7

Table 5-2. Junction Temperature Specification

Segment               Symbol   Parameter                     Min   Max   Units   Notes
Extreme Edition (XE)  Tj       Junction temperature limit    0     100   °C      3, 4, 5, 16
Quad Core SV          Tj       Junction temperature limit    0     100   °C      3, 4, 5, 16
Dual Core SV          Tj       Junction temperature limit    0     100   °C      3, 4, 5, 16
Low Voltage           Tj       Junction temperature limit    0     100   °C      3, 4, 5, 16
Ultra Low Voltage     Tj       Junction temperature limit    0     100   °C      3, 4, 5, 16
Table 5-3. Package Turbo Parameters

The 'Turbo Time Parameter (package)' is the processor turbo long duration time window (POWER_LIMIT_1_TIME in TURBO_POWER_LIMIT MSR 0610h, bits [23:17]); 'Long P (package)' is the 'long duration' turbo power limit (POWER_LIMIT_1, bits [14:0]); 'Short P (package)' is the 'short duration' turbo power limit (POWER_LIMIT_2, bits [46:32]).

Segment               Symbol                            Min     H/W Default   Max   Units   Notes
Extreme Edition (XE)  Turbo Time Parameter (package)    N/A     1             N/A   s       10, 11, 14
                      Long P (package)                  N/A     55            N/A   W       10, 12, 13, 14
                      Short P (package)                 N/A     1.25 x 55     N/A   W       10, 14, 15
Quad Core SV          Turbo Time Parameter (package)    0.001   1             64    s       10, 11, 14
                      Long P (package)                  40      45            48    W       10, 12, 13, 14
                      Short P (package)                 40      1.25 x 45     60    W       10, 14, 15
Dual Core SV          Turbo Time Parameter (package)    0.001   1             64    s       10, 11, 14
                      Long P (package)                  28      35            36    W       10, 12, 13, 14
                      Short P (package)                 28      1.25 x 35     44    W       10, 14, 15
Low Voltage           Turbo Time Parameter (package)    0.001   1             32    s       10, 11, 14
                      Long P (package)                  24      25            28    W       10, 12, 13, 14
                      Short P (package)                 24      1.25 x 25     36    W       10, 14, 15
Ultra Low Voltage     Turbo Time Parameter (package)    0.001   1             32    s       10, 11, 14
                      Long P (package)                  16      17            20    W       10, 12, 13, 14
                      Short P (package)                 16      1.25 x 17     24    W       10, 14, 15

Table 5-4. Idle Power Specifications

Segment               Symbol   Idle Parameter                          Min   Typ   Max    Units   Notes
Extreme Edition (XE)  PC1E     Idle power in the Package C1E state     —     —     12.5   W       6, 8
                      PC6      Idle power in the Package C6 state      —     —     4      W       6, 9
                      PC7      Idle power in the Package C7 state      —     —     3.85   W       6, 9
Quad Core SV          PC1E     Idle power in the Package C1E state     —     —     11     W       6, 8
                      PC6      Idle power in the Package C6 state      —     —     3.9    W       6, 9
                      PC7      Idle power in the Package C7 state      —     —     3.8    W       6, 9
Dual Core SV          PC1E     Idle power in the Package C1E state     —     —     8.8    W       6, 8
                      PC6      Idle power in the Package C6 state      —     —     3.1    W       6, 9
                      PC7      Idle power in the Package C7 state      —     —     2.95   W       6, 9
Low Voltage           PC1E     Idle power in the Package C1E state     —     —     6.4    W       6, 8
                      PC6      Idle power in the Package C6 state      —     —     2.5    W       6, 9
                      PC7      Idle power in the Package C7 state      —     —     2.35   W       6, 9
Ultra Low Voltage     PC1E     Idle power in the Package C1E state     —     —     5.8    W       6, 8
                      PC6      Idle power in the Package C6 state      —     —     2.3    W       6, 9
                      PC7      Idle power in the Package C7 state      —     —     2.2    W       6, 9

5.4 Thermal Management Features

This section covers thermal management features for the processor.

5.4.1 Processor Package Thermal Features

This section covers thermal management features for the entire processor complex (including the processor core, the graphics core, and the integrated memory controller hub), referred to as the processor package or simply the package.
Occasionally the package will operate in conditions that exceed its maximum allowable operating temperature. This can be due to internal overheating or due to overheating in the entire system. To protect itself and the system from thermal failure, the package is capable of reducing its power consumption and thereby its temperature to attempt to remain within normal operating limits using the Adaptive Thermal Monitor.
The Adaptive Thermal Monitor can be activated when any package temperature, monitored by a digital thermal sensor (DTS), meets or exceeds its maximum junction temperature specification (Tj,max) and asserts PROCHOT#. Note that the thermal control circuit (TCC) can be activated prior to Tj,max by use of the TCC activation offset. The assertion of PROCHOT# activates the thermal control circuit (TCC) and causes both the processor core and graphics core to reduce frequency and voltage adaptively. The TCC will remain active as long as any package temperature exceeds its specified limit. Therefore, the Adaptive Thermal Monitor will continue to reduce the package frequency and voltage until the TCC is de-activated.
Caution: The Adaptive Thermal Monitor must be enabled for the processor to remain within
specification.
5.4.1.1 Adaptive Thermal Monitor
The purpose of the Adaptive Thermal Monitor is to reduce processor core power consumption and temperature until it operates at or below its maximum operating temperature (accounting for the TCC activation offset). Processor core power reduction is achieved by:
• Adjusting the operating frequency (using the core ratio multiplier) and input voltage (using the SVID bus).
• Modulating (starting and stopping) the internal processor core clocks (duty cycle).
The temperature at which the Adaptive Thermal Monitor activates the Thermal Control Circuit is factory calibrated and is not user configurable. The default value is software visible in the TEMPERATURE_TARGET (1A2h) MSR, Bits 23:16. The Adaptive Thermal Monitor does not require any additional hardware, software drivers, or interrupt handling routines. Note that the Adaptive Thermal Monitor is not intended as a mechanism to maintain processor TDP. The system design should provide a thermal solution that can maintain TDP within its intended usage range.
5.4.1.1.1 Frequency/Voltage Control
Upon TCC activation, the processor core attempts to dynamically reduce processor core power by lowering the frequency and voltage operating point. The operating points are automatically calculated by the processor core itself and do not require the BIOS to program them as with previous generations of Intel processors. The processor core will scale the operating points such that:
• The voltage will be optimized according to the temperature, the core bus ratio, and number of cores in deep C-states.
• The core power and temperature are reduced while minimizing performance degradation.
A small amount of hysteresis has been included to prevent an excessive amount of operating point transitions when the processor temperature is near its maximum operating temperature. Once the temperature has dropped below the maximum operating temperature and the hysteresis timer has expired, the operating frequency and voltage transition back to the normal system operating point. This is illustrated in
Figure 5-2.
Figure 5-2. Frequency and Voltage Ordering
Once a target frequency/bus ratio is resolved, the processor core will transition to the new target automatically.
• On an upward operating point transition, the voltage transition precedes the frequency transition.
• On a downward transition, the frequency transition precedes the voltage transition.
When transitioning to a target core operating voltage, a new VID code to the voltage regulator is issued. The voltage regulator must support dynamic VID steps to support this method.
During the voltage change:
• It will be necessary to transition through multiple VID steps to reach the target operating voltage.
• Each step is 5 mV for Intel MVP-7.0 compliant VRs.
• The processor continues to execute instructions. However, the processor will halt instruction execution for frequency transitions.
If a processor load-based Enhanced Intel SpeedStep Technology/P-state transition (through MSR write) is initiated while the Adaptive Thermal Monitor is active, there are two possible outcomes:
• If the P-state target frequency is higher than the processor core optimized target frequency, the p-state transition will be deferred until the thermal event has been completed.
• If the P-state target frequency is lower than the processor core optimized target frequency, the processor will transition to the P-state operating point.
5.4.1.1.2 Clock Modulation
If the frequency/voltage changes are unable to end an Adaptive Thermal Monitor event, the Adaptive Thermal Monitor will use clock modulation. Clock modulation is done by alternately turning the clocks off and on at a duty cycle (ratio between clock “on” time and total time) specific to the processor. The duty cycle is factory configured to 25% on and 75% off and cannot be modified. The period of the duty cycle is configured to 32 microseconds when the TCC is active. Cycle times are independent of processor frequency. A small amount of hysteresis has been included to prevent excessive clock modulation when the processor temperature is near its maximum operating temperature. Once the temperature has dropped below the maximum operating temperature, and the hysteresis timer has expired, the TCC goes inactive and clock modulation ceases. Clock modulation is automatically engaged as part of the TCC activation when the frequency/voltage targets are at their minimum settings. Processor performance will be decreased by the same amount as the duty cycle when clock modulation is active. Snooping and interrupt processing are performed in the normal manner while the TCC is active.
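As a worked example of the factory-configured settings described above, each 32-microsecond period consists of approximately 8 microseconds with the core clocks running followed by approximately 24 microseconds with the clocks stopped.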
5.4.1.2 Digital Thermal Sensor
Each processor execution core has an on-die Digital Thermal Sensor (DTS) that detects the core’s instantaneous temperature. The DTS is the preferred method of monitoring processor die temperature because:
• It is located near the hottest portions of the die.
• It can accurately track the die temperature and ensure that the Adaptive Thermal Monitor is not excessively activated.
Temperature values from the DTS can be retrieved through:
• A software interface using processor Model Specific Register (MSR).
• A processor hardware interface as described in Section 5.4.4.
Note: When temperature is retrieved by processor MSR, it is the instantaneous temperature
of the given core. When temperature is retrieved using PECI, it is the average of the highest DTS temperature in the package over a 256 ms time window. Intel recommends using the PECI reported temperature for platform thermal control that benefits from averaging, such as fan speed control. The average DTS temperature may not be a good indicator of package Adaptive Thermal Monitor activation or rapid increases in temperature that triggers the Out of Specification status bit within the PACKAGE_THERM_STATUS MSR 01B1h and IA32_THERM_STATUS MSR 19Ch.
Code execution is halted in C1–C7. Therefore, temperature cannot be read using the processor MSR without bringing a core back into C0. However, temperature can still be monitored through PECI in lower C-states except for C7.
Unlike traditional thermal devices, the DTS outputs a temperature relative to the maximum supported operating temperature of the processor (Tj,max), regardless of TCC activation offset. It is the responsibility of software to convert the relative temperature to an absolute temperature. The absolute reference temperature is readable in the TEMPERATURE_TARGET MSR 1A2h. The temperature returned by the DTS is an implied negative integer indicating the relative offset from Tj,max. The DTS does not report temperatures greater than Tj,max.
The DTS-relative temperature readout directly impacts the Adaptive Thermal Monitor trigger point. When a package DTS indicates that it has reached the TCC activation point (a reading of 0h, except when the TCC activation offset is changed), the TCC will activate and indicate an Adaptive Thermal Monitor event. A TCC activation will lower the frequency, the voltage, or both, for both the IA cores and the graphics core.
Changes to the temperature can be detected using two programmable thresholds located in the processor thermal MSRs. These thresholds have the capability of generating interrupts using the core's local APIC. Refer to the Intel® 64 and IA-32 Architectures Software Developer's Manuals for specific register and programming details.
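As an illustration of the relative-to-absolute conversion described above, the Linux user-space sketch below reads both MSRs through the msr driver (root privileges and the msr kernel module are required). The bit positions assumed for the digital readout in IA32_THERM_STATUS (bits 22:16) and for the reference temperature in TEMPERATURE_TARGET (bits 23:16) come from the architectural MSR definitions, not from this datasheet.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Read one MSR on CPU 0 through /dev/cpu/0/msr (illustrative only). */
    static int read_msr(uint32_t reg, uint64_t *val)
    {
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t n = pread(fd, val, sizeof(*val), reg);
        close(fd);
        return (n == (ssize_t)sizeof(*val)) ? 0 : -1;
    }

    int main(void)
    {
        uint64_t therm_status, temp_target;

        if (read_msr(0x19C, &therm_status) || read_msr(0x1A2, &temp_target))
            return 1;

        unsigned tj_ref   = (unsigned)(temp_target  >> 16) & 0xFFu;  /* TEMPERATURE_TARGET[23:16] */
        unsigned dts_down = (unsigned)(therm_status >> 16) & 0x7Fu;  /* assumed digital readout bits 22:16 */

        /* The DTS reports an offset below the reference temperature, so the
         * absolute value is the reference minus the readout. */
        printf("approximate core temperature: %u C\n", tj_ref - dts_down);
        return 0;
    }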
5.4.1.2.1 Digital Thermal Sensor Accuracy (Taccuracy)
The error associated with DTS measurement will not exceed ±5 °C at Tj,max. The DTS measurement within the entire operating range will meet a ±5 °C accuracy.
5.4.1.3 PROCHOT# Signal
PROCHOT# (processor hot) is asserted when the processor core temperature has reached its maximum operating temperature (Tj,max). See Figure 5-2 for a timing diagram of the PROCHOT# signal assertion relative to the Adaptive Thermal Response. Only a single PROCHOT# pin exists at a package level. When any core arrives at the TCC activation point, the PROCHOT# signal will be asserted. PROCHOT# assertion policies are independent of Adaptive Thermal Monitor enabling.
Note: Bus snooping and interrupt latching are active while the TCC is active.
5.4.1.3.1 Bi-Directional PROCHOT#
By default, the PROCHOT# signal is defined as an output only. However, the signal may be configured as bi-directional. When configured as a bi-directional signal, PROCHOT# can be used for thermally protecting other platform components should they overheat as well. When PROCHOT# is driven by an external device:
• the package will immediately transition to the minimum operation points (voltage and frequency) supported by the processor and graphics cores. This is contrary to the internally-generated Adaptive Thermal Monitor response.
• Clock modulation is not activated.
The TCC will remain active until the system de-asserts PROCHOT#. The processor can be configured to generate an interrupt upon assertion and de-assertion of the PROCHOT# signal.
5.4.1.3.2 Voltage Regulator Protection
PROCHOT# may be used for thermal protection of voltage regulators (VR). System designers can create a circuit to monitor the VR temperature and activate the TCC when the temperature limit of the VR is reached. By asserting PROCHOT# (pulled low) and activating the TCC, the VR will cool down as a result of reduced processor power consumption. Bi-directional PROCHOT# can allow VR thermal designs to target thermal design current (ICC(TDC)) instead of maximum current. Systems should still provide proper cooling for the VR and rely on bi-directional PROCHOT# only as a backup in case of system cooling failure. Overall, the system thermal design should allow the power delivery circuitry to operate within its temperature specification even while the processor is operating at its TDP.
5.4.1.3.3 Thermal Solution Design and PROCHOT# Behavior
With a properly designed and characterized thermal solution, it is anticipated that PROCHOT# will only be asserted for very short periods of time when running the most power intensive applications. The processor performance impact due to these brief periods of TCC activation is expected to be so minor that it would be immeasurable.
However, an under-designed thermal solution that is not able to prevent excessive assertion of PROCHOT# in the anticipated ambient environment may:
• Cause a noticeable performance loss.
• Result in prolonged operation at or above the specified maximum junction temperature and affect the long-term reliability of the processor.
• Be incapable of cooling the processor even when the TCC is active continuously (in extreme situations).
5.4.1.3.4 Low-Power States and PROCHOT# Behavior
If the processor enters a low-power package idle state such as C3 or C6/C7 with PROCHOT# asserted, PROCHOT# will remain asserted until:
• The processor exits the low-power state
• The processor junction temperature drops below the thermal trip point.
For the package C7 state, PROCHOT# may de-assert for the duration of C7 state residency even if the processor enters the idle state operating at the TCC activation temperature. Note that the PECI interface is fully operational during all C-states and it is expected that the platform continues to manage processor (“package”) core thermals even during idle states by regularly polling for thermal data over PECI.
5.4.1.3.5 THERMTRIP# Signal
Regardless of enabling the automatic or on-demand modes, in the event of a catastrophic cooling failure, the package will automatically shut down when the silicon has reached an elevated temperature that risks physical damage to the product. At this point the THERMTRIP# signal will go active.
5.4.1.3.6 Critical Temperature Detection
Critical Temperature detection is performed by monitoring the package temperature. This feature is intended for graceful shutdown before THERMTRIP# is activated; however, processor execution is not ensured between the critical temperature and THERMTRIP#. If the package's Adaptive Thermal Monitor is triggered and the temperature remains high, a critical temperature status and sticky bit are latched in the PACKAGE_THERM_STATUS MSR 1B1h, and a thermal interrupt is also generated if enabled. For more details on the interrupt mechanism, refer to the Intel® 64 and IA-32 Architectures Software Developer's Manuals.
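A minimal ring-0 sketch of polling the latched indication described above; the bit positions used for the critical temperature status and its sticky log bit are assumptions based on the architectural definition of the PACKAGE_THERM_STATUS register, not values quoted in this datasheet.

    #include <stdint.h>

    static inline uint64_t rdmsr(uint32_t msr)
    {
        uint32_t lo, hi;
        __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
    }

    #define PACKAGE_THERM_STATUS 0x1B1u        /* MSR address given in the text above */
    #define CRIT_TEMP_STATUS     (1ull << 4)   /* assumed: live critical-temperature status */
    #define CRIT_TEMP_LOG        (1ull << 5)   /* assumed: sticky log bit, cleared by software */

    /* Ring-0 only: nonzero if the critical temperature condition is active now
     * or has occurred since the log bit was last cleared. */
    static int critical_temperature_seen(void)
    {
        return (rdmsr(PACKAGE_THERM_STATUS) & (CRIT_TEMP_STATUS | CRIT_TEMP_LOG)) != 0;
    }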

5.4.2 Processor Core Specific Thermal Features

5.4.2.1 On-Demand Mode
The processor provides an auxiliary mechanism that allows system software to force the processor to reduce its power consumption using clock modulation. This mechanism is referred to as “On-Demand” mode and is distinct from Adaptive Thermal Monitor and bi-directional PROCHOT#. The processor platforms must not rely on software usage of this mechanism to limit the processor temperature. On-Demand Mode can be done using processor MSR or chipset I/O emulation.
On-Demand Mode may be used in conjunction with the Adaptive Thermal Monitor. However, if the system software tries to enable On-Demand mode at the same time the TCC is engaged, the factory configured duty cycle of the TCC will override the duty cycle selected by the On-Demand mode. If the I/O based and MSR-based On-Demand modes are in conflict, the duty cycle selected by the I/O emulation-based On-Demand mode will take precedence over the MSR-based On-Demand Mode.
5.4.2.1.1 MSR Based On-Demand Mode
If Bit 4 of the IA32_CLOCK_MODULATION MSR is set to a 1, the processor will immediately reduce its power consumption using modulation of the internal core clock, independent of the processor temperature. The duty cycle of the clock modulation is programmable using Bits 3:1 of the same IA32_CLOCK_MODULATION MSR. In this mode, the duty cycle can be programmed in either 12.5% or 6.25% increments (discoverable using CPUID). Thermal throttling using this method will modulate each processor core's clock independently.
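A short ring-0 sketch of the MSR-based On-Demand mode described above; the MSR address (19Ah) follows the architectural definition of IA32_CLOCK_MODULATION rather than this datasheet, and the 12.5% interpretation of the 3-bit duty-cycle field is an assumption for processors without extended modulation.

    #include <stdint.h>

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        __asm__ volatile("wrmsr" : : "c"(msr),
                         "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
    }

    #define IA32_CLOCK_MODULATION 0x19Au       /* assumed architectural MSR address */
    #define ON_DEMAND_ENABLE      (1ull << 4)  /* bit 4: force clock modulation */

    /* Ring-0 only: 'step' is written to bits 3:1; with 12.5% increments,
     * step = 4 requests roughly a 50% duty cycle on this logical processor. */
    static void set_on_demand_duty_cycle(uint8_t step)
    {
        wrmsr(IA32_CLOCK_MODULATION, ON_DEMAND_ENABLE | ((uint64_t)(step & 0x7u) << 1));
    }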
5.4.2.1.2 I/O Emulation-Based On-Demand Mode
I/O emulation-based clock modulation provides legacy support for operating system software that initiates clock modulation through I/O writes to ACPI defined processor clock control registers on the chipset (PROC_CNT). Thermal throttling using this method will modulate all processor cores simultaneously.

5.4.3 Memory Controller Specific Thermal Features

The memory controller provides the ability to initiate memory throttling based upon memory temperature. The memory temperature can be provided to the memory controller using PECI or can be estimated by the memory controller based upon memory activity. The temperature trigger points are programmable through memory-mapped I/O registers.
5.4.3.1 Programmable Trip Points
The memory controller provides programmable critical, hot, and warm trip points. Crossing a critical trip point forces a system shutdown. Crossing a hot or warm trip point initiates throttling. The amount of memory throttling at each trip point is programmable.

5.4.4 Platform Environment Control Interface (PECI)

The Platform Environment Control Interface (PECI) is a one-wire interface that provides a communication channel between Intel processor and chipset components and external monitoring devices. The processor implements a PECI interface to allow communication of processor thermal information to other devices on the platform. The processor provides a digital thermal sensor (DTS) for fan speed control. The DTS is calibrated at the factory to provide a digital representation of relative processor temperature. Averaged DTS values are read using the PECI interface.
The PECI physical layer is a self-clocked one-wire bus that begins each bit with a driven, rising edge from an idle level near zero volts. The duration of the signal driven high depends on whether the bit value is a Logic 0 or Logic 1. PECI also includes variable data transfer rate established with every message. The single wire interface provides low board routing overhead for the multiple load connections in the congested routing area near the processor and chipset components. Bus speed, error checking, and low protocol overhead provides adequate link bandwidth and reliability to transfer critical device operating conditions and configuration information.
5.4.4.1 Fan Speed Control with Digital Thermal Sensor
Digital Thermal Sensor based fan speed control (TFAN) is a recommended feature to achieve optimal thermal performance. At the TFAN temperature, Intel recommends full cooling capability well before the DTS reading reaches Tj,max. An example of this would be TFAN = Tj,max – 10 ºC.
§ §

6 Signal Description

This chapter describes the processor signals. They are arranged in functional groups according to their associated interface or category. The following notations are used to describe the signal type.
Notations   Signal Type
I           Input Pin
O           Output Pin
I/O         Bi-directional Input/Output Pin
The signal description also includes the type of buffer used for the particular signal (see
Table 6-1).
Table 6-1. Signal Description Buffer Types
Signal            Description
PCI Express*      PCI Express* interface signals. These signals are compatible with PCI Express* 2.0 Signalling Environment AC Specifications and are AC coupled. The buffers are not 3.3-V tolerant. Refer to the PCIe specification.
eDP               Embedded Display Port interface signals. These signals are compatible with VESA Revision 1.0 DP specifications and the interface is AC coupled. The buffers are not 3.3-V tolerant.
FDI               Intel Flexible Display interface signals. These signals are based on PCI Express* 2.0 Signaling Environment AC Specifications (2.7 GT/s), but are DC coupled. The buffers are not 3.3-V tolerant.
DMI               Direct Media Interface signals. These signals are based on PCI Express* 2.0 Signaling Environment AC Specifications (5 GT/s), but are DC coupled. The buffers are not 3.3-V tolerant.
CMOS              CMOS buffers. 1.1-V tolerant.
DDR3              DDR3 buffers: 1.5-V tolerant.
A                 Analog reference or output. May be used as a threshold voltage or for buffer compensation.
Ref               Voltage reference signal.
Asynchronous (1)  Signal has no timing relationship with any reference clock.

Notes:
1. Qualifier for a buffer type.

6.1 System Memory Interface

Table 6-2. Memory Channel A

Signal Name (Direction, Buffer Type): Description
SA_BS[2:0] (O, DDR3): Bank Select: These signals define which banks are selected within each SDRAM rank.
SA_WE# (O, DDR3): Write Enable Control Signal: This signal is used with SA_RAS# and SA_CAS# (along with SA_CS#) to define the SDRAM commands.
SA_RAS# (O, DDR3): RAS Control Signal: This signal is used with SA_CAS# and SA_WE# (along with SA_CS#) to define the SDRAM commands.
SA_CAS# (O, DDR3): CAS Control Signal: This signal is used with SA_RAS# and SA_WE# (along with SA_CS#) to define the SDRAM commands.
SA_DQS[7:0], SA_DQS#[7:0] (I/O, DDR3): Data Strobes: SA_DQS[7:0] and its complement signal group make up a differential strobe pair. The data is captured at the crossing point of SA_DQS[7:0] and its complement SA_DQS#[7:0] during read and write transactions.
SA_DQ[63:0] (I/O, DDR3): Data Bus: Channel A data signal interface to the SDRAM data bus.
SA_MA[15:0] (O, DDR3): Memory Address: These signals are used to provide the multiplexed row and column address to the SDRAM.
SA_CK[1:0] (O, DDR3): SDRAM Differential Clock: Channel A SDRAM differential clock signal pair. The crossing of the positive edge of SA_CK and the negative edge of its complement SA_CK# is used to sample the command and control signals on the SDRAM.
SA_CK#[1:0] (O, DDR3): SDRAM Inverted Differential Clock: Channel A SDRAM differential clock signal-pair complement.
SA_CKE[1:0] (O, DDR3): Clock Enable (1 per rank): These signals are used to:
• Initialize the SDRAMs during power-up
• Power-down SDRAM ranks
• Place all SDRAM ranks into and out of self-refresh during STR
SA_CS#[1:0] (O, DDR3): Chip Select (1 per rank): These signals are used to select particular SDRAM components during the active state. There is one Chip Select for each SDRAM rank.
SA_ODT[1:0] (O, DDR3): On Die Termination: Active Termination Control.
Table 6-3. Memory Channel B

Signal Name (Direction, Buffer Type): Description
SB_BS[2:0] (O, DDR3): Bank Select: These signals define which banks are selected within each SDRAM rank.
SB_WE# (O, DDR3): Write Enable Control Signal: This signal is used with SB_RAS# and SB_CAS# (along with SB_CS#) to define the SDRAM commands.
SB_RAS# (O, DDR3): RAS Control Signal: This signal is used with SB_CAS# and SB_WE# (along with SB_CS#) to define the SDRAM commands.
SB_CAS# (O, DDR3): CAS Control Signal: This signal is used with SB_RAS# and SB_WE# (along with SB_CS#) to define the SDRAM commands.
SB_DQS[7:0], SB_DQS#[7:0] (I/O, DDR3): Data Strobes: SB_DQS[7:0] and its complement signal group make up a differential strobe pair. The data is captured at the crossing point of SB_DQS[7:0] and its complement SB_DQS#[7:0] during read and write transactions.
SB_DQ[63:0] (I/O, DDR3): Data Bus: Channel B data signal interface to the SDRAM data bus.
SB_MA[15:0] (O, DDR3): Memory Address: These signals are used to provide the multiplexed row and column address to the SDRAM.
SB_CK[1:0] (O, DDR3): SDRAM Differential Clock: Channel B SDRAM differential clock signal pair. The crossing of the positive edge of SB_CK and the negative edge of its complement SB_CK# is used to sample the command and control signals on the SDRAM.
SB_CK#[1:0] (O, DDR3): SDRAM Inverted Differential Clock: Channel B SDRAM differential clock signal-pair complement.
SB_CKE[1:0] (O, DDR3): Clock Enable (1 per rank): These signals are used to:
• Initialize the SDRAMs during power-up
• Power-down SDRAM ranks
• Place all SDRAM ranks into and out of self-refresh during STR
SB_CS#[1:0] (O, DDR3): Chip Select (1 per rank): These signals are used to select particular SDRAM components during the active state. There is one Chip Select for each SDRAM rank.
SB_ODT[1:0] (O, DDR3): On Die Termination: Active Termination Control.

6.2 Memory Reference and Compensation

Table 6-4. Memory Reference and Compensation

Signal Name (Direction, Buffer Type): Description
SM_RCOMP[2:0] (I, A): System Memory Impedance Compensation.
SM_VREF (I, A): DDR3 Reference Voltage: This provides reference voltage to the DDR3 interface and is defined as VDDQ/2.

6.3 Reset and Miscellaneous Signals

Table 6-5. Reset and Miscellaneous Signals

Signal Name (Direction, Buffer Type): Description
CFG[17:0] (I, CMOS): Configuration Signals: The CFG signals have a default value of '1' if not terminated on the board.
• CFG[1:0]: Reserved configuration lanes. A test point may be placed on the board for these lanes.
• CFG[2]: PCI Express* Static x16 Lane Numbering Reversal. 1 = Normal operation; 0 = Lane numbers reversed.
• CFG[3]: Reserved.
• CFG[4]: eDP enable. 1 = Disabled; 0 = Enabled.
• CFG[6:5]: PCI Express* Bifurcation: 00 = 1 x8, 2 x4 PCI Express; 01 = reserved; 10 = 2 x8 PCI Express; 11 = 1 x16 PCI Express.
• CFG[17:7]: Reserved configuration lanes. A test point may be placed on the board for these lanes.
PM_SYNC (I, CMOS): Power Management Sync: A sideband signal to communicate power management status from the platform to the processor.
RESET# (I, CMOS): Platform Reset pin driven by the PCH.
RSVD (No Connect); RSVD_TP (Test Point); RSVD_NCTF (Non-Critical to Function): RESERVED: All signals that are RSVD and RSVD_NCTF must be left unconnected on the board. However, Intel recommends that all RSVD_TP signals be connected to test points.
SM_DRAMRST# (O, CMOS): DDR3 DRAM Reset: Reset signal from processor to DRAM devices. One common to all channels.

6.4 PCI Express* Based Interface Signals

Table 6-6. PCI Express* Graphics Interface Signals

Signal Name (Direction, Buffer Type): Description
PEG_ICOMPI (I, A): PCI Express Input Current Compensation.
PEG_ICOMPO (I, A): PCI Express Current Compensation.
PEG_RCOMPO (I, A): PCI Express Resistance Compensation.
PEG_RX[15:0], PEG_RX#[15:0] (I, PCI Express): PCI Express Receive Differential Pair.
PEG_TX[15:0], PEG_TX#[15:0] (O, PCI Express): PCI Express Transmit Differential Pair.

6.5 Embedded DisplayPort (eDP)

Table 6-7. Embedded Display Port Signals

Signal Name (Direction, Buffer Type): Description
eDP_TX[3:0], eDP_TX#[3:0] (O, Diff): Embedded DisplayPort Transmit Differential Pair.
eDP_AUX, eDP_AUX# (I/O, Diff): Embedded DisplayPort Auxiliary Differential Pair.
eDP_HPD# (I, Asynchronous CMOS): Embedded DisplayPort Hot Plug Detect.
eDP_COMPIO (I, A): Embedded DisplayPort Current Compensation.
eDP_ICOMPO (I, A): Embedded DisplayPort Current Compensation.

6.6 Intel® Flexible Display Interface Signals

Table 6-8. Intel® Flexible Display Interface

Signal Name (Direction, Buffer Type): Description
FDI0_TX[3:0], FDI0_TX#[3:0] (O, FDI): Intel® Flexible Display Interface Transmit Differential Pair – Pipe A.
FDI0_FSYNC[0] (I, CMOS): Intel® Flexible Display Interface Frame Sync – Pipe A.
FDI0_LSYNC[0] (I, CMOS): Intel® Flexible Display Interface Line Sync – Pipe A.
FDI1_TX[3:0], FDI1_TX#[3:0] (O, FDI): Intel® Flexible Display Interface Transmit Differential Pair – Pipe B.
FDI1_FSYNC[1] (I, CMOS): Intel® Flexible Display Interface Frame Sync – Pipe B.
FDI1_LSYNC[1] (I, CMOS): Intel® Flexible Display Interface Line Sync – Pipe B.
FDI_INT (I, Asynchronous CMOS): Intel® Flexible Display Interface Hot Plug Interrupt.

6.7 DMI

Table 6-9. DMI - Processor to PCH Serial Interface
Signal Name (Direction, Buffer Type): Description
DMI_RX[3:0], DMI_RX#[3:0] (I, DMI): DMI Input from PCH: Direct Media Interface receive differential pair.
DMI_TX[3:0], DMI_TX#[3:0] (O, DMI): DMI Output to PCH: Direct Media Interface transmit differential pair.

6.8 PLL Signals

Table 6-10. PLL Signals
Signal Name (Direction, Buffer Type): Description
BCLK, BCLK# (I, Diff Clk): Differential bus clock input to the processor.
DPLL_REF_CLK, DPLL_REF_CLK# (I, Diff Clk): Embedded Display Port PLL Differential Clock In: 120 MHz.

6.9 TAP Signals

Table 6-11. TAP Signals
Signal Name (Direction, Buffer Type): Description
BPM#[7:0] (I/O, CMOS): Breakpoint and Performance Monitor Signals: These signals are outputs from the processor that indicate the status of breakpoints and programmable counters used for monitoring processor performance.
BCLK_ITP, BCLK_ITP# (I): These pins are connected in parallel to the top side debug probe to enable debug capabilities.
DBR# (O): DBR# is used only in systems where no debug port is implemented on the system board. DBR# is used by a debug port interposer so that an in-target probe can drive system reset.
PRDY# (O, Asynchronous CMOS): PRDY# is a processor output used by debug tools to determine processor debug readiness.
PREQ# (I, Asynchronous CMOS): PREQ# is used by debug tools to request debug operation of the processor.
TCK (I, CMOS): TCK (Test Clock): This signal provides the clock input for the processor Test Bus (also known as the Test Access Port). TCK must be driven low or allowed to float during power-on Reset.
TDI (I, CMOS): TDI (Test Data In): This signal transfers serial test data into the processor. TDI provides the serial input needed for JTAG specification support.
TDO (O, Open Drain): TDO (Test Data Out): This signal transfers serial test data out of the processor. TDO provides the serial output needed for JTAG specification support.
TMS (I, CMOS): TMS (Test Mode Select): A JTAG specification support signal used by debug tools.
TRST# (I, CMOS): TRST# (Test Reset): This signal resets the Test Access Port (TAP) logic. TRST# must be driven low during power-on Reset.

6.10 Error and Thermal Protection

Table 6-12. Error and Thermal Protection
Signal Name (Direction, Buffer Type): Description
CATERR# (O, CMOS): Catastrophic Error: This signal indicates that the system has experienced a catastrophic error and cannot continue to operate. The processor will set this for non-recoverable machine check errors or other unrecoverable internal errors. On the processor, CATERR# is used for signaling the following types of errors:
• Legacy MCERRs – CATERR# is asserted for 16 BCLKs.
• Legacy IERRs – CATERR# remains asserted until warm or cold reset.
PECI (I/O): PECI (Platform Environment Control Interface): A serial sideband interface to the processor, it is used primarily for thermal, power, and error management.
PROCHOT# (Asynchronous CMOS Input/Open-Drain Output): Processor Hot: PROCHOT# goes active when the processor temperature monitoring sensor(s) detects that the processor has reached its maximum safe operating temperature. This indicates that the processor Thermal Control Circuit (TCC) has been activated, if enabled. This signal can also be driven to the processor to activate the TCC.
THERMTRIP# (O, Asynchronous CMOS): Thermal Trip: The processor protects itself from catastrophic overheating by use of an internal thermal sensor. This sensor is set well above the normal operating temperature to ensure that there are no false trips. The processor will stop all execution when the junction temperature exceeds approximately 130 °C. This is signaled to the system by the THERMTRIP# pin.

6.11 Power Sequencing

Table 6-13. Power Sequencing
Signal Name (Direction, Buffer Type): Description
SM_DRAMPWROK (I, Asynchronous CMOS): Processor Input: Connects to PCH DRAMPWROK.
UNCOREPWRGOOD (I, Asynchronous CMOS): The processor requires this input signal to be a clean indication that the VCCSA, VCCIO, VAXG, and VDDQ power supplies are stable and within specifications. This requirement applies regardless of the S-state of the processor. 'Clean' implies that the signal will remain low (capable of sinking leakage current), without glitches, from the time that the power supplies are turned on until they come within specification. The signal must then transition monotonically to a high state. This is connected to the PCH PROCPWRGD signal.
SKTOCC# (rPGA only), PROC_DETECT# (BGA): SKTOCC# (Socket Occupied)/PROC_DETECT# (Processor Detect): Pulled down directly (0 Ohms) on the processor package to ground. There is no connection to the processor silicon for this signal. System board designers may use this signal to determine if the processor is present.

6.12 Processor Power Signals

Table 6-14. Processor Power Signals

Signal Name (Direction, Buffer Type): Description
VCC (Ref): Processor core power rail.
VCCIO (Ref): Processor power for I/O.
VDDQ (Ref): Processor I/O supply voltage for DDR3.
VAXG (Ref): Graphics core power supply.
VCCPLL (Ref): VCCPLL provides isolated power for internal processor PLLs.
VCCSA (Ref): System Agent power supply.
VCCPQE (BGA Only) (Ref): Filtered, low noise derivative of VCCIO.
VCCDQ (BGA Only) (Ref): Filtered, low noise derivative of VDDQ.
VIDSOUT (I/O); VIDSCLK (O); VIDALERT# (I, CMOS): VIDSOUT, VIDSCLK, and VIDALERT# comprise a three-signal serial synchronous interface used to transfer power management information between the processor and the voltage regulator controllers. This serial VID interface replaces the parallel VID interface on previous processors.
VCCSA_VID[1] (O, CMOS): Voltage selection for VCCSA: This pin must have a pull-down resistor to ground.

6.13 Sense Pins

Table 6-15. Sense Pins
Signal Name (Direction, Buffer Type): Description
VCC_SENSE, VSS_SENSE (O, Analog): VCC_SENSE and VSS_SENSE provide an isolated, low impedance connection to the processor core voltage and ground. They can be used to sense or measure voltage near the silicon.
VAXG_SENSE, VSSAXG_SENSE (O, Analog): VAXG_SENSE and VSSAXG_SENSE provide an isolated, low impedance connection to the VAXG voltage and ground. They can be used to sense or measure voltage near the silicon.
VCCIO_SENSE, VSS_SENSE_VCCIO (O, Analog): VCCIO_SENSE and VSS_SENSE_VCCIO provide an isolated, low impedance connection to the processor VCCIO voltage and ground. They can be used to sense or measure voltage near the silicon.
VDDQ_SENSE, VSS_SENSE_VDDQ (O, Analog): VDDQ_SENSE and VSS_SENSE_VDDQ provide an isolated, low impedance connection to the VDDQ voltage and ground. They can be used to sense or measure voltage near the silicon.
VCCSA_SENSE (O, Analog): VCCSA_SENSE provides an isolated, low impedance connection to the processor system agent voltage. It can be used to sense or measure voltage near the silicon.
VCC_DIE_SENSE (O, Analog): Die Validation Sense.
VCC_VAL_SENSE, VSS_VAL_SENSE (O, Analog): VCC Validation Sense.
VAXG_VAL_SENSE, VSSAXG_VAL_SENSE (O, Analog): VAXG Validation Sense.

6.14 Ground and NCTF

Table 6-16. Ground and NCTF

Signal Name (Direction, Buffer Type): Description
VSS (GND): Processor ground node.
VSS_NCTF (BGA Only): Non-Critical to Function: These pins are for package mechanical reliability.
DC_TEST_xx# (BGA Only): Daisy Chain: These pins are for solder joint reliability and are non-critical to function.

6.15 Future Compatibility

Table 6-17. Future Compatibility

Signal Name: Description
PROC_SELECT#: This pin is for compatibility with future platforms. A pull-up resistor to VCCPLL is required if connected to the DF_TVS strap on the PCH.
SA_DIMM_VREFDQ, SB_DIMM_VREFDQ: Memory Channel A/B DIMM DQ Voltage Reference: These signals are not used by the processors and are for future compatibility only. No connection is required.
VCCIO_SEL: Voltage selection for VCCIO: This pin must be pulled high on the motherboard when using a dual rail voltage regulator; it is reserved for future compatibility.
VCCSA_VID[0]: Voltage selection for VCCSA: This pin must have a pull-down resistor to ground.

6.16 Processor Internal Pull Up/Pull Down

Table 6-18. Processor Internal Pull Up/Pull Down
Signal Name Pull Up/Pull Down Rail Value
BPM[7:0] Pull Up VCCIO 65–165 Ω
PRDY# Pull Up VCCIO 65–165 Ω
PREQ# Pull Up VCCIO 65–165 Ω
TCK Pull Down VSS 5–15 kΩ
TDI Pull Up VCCIO 5–15 kΩ
TMS Pull Up VCCIO 5–15 kΩ
TRST# Pull Up VCCIO 5–15 kΩ
CFG[17:0] Pull Up VCCIO 5–15 kΩ
§ §

7 Electrical Specifications

7.1 Power and Ground Pins

The processor has VCC, VCCIO, VDDQ, VCCPLL, VCCSA, VAXG, and VSS (ground) inputs for on-chip power distribution. All power pins must be connected to their respective processor power planes, while all VSS pins must be connected to the system ground plane. Use of multiple power and ground planes is recommended to reduce I*R drop. The VCC pins and VAXG pins must be supplied with the voltage determined by the processor Serial Voltage IDentification (SVID) interface. Table 7-1 specifies the voltage level for the various VIDs.

7.2 Decoupling Guidelines

Due to its large number of transistors and high internal clock speeds, the processor is capable of generating large current swings between low- and full-power states. To keep voltages within specification, output decoupling must be properly designed.
Caution: Design the board to ensure that the voltage provided to the processor remains within the specifications listed in Table 7-3. Failure to do so can result in timing violations or reduced lifetime of the processor.

7.2.1 Voltage Rail Decoupling

The voltage regulator solution must:
• provide sufficient decoupling to compensate for large current swings generated during different power mode transitions.
• provide low parasitic resistance from the regulator to the socket.
• meet voltage and current specifications as defined in Table 7-3.

7.2.2 PLL Power Supply

An on-die PLL filter solution is implemented on the processor.

7.3 Voltage Identification (VID)

The VID specifications for the processor VCC and VAXG are defined by the VR12/IMVP7 SVID Protocol. The processor uses three signals for the serial voltage identification interface to support automatic selection of voltages. Table 7-1 specifies the voltage level corresponding to the eight-bit VID value transmitted over serial VID. A '1' in this table refers to a high voltage level and a '0' refers to a low voltage level. If the voltage regulation circuit cannot supply the voltage that is requested, the voltage regulator must disable itself. See the VR12/IMVP7 SVID Protocol for further details. The VID codes will change due to temperature and/or current load changes in order to minimize the power of the part. A voltage range is provided in Table 7-1. The specifications are set so that one voltage regulator can operate with all supported frequencies.

Individual processor VID values may be set during manufacturing so that two devices at the same core frequency may have different default VID settings. This is shown in the VID range values in Table 7-5. The processor provides the ability to operate while transitioning to an adjacent VID and its associated voltage. This will represent a DC shift in the loadline.

Note: Transitions above the maximum specified VID are not permitted. Table 7-5 includes VID step sizes and DC shift ranges. Minimum and maximum voltages must be maintained.

The VR used must be capable of regulating its output to the value defined by the new VID values issued. DC specifications for dynamic VID transitions are included in Table 7-5 and Table 7-10. See the VR12/IMVP7 SVID Protocol for further details.
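The VCC_MAX column of Table 7-1 follows a simple linear encoding: VID 00h requests 0 V (VR off), VID 01h corresponds to 0.250 V, and each subsequent code adds the 5 mV step listed as the VR Step in Table 7-5. A small helper that reproduces the table values (a sketch for reference only, not an Intel-provided interface):

    #include <stdio.h>
    #include <stdint.h>

    /* Convert an 8-bit serial VID code to the requested voltage in volts,
     * per the encoding of Table 7-1: 00h = 0 V (off), 01h = 0.250 V, 5 mV per step. */
    static double vid_to_volts(uint8_t vid)
    {
        if (vid == 0x00)
            return 0.0;
        return 0.250 + (vid - 1) * 0.005;
    }

    int main(void)
    {
        printf("VID 0x80 -> %.5f V\n", vid_to_volts(0x80)); /* 0.88500, matches Table 7-1 */
        printf("VID 0xFF -> %.5f V\n", vid_to_volts(0xFF)); /* 1.52000, matches Table 7-1 */
        return 0;
    }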
Table 7-1. IMVP7 Voltage Identification Definition (Sheet 1 of 3)
VID[7:0] HEX VCC_MAX (V)    VID[7:0] HEX VCC_MAX (V)
00000000000.00000 10000000800.88500
00000001010.25000 10000001810.89000
00000010020.25500 10000010820.89500
00000011030.26000 10000011830.90000
00000100040.26500 10000100840.90500
00000101050.27000 10000101850.91000
00000110060.27500 10000110860.91500
00000111070.28000 10000111870.92000
00001000080.28500 10001000880.92500
00001001090.29000 10001001890.93000
000010100A0.29500 100010108A0.93500
000010110B0.30000 100010118B0.94000
000011000C0.30500 100011008C0.94500
000011010D0.31000 100011018D0.95000
000011100E0.31500 100011108E0.95500
000011110F0.32000 100011118F0.96000
00010000100.32500 10010000900.96500
00010001110.33000 10010001910.97000
00010010120.33500 10010010920.97500
00010011130.34000 10010011930.98000
00010100140.34500 10010100940.98500
00010101150.35000 10010101950.99000
00010110160.35500 10010110960.99500
00010111170.36000 10010111971.00000
00011000180.36500 10011000981.00500
00011001190.37000 10011001991.01000
000110101A0.37500 100110109A1.01500
000110111B0.38000 100110119B1.02000
000111001C0.38500 100111009C1.02500
000111011D0.39000 100111019D1.03000
000111101E0.39500 100111109E1.03500
000111111F0.40000 100111119F1.04000
00100000200.40500 10100000A01.04500
00100001210.41000 10100001A11.05000
00100010220.41500 10100010A21.05500
00100011230.42000 10100011A31.06000
00100100240.42500 10100100A41.06500
00100101250.43000 10100101A51.07000
00100110260.43500 10100110A61.07500
00100111270.44000 10100111A71.08000
00101000280.44500 10101000A81.08500
00101001290.45000 10101001A91.09000
Table 7-1. IMVP7 Voltage Identification Definition (Sheet 2 of 3)
VID[7:0] HEX VCC_MAX (V)    VID[7:0] HEX VCC_MAX (V)
001010102A0.45500 1 0 1 0 1 0 1 0 A A 1.09500
001010112B0.46000 1 0 1 0 1 0 1 1 A B 1.10000
001011002C0.46500 1 0 1 0 1 1 0 0 A C 1.10500
001011012D0.47000 1 0 1 0 1 1 0 1 A D 1.11000
001011102E0.47500 1 0 1 0 1 1 1 0 A E 1.11500
001011112F0.48000 1 0 1 0 1 1 1 1 A F 1.12000
00110000300.48500 1 0 1 1 0 0 0 0 B 0 1.12500
00110001310.49000 1 0 1 1 0 0 0 1 B 1 1.13000
00110010320.49500 1 0 1 1 0 0 1 0 B 2 1.13500
00110011330.50000 1 0 1 1 0 0 1 1 B 3 1.14000
00110100340.50500 1 0 1 1 0 1 0 0 B 4 1.14500
00110101350.51000 1 0 1 1 0 1 0 1 B 5 1.15000
00110110360.51500 1 0 1 1 0 1 1 0 B 6 1.15500
00110111370.52000 1 0 1 1 0 1 1 1 B 7 1.16000
00111000380.52500 1 0 1 1 1 0 0 0 B 8 1.16500
00111001390.53000 1 0 1 1 1 0 0 1 B 9 1.17000
001110103A0.53500 1 0 1 1 1 0 1 0 B A 1.17500
001110113B0.54000 1 0 1 1 1 0 1 1 B B 1.18000
001111003C0.54500 1 0 1 1 1 1 0 0 B C 1.18500
001111013D0.55000 1 0 1 1 1 1 0 1 B D 1.19000
001111103E0.55500 1 0 1 1 1 1 1 0 B E 1.19500
001111113F0.56000 1 0 1 1 1 1 1 1 B F 1.20000
01000000400.56500 1 1 0 0 0 0 0 0 C 0 1.20500
01000001410.57000 1 1 0 0 0 0 0 1 C 1 1.21000
01000010420.57500 1 1 0 0 0 0 1 0 C 2 1.21500
01000011430.58000 1 1 0 0 0 0 1 1 C 3 1.22000
01000100440.58500 1 1 0 0 0 1 0 0 C 4 1.22500
01000101450.59000 1 1 0 0 0 1 0 1 C 5 1.23000
01000110460.59500 1 1 0 0 0 1 1 0 C 6 1.23500
01000111470.60000 1 1 0 0 0 1 1 1 C 7 1.24000
01001000480.60500 1 1 0 0 1 0 0 0 C 8 1.24500
01001001490.61000 1 1 0 0 1 0 0 1 C 9 1.25000
010010104A0.61500 1 1 0 0 1 0 1 0 C A 1.25500
010010114B0.62000 1 1 0 0 1 0 1 1 C B 1.26000
010011004C0.62500 1 1 0 0 1 1 0 0 C C 1.26500
010011014D0.63000 1 1 0 0 1 1 0 1 C D 1.27000
010011104E0.63500 1 1 0 0 1 1 1 0 C E 1.27500
010011114F0.64000 1 1 0 0 1 1 1 1 C F 1.28000
01010000500.64500 1 1 0 1 0 0 0 0 D 0 1.28500
01010001510.65000 1 1 0 1 0 0 0 1 D 1 1.29000
01010010520.65500 1 1 0 1 0 0 1 0 D 2 1.29500
01010011530.66000 1 1 0 1 0 0 1 1 D 3 1.30000
01010100540.66500 1 1 0 1 0 1 0 0 D 4 1.30500
Table 7-1. IMVP7 Voltage Identification Definition (Sheet 3 of 3)
VID[7:0] HEX VCC_MAX (V)    VID[7:0] HEX VCC_MAX (V)
01010101550.67000 11010101D51.31000
01010110560.67500 11010110D61.31500
01010111570.68000 11010111D71.32000
01011000580.68500 11011000D81.32500
01011001590.69000 11011001D91.33000
010110105A0.69500 11011010DA1.33500
010110115B0.70000 11011011DB1.34000
010111005C0.70500 11011100DC1.34500
010111015D0.71000 11011101DD1.35000
010111105E0.71500 11011110DE1.35500
010111115F0.72000 11011111DF1.36000
01100000600.72500 11100000E01.36500
01100001610.73000 11100001E11.37000
01100010620.73500 11100010E21.37500
01100011630.74000 11100011E31.38000
01100100640.74500 11100100E41.38500
01100101650.75000 11100101E51.39000
01100110660.75500 11100110E61.39500
01100111670.76000 11100111E71.40000
01101000680.76500 11101000E81.40500
01101001690.77000 11101001E91.41000
011010106A0.77500 11101010EA1.41500
011010116B0.78000 11101011EB1.42000
011011006C0.78500 11101100EC1.42500
011011016D0.79000 11101101ED1.43000
011011106E0.79500 11101110EE1.43500
011011116F0.80000 11101111EF1.44000
01110000700.80500 11110000F01.44500
01110001710.81000 11110001F11.45000
01110010720.81500 11110010F21.45500
01110011730.82000 11110011F31.46000
01110100740.82500 11110100F41.46500
01110101750.83000 11110101F51.47000
01110110760.83500 11110110F61.47500
01110111770.84000 11110111F71.48000
01111000780.84500 11111000F81.48500
01111001790.85000 11111001F91.49000
011110107A0.85500 11111010FA1.49500
011110117B0.86000 11111011FB1.50000
011111007C0.86500 11111100FC1.50500
011111017D0.87000 11111101FD1.51000
011111107E0.87500 11111110FE1.51500
011111117F0.88000 11111111FF1.52000

7.4 System Agent (SA) VCC VID

The VCCSA rail is configured by the processor output pins VCCSA_VID[1:0].

VCCSA_VID[0] output default logic state is low for the 2nd Generation Intel® Core™ processor family mobile; logic high is reserved for future compatibility.

VCCSA_VID[1] output default logic state is low and will not change the SA voltage. Logic high will reduce the voltage.

Note: During boot, the processor's VCCSA is 0.9 V.

Table 7-2 specifies the different VCCSA_VID configurations.
Table 7-2. VCCSA_VID configuration

Processor family: 2nd Generation Intel® Core™ processor family mobile
  VCCSA_VID[0] = 0, VCCSA_VID[1] = 0: Selected VCCSA = 0.9 V (XE and SV segments), 0.9 V (LV and ULV segments)
  VCCSA_VID[0] = 0, VCCSA_VID[1] = 1: Selected VCCSA = 0.8 V (XE and SV segments), 0.85 V (LV and ULV segments)
Processor family: Future Intel processors
  VCCSA_VID[0] = 1, VCCSA_VID[1] = 0: Note 1
  VCCSA_VID[0] = 1, VCCSA_VID[1] = 1: Note 1
Notes:
1. Some of the VCCSA configurations are reserved for future Intel processor families.
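As a compact restatement of Table 7-2, the sketch below maps the VCCSA_VID[1:0] values to the selected VCCSA for this processor family; the is_lv_ulv flag is a hypothetical parameter used only to distinguish the LV/ULV segments from the XE/SV segments.

    #include <stdio.h>

    /* Returns the selected VCCSA in volts per Table 7-2, or a negative value for
     * reserved encodings (VCCSA_VID[0] = 1 is reserved for future processors). */
    static double vccsa_from_vid(int vid1, int vid0, int is_lv_ulv)
    {
        if (vid0 != 0)
            return -1.0;                    /* reserved for future processor families */
        if (vid1 == 0)
            return 0.90;                    /* default: 0.9 V for all segments */
        return is_lv_ulv ? 0.85 : 0.80;     /* VCCSA_VID[1] = 1 lowers the SA voltage */
    }

    int main(void)
    {
        printf("XE/SV segment, VCCSA_VID[1]=1: %.2f V\n", vccsa_from_vid(1, 0, 0));  /* 0.80 */
        printf("LV/ULV segment, VCCSA_VID[1]=1: %.2f V\n", vccsa_from_vid(1, 0, 1)); /* 0.85 */
        return 0;
    }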

7.5 Reserved or Unused Signals

The following are the general types of reserved (RSVD) signals and connection guidelines:
• RSVD – these signals should not be connected
• RSVD_TP – these signals should be routed to a test point
• RSVD_NCTF – these signals are non-critical to function and may be left un­connected
Arbitrary connection of these signals to VCC, VCCIO, VDDQ, VCCPLL, VCCSA, VAXG, VSS, or to any other signal (including each other) may result in component malfunction or incompatibility with future processors. See Chapter 8 for a pin listing of the processor and the location of all reserved signals.
For reliable operation, always connect unused inputs or bi-directional signals to an appropriate signal level. Unused active high inputs should be connected through a resistor to ground (VSS). Unused outputs may be left unconnected; however, this may interfere with some Test Access Port (TAP) functions, complicate debug probing, and prevent boundary scan testing. A resistor must be used when tying bi-directional signals to power or ground. When tying any signal to power or ground, a resistor will also allow for system testability.

7.6 Signal Groups

Signals are grouped by buffer type and similar characteristics as listed in Table 7-3. The buffer type indicates which signaling technology and specifications apply to the signals. All the differential signals, and selected DDR3 and Control Sideband signals, have On-Die Termination (ODT) resistors. There are some signals that do not have ODT and need to be terminated on the board.
Table 7-3. Signal Groups (Note 1)

Signal Group / Type: Signals
System Reference Clock / Differential CMOS Input: BCLK, BCLK#, DPLL_REF_CLK, DPLL_REF_CLK#
DDR3 Reference Clocks (Note 2) / Differential DDR3 Output: SA_CK[1:0], SA_CK#[1:0], SB_CK[1:0], SB_CK#[1:0]
DDR3 Command Signals (Note 2) / Single Ended DDR3 Output: SA_BS[2:0], SB_BS[2:0], SA_WE#, SB_WE#, SA_RAS#, SB_RAS#, SA_CAS#, SB_CAS#, SA_MA[15:0], SB_MA[15:0]
DDR3 Control Signals (Note 2) / Single Ended DDR3 Output: SA_CKE[1:0], SB_CKE[1:0], SA_CS#[1:0], SB_CS#[1:0], SA_ODT[1:0], SB_ODT[1:0], SM_DRAMRST#
DDR3 Data Signals (Note 2) / Single Ended DDR3 Bi-directional: SA_DQ[63:0], SB_DQ[63:0]
DDR3 Data Signals (Note 2) / Differential DDR3 Bi-directional: SA_DQS[7:0], SA_DQS#[7:0], SB_DQS[7:0], SB_DQS#[7:0]
DDR3 Compensation / Analog Bi-directional: SM_RCOMP[2:0]
DDR3 Reference / Analog Input: SM_VREF
TAP (ITP/XDP) / Single Ended CMOS Input: TCK, TDI, TMS, TRST#
TAP (ITP/XDP) / Single Ended Open-Drain Output: TDO
TAP (ITP/XDP) / Single Ended Output: DBR#
TAP (ITP/XDP) / Single Ended Output: BCLK_ITP, BCLK_ITP#
TAP (ITP/XDP) / Single Ended Asynchronous CMOS Bi-directional: BPM#[7:0]
TAP (ITP/XDP) / Single Ended Asynchronous CMOS Input: PREQ#
TAP (ITP/XDP) / Single Ended Asynchronous CMOS Output: PRDY#
Control Sideband / Single Ended CMOS Input: CFG[17:0]
Control Sideband / Single Ended Bi-directional CMOS Input/Open Drain Output: PROCHOT#
Control Sideband / Single Ended Asynchronous CMOS Output: THERMTRIP#, CATERR#
Control Sideband / Single Ended Asynchronous CMOS Input: SM_DRAMPWROK, UNCOREPWRGOOD (Note 4), PM_SYNC, RESET#
Control Sideband / Single Ended Asynchronous Bi-directional: PECI
Voltage Regulator / Single Ended CMOS Input: VIDALERT#
Voltage Regulator / Single Ended Open Drain Output: VIDSCLK
Voltage Regulator / Single Ended CMOS Output: VCCSA_VID[1]
Voltage Regulator / Single Ended Asynchronous CMOS/Open Drain Bi-directional: VIDSOUT
Voltage Regulator / Single Ended Analog Output: VCCSA_SENSE, VCC_DIE_SENSE
Voltage Regulator / Differential Analog Output: VCC_SENSE, VSS_SENSE, VCCIO_SENSE, VSS_SENSE_VCCIO, VAXG_SENSE, VSSAXG_SENSE, VCC_VAL_SENSE, VSS_VAL_SENSE, VAXG_VAL_SENSE, VSSAXG_VAL_SENSE
Power/Ground/Other / Power: VCC, VCCIO, VCCSA, VCCPLL, VDDQ, VAXG, VCCPQE (Note 3), VCCDQ (Note 3)
Power/Ground/Other / Ground: VSS, VSS_NCTF (Note 3), DC_TEST_xx# (Note 3)
Power/Ground/Other / No Connect: RSVD, RSVD_NCTF
Power/Ground/Other / Test Point: RSVD_TP
Power/Ground/Other / Other: SKTOCC#, PROC_DETECT#
PCI Express* Graphics / Differential PCI Express Input: PEG_RX[15:0], PEG_RX#[15:0]
PCI Express* Graphics / Differential PCI Express Output: PEG_TX[15:0], PEG_TX#[15:0]
PCI Express* Graphics / Single Ended Analog Input: PEG_ICOMPO, PEG_ICOMPI, PEG_RCOMPO
eDP / Differential eDP Output: eDP_TX[3:0], eDP_TX#[3:0]
eDP / Differential eDP Bi-directional: eDP_AUX, eDP_AUX#
eDP / Single Ended Asynchronous CMOS Input: eDP_HPD
eDP / Single Ended Analog Input: eDP_ICOMPO, eDP_COMPIO
DMI / Differential DMI Input: DMI_RX[3:0], DMI_RX#[3:0]
DMI / Differential DMI Output: DMI_TX[3:0], DMI_TX#[3:0]
Intel® FDI / Single Ended CMOS Input: FDI0_FSYNC, FDI1_FSYNC, FDI0_LSYNC, FDI1_LSYNC
Intel® FDI / Single Ended Asynchronous CMOS Input: FDI_INT
Intel® FDI / Differential FDI Output: FDI0_TX[3:0], FDI0_TX#[3:0], FDI1_TX[3:0], FDI1_TX#[3:0]
Future Compatibility: PROC_SELECT#, VCCSA_VID[0], VCCIO_SEL, SA_DIMM_VREFDQ, SB_DIMM_VREFDQ
Notes:
1. Refer to Chapter 6 for signal description details.
2. SA and SB refer to DDR3 Channel A and DDR3 Channel B.
3. These signals only apply to BGA packages.
4. The maximum rise/fall time of UNCOREPWRGOOD is 20 ns.
All Control Sideband Asynchronous signals are required to be asserted/de-asserted for at least 10 BCLKs with a maximum Trise/Tfall of 6 ns for the processor to recognize the proper signal state. See Section 7.10 for the DC specifications.

7.7 Test Access Port (TAP) Connection

Due to the voltage levels supported by other components in the Test Access Port (TAP) logic, Intel recommends the processor be first in the TAP chain, followed by any other components within the system. A translation buffer should be used to connect to the rest of the chain unless one of the other components is capable of accepting an input of the appropriate voltage. Two copies of each signal may be required with each driving a different voltage level.
The processor supports the Boundary Scan (JTAG) IEEE 1149.1-2001 and IEEE 1149.6-2003 standards. Note that a small portion of the I/O pins may support only one of these standards.

7.8 Storage Condition Specifications

Environmental storage condition limits define the temperature and relative humidity that the device is exposed to while being stored in a moisture barrier bag. The specified storage conditions are for component level prior to board attach.
Table 7-4 specifies absolute maximum and minimum storage temperature limits that
represent the maximum or minimum device condition beyond which damage, latent or otherwise, may occur. The table also specifies sustained storage temperature, relative humidity, and time-duration limits. These limits specify the maximum or minimum device storage conditions for a sustained period of time. Failure to adhere to the following specifications can affect long term reliability of the processor.
Table 7-4. Storage Condition Ratings

Symbol: Parameter; Min; Max; Notes
T absolute storage: The non-operating device storage temperature. Damage (latent or otherwise) may occur when exceeded for any length of time. Min: -25 °C; Max: 125 °C. Notes 1, 2, 3, 4.
T sustained storage: The ambient storage temperature (in shipping media) for a sustained period of time. Min: -5 °C; Max: 40 °C. Notes 5, 6.
T short term storage: The ambient storage temperature (in shipping media) for a short period of time. Min: -20 °C; Max: 85 °C.
RH sustained storage: The maximum device storage relative humidity for a sustained period of time. Max: 60% at 24 °C. Notes 6, 7.
Time sustained storage: A prolonged or extended period of time; typically associated with customer shelf life. Min: 0 months; Max: 30 months. Note 7.
Time short term storage: A short period of time. Min: 0 hours; Max: 72 hours.

Notes:
1. Refers to a component device that is not assembled in a board or socket and is not electrically connected to a voltage reference or I/O signal.
2. Specified temperatures are not to exceed values based on data collected. Exceptions for surface mount reflow are specified by the applicable JEDEC standard. Non-adherence may affect processor reliability.
3. T absolute storage applies to the unassembled component only and does not apply to the shipping media, moisture barrier bags, or desiccant.
4. Component product device storage temperature qualification methods may follow JESD22-A119 (low temp) and JESD22-A103 (high temp) standards when applicable for volatile memory.
5. Intel branded products are specified and certified to meet the following temperature and humidity limits, which are given as an example only (Non-Operating Temperature Limit: -40 °C to 70 °C; Humidity: 50% to 90%, non-condensing with a maximum wet bulb of 28 °C). Post board attach storage temperature limits are not specified for non-Intel branded boards.
6. The JEDEC J-JSTD-020 moisture level rating and associated handling practices apply to all moisture sensitive devices removed from the moisture barrier bag.
7. Nominal temperature and humidity conditions and durations are given and tested within the constraints imposed by T sustained storage and customer shelf life in applicable Intel boxes and bags.

7.9 DC Specifications

The processor DC specifications in this section are defined at the processor pins, unless noted otherwise. See Chapter 8 for the processor pin listings and
Chapter 6 for signal definitions.
• The DC specifications for the DDR3 signals are listed in Table 7-11. Control Sideband and Test Access Port (TAP) DC specifications are listed in Table 7-12.
Table 7-5 lists the DC specifications for the processor; these are valid only while meeting specifications for junction temperature, clock frequency, and input voltages. Care should be taken to read all notes associated with each parameter.
• AC tolerances for all DC rails include dynamic load currents at switching frequencies up to 1 MHz.

7.9.1 Voltage and Current Specifications

Table 7-5. Processor Core (VCC) Active and Idle Mode DC Voltage and Current Specifications

Symbol (Parameter): Values per segment; Unit; Notes
HFM_VID (VID Range for Highest Frequency Mode, includes Turbo Mode operation): XE 0.8–1.35 V; SV-QC 0.8–1.35 V; SV-DC 0.8–1.35 V; LV 0.75–1.3 V; ULV 0.7–1.2 V. Notes 1, 2, 6, 8.
LFM_VID (VID Range for Lowest Frequency Mode): XE 0.65–0.95 V; SV-QC 0.65–0.95 V; SV-DC 0.65–0.95 V; LV 0.65–0.9 V; ULV 0.65–0.9 V. Notes 1, 2, 8.
VCC (VCC for processor core): 0.3–1.52 V. Notes 2, 3.
ICCMAX (Maximum Processor Core ICC): XE 97 A; SV-QC 94 A; SV-DC 53 A; LV 43 A; ULV 33 A. Notes 4, 6, 8.
ICC_TDC (Thermal Design ICC): XE 62 A; SV-QC 52 A; SV-DC 36 A; LV 25 A; ULV 21.5 A. Notes 5, 6, 8.
ICC_LFM (ICC at LFM): XE 31 A; SV-QC 28 A; SV-DC 11.6 A; LV 17.6 A; ULV 12.5 A. Note 5.
ICC C6/C7 (ICC at C6/C7 idle state): XE 6 A; SV-QC 5.5 A; SV-DC 2.5 A; LV 3.8 A; ULV 2.6 A. Note 10.
TOLVCC (Voltage Tolerance): PS0 ±15 mV; PS1 ±12 mV; PS2, PS3 ±11.5 mV. Notes 7, 9.
Ripple (Ripple Tolerance): PS0 and ICC > TDC+30% ±15 mV; PS0 and ICC ≤ TDC+30% ±10 mV; PS1 ±13 mV; PS2 -7.5/+18.5 mV; PS3 -7.5/+27.5 mV. Notes 7, 9.
VR Step (VID resolution): 5 mV.
SLOPELL (Processor Loadline): XE -1.9 mΩ; SV-QC -1.9 mΩ; SV-DC -1.9 mΩ; LV -2.9 mΩ; ULV -2.9 mΩ.

Notes:
1. Unless otherwise noted, all specifications in this table are based on post-silicon estimates and simulations or empirical data.
2. Each processor is programmed with a maximum valid voltage identification value (VID), which is set at manufacturing and cannot be altered. Individual maximum VID values are calibrated during manufacturing such that two processors at the same frequency may have different settings within the VID range. Note that this differs from the VID employed by the processor during a power or thermal management event (Intel Adaptive Thermal Monitor, Enhanced Intel SpeedStep Technology, or Low Power States).
3. The voltage specification requirements are measured across VCC_SENSE and VSS_SENSE lands at the socket with a 100-MHz bandwidth oscilloscope, 1.5 pF maximum probe capacitance, and 1-MΩ minimum impedance. The maximum length of ground wire on the probe should be less than 5 mm. Ensure external noise from the system is not coupled into the oscilloscope probe.
4. Processor core VR to be designed to electrically support this current.
5. Processor core VR to be designed to thermally support this current indefinitely.
6. This specification assumes that Intel Turbo Boost Technology is enabled.
7. Long term reliability cannot be assured if tolerance, ripple, and core noise parameters are violated.
8. Long term reliability cannot be assured in conditions above or below Max/Min functional limits.
9. PSx refers to the voltage regulator power state as set by the SVID protocol.
10. Idle power specification is measured under temperature condition of 35 °C.
Table 7-6. Processor Uncore (VCCIO) Supply DC Voltage and Current Specifications

Symbol (Parameter): Value; Unit
VCCIO (Voltage for the memory controller and shared cache, defined at the motherboard VCCIO_SENSE and VSS_SENSE_VCCIO): Typ 1.05 V.
TOLCCIO (VCCIO Tolerance, defined across VCCIO_SENSE and VSS_SENSE_VCCIO): DC ±2% including ripple; AC ±3%.
ICCMAX_VCCIO (Maximum current for the VCCIO rail): 8.5 A.
ICCTDC_VCCIO (Thermal Design Current (TDC) for the VCCIO rail): 8.5 A.
Note: Long term reliability cannot be assured in conditions above or below Max/Min functional limits.
Table 7-7. Memory Controller (VDDQ) Supply DC Voltage and Current Specifications

Symbol (Parameter): Value; Unit; Notes
VDDQ (DC+AC) (Processor I/O supply voltage for DDR3, DC + AC specification): Typ 1.5 V.
TOLDDQ (VDDQ Tolerance): DC ±3%; AC ±2%; AC+DC ±5%.
ICCMAX_VDDQ (Maximum current for the VDDQ rail): 5 A. Note 1.
ICCAVG_VDDQ (Standby) (Average current for the VDDQ rail during Standby): 66 mA typical, 133 mA maximum.
Notes:
1. The current supplied to the SO-DIMM modules is not included in this specification.

Table 7-8. System Agent (VCCSA) Supply DC Voltage and Current Specifications

Symbol (Parameter): Value; Unit
VCCSA (Voltage for the System Agent, defined at VCCSA_SENSE): Min 0.75 V; Max 0.90 V.
TOLCCSA (VCCSA Tolerance): AC+DC ±5%.
ICCMAX_VCCSA (Maximum current for the VCCSA rail): 6 A.
ICCTDC_VCCSA (Thermal Design Current (TDC) for the VCCSA rail): 6 A.
Slew Rate (Voltage ramp rate, dV/dT): Min 0.5 mV/us; Max 10 mV/us.
Note: Long term reliability cannot be assured in conditions above or below Max/Min functional limits.
Table 7-9. Processor PLL (VCCPLL) Supply DC Voltage and Current Specifications

Symbol (Parameter): Value; Unit
VCCPLL (PLL supply voltage, DC + AC specification): Typ 1.8 V.
TOLCCPLL (VCCPLL Tolerance): AC+DC ±5%.
ICCMAX_VCCPLL (Maximum current for the VCCPLL rail): 1.2 A.
ICCTDC_VCCPLL (Thermal Design Current (TDC) for the VCCPLL rail): 1.2 A.
Note: Long term reliability cannot be assured in conditions above or below Max/Min functional limits.
Table 7-10. Processor Graphics (VAXG) Supply DC Voltage and Current Specifications

Symbol (Parameter): Values per segment; Unit; Notes
GFX_VID (Active VID Range for VAXG): XE, SV-QC, SV-DC 0.65–1.35 V; LV 0.65–1.35 V; ULV 0.65–1.35 V. Notes 2, 3.
VAXG (Processor Graphics core voltage): 0–1.52 V. Note 1.
ICCMAX_VAXG (Maximum current for the Processor Graphics rail): XE, SV-QC, SV-DC (GT2) 33 A; SV-DC (GT1) 24 A; LV (GT2) 33 A; ULV (GT2) 26 A; ULV (GT1) 16 A.
ICCTDC_VAXG (Thermal Design Current (TDC) for the Processor Graphics rail): XE, SV-QC, SV-DC (GT2) 21.5 A; SV-DC (GT1) 20 A; LV (GT2) 21.5 A; ULV (GT2) 10 A; ULV (GT1) 8 A.
TOLAXG (VAXG Tolerance): PS0, PS1 ±15 mV; PS2, PS3 ±11.5 mV. Note 4.
Ripple (Ripple Tolerance): PS0, PS1 ±18 mV; PS2 -7.5/+18.5 mV; PS3 -7.5/+27.5 mV. Note 4.
LLAXG (VAXG Loadline): GT2 based units -3.9 mΩ; GT1 based units -4.6 mΩ.

Notes:
1. Unless otherwise noted, all specifications in this table are based on post-silicon estimates and simulations or empirical data.
2. Each processor is programmed with a maximum valid voltage identification value (VID), which is set at manufacturing and cannot be altered. Individual maximum VID values are calibrated during manufacturing such that two processors at the same frequency may have different settings within the VID range. Note that this differs from the VID employed by the processor during a power or thermal management event (Intel Adaptive Thermal Monitor, Enhanced Intel SpeedStep Technology, or Low Power States).
3. The voltage specification requirements are measured across VCC_SENSE and VSS_SENSE lands at the socket with a 100-MHz bandwidth oscilloscope, 1.5 pF maximum probe capacitance, and 1-MΩ minimum impedance. The maximum length of ground wire on the probe should be less than 5 mm. Ensure external noise from the system is not coupled into the oscilloscope probe.
4. PSx refers to the voltage regulator power state as set by the SVID protocol.