INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. EXCEPT AS PROVIDED IN INTEL'S TERMS
AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS
OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO
FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER
INTELLECTUAL PROPERTY RIGHT.
Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the
presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by
estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.
Intel products are not intended for use in medical, life saving, life sustaining, critical control or safety systems, or in nuclear facility applications.
Intel may make changes to specifications and product descriptions at any time, without notice.
Designers must not rely on the absence or characteristics of any features or instructions marked “reserved” or “undefined.” Intel reserves these for
future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them.
MPEG is an international standard for video compression/decompression promoted by ISO. Implementations of MPEG CODECs, or MPEG enabled
platforms may require licenses from various entities, including Intel Corporation.
This document and the software described in it are furnished under license and may only be used or copied in accordance with the terms of the
license. The information in this document is furnished for informational use only, is subject to change without notice, and should not be construed as a
commitment by Intel Corporation. Intel Corporation assumes no responsibility or liability for any errors or inaccuracies that may appear in this
document or any software that may be provided in association with this document. Except as permitted by such license, no part of this document may
be reproduced, stored in a retrieval system, or transmitted in any form or by any means without the express written consent of Intel Corporation.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling
1-800-548-4725, or by visiting Intel's website at http://www.intel.com.
BunnyPeople, Celeron, Chips, Dialogic, EtherExpress, ETOX, FlashFile, i386, i486, i960, iCOMP, InstantIP, Intel, Intel Centrino, Intel Centrino logo,
Intel logo, Intel386, Intel486, Intel740, IntelDX2, IntelDX4, IntelSX2, Intel Inside, Intel Inside logo, Intel NetBurst, Intel NetMerge, Intel NetStructure,
Intel SingleDriver, Intel SpeedStep, Intel StrataFlash, Intel Xeon, Intel XScale, IPLink, Itanium, MCS, MMX, MMX logo, Optimizer logo, OverDrive,
Paragon, PDCharm, Pentium, Pentium II Xeon, Pentium III Xeon, Performance at Your Command, Sound Mark, The Computer Inside, The Journey
Inside, VTune, and Xircom are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Programmer’s Guide
IXP400 Software Version 2.0
April 2005
Document Number: 252539, Revision: 007
Intel® IXP400 Software
Revision History
Date            Revision  Description
April 2005      007       Updated guide for IXP400 Software Version 2.0. Added:
                          • Chapter 22, “Operating System Abstraction Layer (OSAL)”
                          Change bars indicate areas of change.
November 2004   006       Updated manual for IXP400 Software Version 1.4. Removed API
                          documentation (now in a separate reference).
December 2003   005       Updated manual for “Intel® IXP400 Software v1.5”, and revised:
September 2003  004       Made two minor corrections.
August 2003     003       Updated manual for IXP400 Software Version 1.3.
February 2003   002       Removed “Intel Confidential” classification.
February 2003   001       Initial release of document.
1 Introduction
This chapter contains important information to help you learn about and use the Intel® IXP400
Software v2.0 release.
1.1 Versions Supported by this Document
This programmer’s guide is intended to be used in conjunction with software release 2.0. Always refer to the accompanying release notes for the latest information regarding the proper documentation sources to be used.
Previous versions of the programmer’s guide for earlier IXP400 software releases can be found on the following Web site:

To verify the version of an installed software release:
1. Open the file ixp400_xscale_sw/src/include/IxVersionId.h.
2. Check the value of IX_VERSION_ID.
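For illustration, the version identifier is a plain C string constant. A hypothetical excerpt (the exact value differs from release to release):

    /* Hypothetical excerpt from IxVersionId.h -- the value of this
     * macro changes with each software release. */
    #define IX_VERSION_ID "2_0"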
1.2 Hardware Supported by this Release
The Intel® IXP400 Software v2.0 release supports the following processors:
• All Intel® IXP42X Product Line of Network Processors and IXC1100 Control Plane Processor
• All variants of the Intel® IXP46X Product Line of Network Processors

Warning: Processor capabilities differ between processor product lines or processor variants. Not all capabilities of the processor may be supported by this software release.
1.3 Intended Audience

This document describes the software release 2.0 architecture and is intended for software developers and architects employing IXP42X product line processors or Intel® IXP46X product line processors. The document defines each component’s functionality, demonstrates the behavioral links between the components, and presents the common design policies of each component.
1.4 How to Use this Document
This programmer’s guide is organized as follows:
Chapters                Description
Chapters 1 and 2        Introduce the Intel® IXP400 Software v2.0 and the supported
                        processors, including an overview of the software architecture.
Chapters 4 through 22   Provide functional descriptions of the various access-layer
                        components.
Chapter 3 and 24        Describe the memory buffer management and operating system
                        abstraction layers, needed for a more in-depth architectural
                        understanding of the software.
Chapter 23 and 25–27    Describe codelets (example applications), ADSL driver, I2C driver,
                        and endianness.

For the developer interested in a limited number of specific features of the IXP400 software, a recommended reading procedure would be:
1. Read Chapters 1 through 3 to get a general knowledge of the products’ software and hardware architecture.
2. Read the chapters on the specific access-layer component(s) of interest.
Note: Many of the access-layer components have dependencies on other components —
particularly on IxNpeDl and IxQmgr. For that reason, developers also should review
those chapters.
3. Review the codelet descriptions in Chapter 23 and their respective source code for those
codelets that offer features of interest.
4. Refer to the API source code and source code documentation found in the software release
documents folder as necessary.
1.5 About the Processors
Next-generation networking solutions must meet the growing demands of users for high-performance data, voice, and networked multimedia products. Manufacturers of networking equipment must develop new products under stringent time-to-market deadlines and deliver products whose software can be easily upgraded. The IXP4XX product line and IXC1100 control plane processors family is designed to meet the needs of broadband and embedded networking products such as high-end residential gateways; small to medium enterprise (SME) routers, switches, security devices; DSLAMs (Digital Subscriber Line Access Multiplexers) for multi-dwelling units (MxU); wireless access points; industrial control systems; and networked printers.
The IXP4XX product line and IXC1100 control plane processors deliver wire-speed performance
and sufficient “processing headroom” for manufacturers to add a variety of rich software services
to support their applications. These are highly integrated network processors that support multiple
WAN and LAN technologies, giving customers a common architecture for multiple applications.
With their development platform, a choice of operating systems, and a broad range of development
tools, the processor family is supported by a complete development environment for faster time-to-market. This network processor family offers the choice of multiple clock speeds at 266, 400, 533
and 667 MHz, with both commercial (0° to 70° C) and extended (-40° to 85° C) temperature
options.
The IXP4XX product line and IXC1100 control plane processors have a unique distributed processing architecture that features the performance of the Intel XScale® core and three Network Processor Engines (NPEs). The combination of the four high-performance processors provides tremendous processing power and enables wire-speed performance at both the LAN and WAN ports. The three NPEs are designed to offload many computationally intensive data plane operations from the Intel XScale core. This provides ample “processing headroom” on the Intel XScale core for developers to add differentiating product features. Software development is made easier by the extensive Intel XScale core tools environment that includes compilers, debuggers, operating systems, models, support services from third party vendors, and fully documented evaluation hardware platforms and kits. The compiler, assembler, and linker support specific optimizations designed for the Intel XScale microarchitecture, the ARM* instruction set, and the Intel DSP extensions.
For a list of IXP42X product line features, please see the Intel® IXP42X Product Line of Network Processors and IXC1100 Control Plane Processor Datasheet.
For a list of IXP46X product line features, please see the Intel® IXP46X Product Line of Network Processors Datasheet.
1.6 Related Documents
Users of this document should always refer to the associated Software Release Notes for the
specific release. Additional Intel documents listed below are available from your field
representative or from the following Web site:
Document Title                                                                 Document #
IEEE Standard for a Precision Clock Synchronization Protocol for Networked
Measurement and Control Systems (IEEE Std. 1588™ - 2002)                       –
ARM Ltd., AMBA Specification, Rev. 2.0, May 1999                               –
http://www.pcisig.com/reflector/msg01668.html, a discussion on a PCI bridge
between little and big endian devices                                          –
1.7 Acronyms
Acronym    Description
AAL        ATM Adaptation Layer
ABR        Available Bit Rate
ACK        Acknowledge Packet
ADSL       Asymmetric Digital Subscriber Line
AES        Advanced Encryption Standard
AH         Authentication Header (RFC 2402)
AHB        Advanced High-Performance Bus
AL         Adaptation Layer
AP         Access Permission
APB        Advanced Peripheral Bus
API        Application Programming Interface
AQM        AHB Queue Manager
ARC4       Alleged RC4
ATM        Asynchronous Transfer Mode
ATU-C      ADSL Termination Unit — Central Office
ATU-R      ADSL Termination Unit — Remote
BE         Big-Endian
BSD        Berkeley Software Distribution
BSP        Board Support Package
CAC        Connection Admission Control
CAS        Channel Associated Signaling
CBC        Cipher Block Chaining
CBR        Constant Bit Rate
CCD        Cryptographic Context Database
CCM        Counter mode encryption with CBC-MAC authentication
CDVT       Cell Delay Variation Tolerance
CFB        Cipher FeedBack
CPCS       Common Part Convergence Sublayer
CPE        Customer Premise Equipment
CPU        Central Processing Unit
CRC        Cyclic Redundancy Check
CSR        Customer Software Release
CTR        Counter Mode
DDR        Double Data Rate
DES        Data Encryption Standard
DMT        Discrete Multi-Tone
DOI        Domain of Interpretation
DSL        Digital Subscriber Line
DSP        Digital Signal Processor
E          Empty
E1         Euro 1 trunk line (2.048 Mbps)
ECB        Electronic Code Book
ECC        Error Correction Code
EISA       Extended ISA
ERP        Endpoint Request Packet
ESP        Encapsulation Security Payload (RFC 2406)
Eth0       Ethernet NPE A
Eth1       Ethernet NPE B
F          Full
FCS        Frame Check Sequence
FIFO       First In First Out
FRAD       Frame Relay Access Device
FRF        Frame Relay Forum
FXO        Foreign Exchange Office
FXS        Foreign Exchange Subscriber
G.SHDSL    ITU G series specification for symmetric High Bit Rate Digital Subscriber Line
GCI        General Circuit Interface
GE         Gigabit Ethernet
GFR        Guaranteed Frame Rate
GPIO       General Purpose Input/Output
HDLC       High-Level Data Link Control
HDSL2      High Bit-Rate Digital Subscriber Line version 2
HEC        Header Error Check
HLD        High Level Design
HMAC       Hashed Message Authentication Code
HPI        Host Port Interface
HPNA       Home Phone Network Alliance
HSS        High Speed Serial
HSSI       High Speed Serial Interface
HW         Hardware
IAD        Integrated Access Device
ICV        Integrity Check Value
IKE        Internet Key Exchange
IMA        Inverse Multiplexing over ATM
IP         Internet Protocol
IPsec      Internet Protocol Security
IRQ        Interrupt Request
ISA        Industry Standard Architecture
ISR        Interrupt Service Routine
ISR        Interrupt Sub-Routine
IV         Initialization Vector
LAN        Local Area Network
LE         Little-Endian
LSB        Least Significant Bit
MAC        Media Access Control
MAC        Message Authentication Code (in SSL or TLS)
MBS        Maximum Burst Size
mbuf       BSD 4.4–like mbuf implementation for IXP400 software. Referred to as IX_MBUF, IXP_BUF and IX_OSAL_MBUF interchangeably.
MCR        Minimum Cell Rate
MCU        Memory Controller Unit
MD5        Message Digest 5
MFS        Maximum Frame Size
MIB        Management Information Base
MII        Media-Independent Interface
MLPPP      Multi-Link Point-to-Point Protocol
MMU        Memory Management Unit
MPHY       Multi PHY
MPI        Memory Port Interface
MSB        Most Significant Bit
MVIP       Multi-Vendor Integration Protocol
MxU        Multi-dwelling Unit
NAK        Not-Acknowledge Packet
NAPT       Network Address Port Translation
NAT        Network Address Translation
NE         Nearly Empty
NF         Nearly Full
NOTE       Not Empty
NOTF       Not Full
NOTNE      Not Nearly Empty
NOTNF      Not Nearly Full
NPE        Network Processing Engine
OC3        Optical Carrier - 3
OF         Overflow
OFB        Output FeedBack
OS         Operating System
OSAL       Operating System Abstraction Layer
PBX        Private Branch Exchange
PCI        Peripheral Component Interconnect
PCR        Peak Cell Rate
PDU        Protocol Data Unit
PHY        Physical Layer Interface
PID        Packet Identifier
PMU        Performance Monitoring Unit
PRE        Preamble Packet
PTP        Precision Time Protocol
QM or QMgr Queue Manager
rt-VBR     Real Time Variable Bit Rate
Rx         Receive
SA         Security Association
SAR        Segmentation and Re-assembly
SCR        Sustainable Cell Rate
SDRAM      Synchronous Dynamic Random Access Memory
SDSL       Symmetric Digital Subscriber Line
SDU        Service Data Unit
SHA1       Secure Hash Algorithm 1
SIO        Standard I/O (input/output)
SIP        Session Initiation Protocol
SNMP       Simple Network Management Protocol
SOF        Start of Frame
SPHY       Single PHY
SSL        Secure Socket Layer
SSP        Synchronous Serial Port
SVC        Switched Virtual Connection
SWCP       Switching Coprocessor
TCD        Target Controller Driver
TCI        Transmission Control Interface
TCP        Transmission Control Protocol
TDM        Time Division Multiplexing
TLB        Translation Lookaside Buffer
TLS        Transport Layer Security
ToS        Type of Service
Tx         Transmit
UBR        Unspecified Bit Rate
UDC        Universal Serial Bus Device Controller
UF         Underflow
USB        Universal Serial Bus
UTOPIA     Universal Test and Operation PHY Interface for ATM
VBR        Variable Bit Rate
VC         Virtual Connection
VCC        Virtual Circuit Connection
VCI        Virtual Circuit Identifier
VDSL       Very High Speed Digital Subscriber Line
VoDSL      Voice over Digital Subscriber Line
VoFR       Voice over Frame Relay
VoIP       Voice over Internet Protocol
VPC        Virtual Path Connection
VPI        Virtual Path Identifier
VPN        Virtual Private Network
WAN        Wide Area Network
WEP        Wired Equivalent Privacy
Xcycle     Idle-Cycle Counter Utilities
xDSL       Any Digital Subscriber Line
XOR        Exclusive OR
2 Software Architecture Overview
2.1 High-Level Overview
The primary design principle of the Intel® IXP400 Software v2.0 architecture is to enable the supported processors’ hardware in a manner that allows maximum flexibility. Intel® IXP400 Software v2.0 consists of a collection of software components specific to the IXP4XX product line and IXC1100 control plane processors and their supported development and reference boards. This section discusses the software architecture of this product, as shown in “Intel® IXP400 Software v2.0 Architecture Block Diagram” on page 28.
The NPE microcode consists of one or more loadable and executable NPE instruction files that implement the NPE functionality behind the IXP400 software library. The NPEs are RISC processors embedded in the main processor that are surrounded by multiple coprocessor components. The coprocessors provide specific hardware services (for example, Ethernet processing and MAC interfaces, cryptographic processing, etc.). The NPE instruction files are incorporated into the IXP400 software library at build time (or at run-time for Linux). The library includes an NPE downloader component that provides NPE code version selection and downloading services. A variety of NPE microcode images are provided, enabling different combinations of services.
The Access Layer provides a software interface which gives customer code access to the
underlying capabilities of the supported processors. This layer is made up of a set of software
components (access-layer components), which clients can use to configure, control and
communicate with the hardware. Specifically, most access-layer components provide an API
interface to specific NPE-hosted hardware capabilities, such as AAL 0 and AAL 5 on UTOPIA,
Cryptography, Ethernet, HSS, or DMA. The remaining access-layer components provide an API
interface to peripherals on the processors (for example, UART and USB) or features of the Intel
XScale core (for example, Product ID Registers or Performance Monitoring Unit).
The example Codelets are narrowly focused example applications that show how to use many of
the services or functions provided by the Intel XScale core library and the underlying hardware.
Many codelets are organized by hardware port type and typically exercise some Layer-2
functionality on that port, such as: AAL 5 PDU Transmit / Receive over UTOPIA, Channelized or
HDLC Transmit / Receive over HSS, Ethernet frame Transmit / Receive.
The Operating System Abstraction Layer (OSAL) defines a portable interface for operating
system services. The access-layer components and the codelets abstract their OS dependency to
this module.
Device Driver modules translate generic, operating-system-specific device interface commands to the Access Layer software APIs. Some device driver modules are provided by the OS
vendors’ Board Support Packages. Others may be provided in conjunction with the IXP400
software.
Figure 1. Intel® IXP400 Software v2.0 Architecture Block Diagram
2.2 Deliverable Model
Intel® IXP400 Software v2.0 consists of these elements:
• Intel® IXP400 Software v2.0 access-layer components and OSAL layer
• Complete documentation and source code for IXP400 software components
• NPE microcode images
• Example codelets

Note: The software releases do not include tools to develop NPE software. The supplied NPE functionality is accessible through the access-layer APIs provided by the software release 2.0 library. The NPE microcode is provided as a .c file that must be compiled with the access-layer library. NPE microcode is compatible only with the specific access-layer it is provided with.
2.3 Operating System Support
The Intel XScale microarchitecture offers a broad range of tools together with support for two
widely adopted operating systems. The software release 2.0 supports VxWorks* and the standard
Linux* 2.4 kernel. MontaVista* software will provide the support for Linux. Support for other
operating systems may be available. For further information, visit the following Internet site:
The software release 2.0 software library is OS-independent in that all components are written in ANSI-C with no direct calls to any OS library function that is not covered by ANSI-C. A thin abstraction layer is provided for some operating system services (timers, mutexes, semaphores, and thread management), which can be readily modified to support additional operating systems. This enables the devices to be compatible with multiple operating systems and allows customers the flexibility to port the IXP4XX product line and IXC1100 control plane processors to their OS of choice.
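As a minimal sketch of how client code might use these abstracted services, assuming the ixOsalMutex* names follow the IxOsal.h conventions (verify the exact signatures against the OSAL API reference):

    #include "IxOsal.h"

    /* Sketch: protecting a critical section with the OSAL mutex service. */
    static IxOsalMutex configLock;

    void configInit(void)
    {
        ixOsalMutexInit(&configLock);               /* create the mutex */
    }

    void configUpdate(void)
    {
        /* Block until the mutex is owned, touch the shared state,
         * then release it. */
        ixOsalMutexLock(&configLock, IX_OSAL_WAIT_FOREVER);
        /* ... modify shared configuration data ... */
        ixOsalMutexUnlock(&configLock);
    }

Because components call only these abstracted primitives, retargeting the software to another OS means reimplementing this thin layer rather than every component.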
2.4 Development Tools
The Intel XScale microarchitecture offers a broad range of tools together with support for two
widely adopted operating systems. Developers have a wide choice of third-party tools including
compilers, linkers, debuggers and board-support packages (BSPs). Tools include Wind River*
Tornado* 2.2.1 for the VxWorks 5.5.1 real-time operating system, Wind River’s PLATFORM for
Network Equipment* and the complete GNU* Linux* development suite.
Refer to the release notes accompanying the software for information on specific OS support.
2.5 Access Library Source Code Documentation
The access library source code uses a commenting style that supports the Doxygen* tool for use in creating source code documentation. Doxygen is an open-source tool that reads appropriately commented source code and produces hyper-linked documentation of the APIs suitable for on-line browsing (HTML).
The documentation output is typically multiple HTML files, but Doxygen can be configured to
produce LaTeX*, RTF (Rich Text Format*), PostScript, hyper-linked PDF, compressed HTML,
and Unix* man pages. Doxygen is available for Linux, Windows* and other operating systems.
For more information, use the following Web URL:
http://www.doxygen.org.
The IXP400 software compressed file contains the HTML source code documentation at
ixp400_xscale_sw\doc\index.html. This output is suitable for online browsing. For a
printable reference, see the Adobe* Portable Document Format (PDF) file, contained in the
compressed software-download file.
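As an illustration of the commenting style that Doxygen consumes, a function header in such sources takes roughly the following shape (a hypothetical sketch using standard Doxygen tags, not a verbatim excerpt from the library):

    /**
     * @brief One-line summary that Doxygen places in the API index.
     *
     * Longer description, which Doxygen renders as hyper-linked
     * HTML (or PDF, man pages, etc.).
     *
     * @param portId - identifier of the port to operate on
     * @return IX_SUCCESS on success, IX_FAIL otherwise
     */
    PUBLIC IX_STATUS ixExampleFunction(UINT32 portId);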
2.6 Release Directory Structure
The software release 2.0 includes the following directory structure:

\---ixp_osal
    +---doc            (API References in HTML and PDF format)
    +---include
    +---os
    +---src
\---ixp400_xscale_sw
    +---buildUtils     (setting environment vars. in VxWorks and Linux)
    +---doc            (API Reference in HTML and PDF format)
    \---src            (contains access-layer and codelet source code)
        +---adsl       (separate package)
        +---atmdAcc
        +---atmm
        +---atmsch
        +---codelets   (sub-directory for codelet source)
        |   +---atm
        |   +---cryptoAcc   (for crypto version only)
        |   +---dmaAcc
        |   +---ethAal5App
        |   +---ethAcc
        |   +---hssAcc
        |   +---parityENAcc
        |   +---perfProfAcc
        |   +---timeSyncAcc
        |   \---usb
        |       +---drivers
        |       \---include
        +---cryptoAcc
        +---dmaAcc
        +---ethAcc
        |   \---include
        +---ethDB
        |   \---include
        +---ethMii
        +---featureCtrl
        +---hssAcc
        |   \---include
        +---i2c
        +---include    (header location for top-level public modules)
        +---npeDl
        |   \---include    (for crypto version only)
        +---npeMh
        |   \---include
        +---osLinux    (Linux specific operations for loading NPE microcode)
2.7 Threading and Locking Policy
The software release 2.0 access-layer does not implement processes or threads. The architecture assumes execution within a preemptive multi-tasking environment with the existence of multiple client threads and uses common, real-time OS functions — such as semaphores, task locking, and interrupt control — to protect critical data and procedure sequencing. These functions are not provided directly by the OS, but by the OS abstraction components.
2.8 Polled and Interrupt Operation
It is possible to use access-layer components by running the Queue Manager in a polled mode or in an interrupt-driven mode of operation. A customer’s application code may be invoked by registering with the callback mechanisms provided in the access-layer components. Access-layer components do not autonomously bind themselves to interrupts, but generally may be dispatched by an interrupt service routine that is bound to the Queue Manager interrupts. Alternatively, a timer-based task may periodically check the queue manager status and dispatch the access-layer components that are registered to specific queues. Refer to Chapter 18 for additional information.
All data path interfaces are executable in the context of both IRQ and FIQ interrupts, though not all
operating systems may take advantage of FIQ interrupts in their default configuration.
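A minimal sketch of the polled style of operation, using the dispatcher-retrieval call from the IxQMgr API discussed in Chapter 18 (names such as ixQMgrDispatcherLoopGet() and IX_QMGR_QUELOW_GROUP are taken from that API; verify them against the API reference):

    #include "IxQMgr.h"

    /* Sketch: timer- or task-driven polling of the Queue Manager.
     * The dispatcher invokes whatever callbacks the access-layer
     * components registered for their queues. */
    static IxQMgrDispatcherFuncPtr dispatcher;

    void pollSetup(void)
    {
        ixQMgrDispatcherLoopGet(&dispatcher);   /* select the dispatcher */
    }

    void pollOnce(void)
    {
        /* Service the lower queue group; an ISR bound to the QMgr
         * interrupt could make the same call in interrupt-driven mode. */
        (*dispatcher)(IX_QMGR_QUELOW_GROUP);
    }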
2.9 Statistics and MIBs
The software release 2.0 access-layer components only maintain statistics that access-layer clients
cannot collect of their own accord. The access-layer components do not provide management
interfaces (MIBs). Access-layer clients can use the statistics provided to implement their own
MIBs.
2.10 Global Dependency Chart
Figure 2 shows the interdependencies for the major APIs discussed in this document.
Figure 2. Global Dependencies
3 Buffer Management
This chapter describes the data buffer system used in Intel® IXP400 Software v2.0, and includes
definitions of the IXP400 software internal memory buffers, cache management strategies, and
other related information.
3.1 What’s New
There are no changes or enhancements to this component in software release 2.0.
3.2 Overview
Buffer management is the general principle of how and where network data buffers are allocated
and freed in the entire system. Network data buffers, whose formats are known to all involved
components, need to flow between access-layer components.
As shown in Figure 3, the IXP400 software access-layer follows a simple buffer-management principle: all buffers used between an access-layer component and the clients above it must be allocated and freed by the clients, that is, in this case, the operating system driver. The client passes a buffer to an access-layer component for various purposes (generally, Tx and Rx), and the access-layer component returns the buffer to the client when the requested job is completed. The access-layer component’s Operating System Abstraction Layer module provides the mapping of the OS buffer header fields to the IXP buffer format. Clients can also implement their own utilities to convert their buffers to the IXP_BUF format and vice-versa. Depending upon the service requested, the NPE modifies the IXP_BUF’s shared structure and hands the buffer back to the access-layer component. Figure 3 shows the different stages at which the fields in the IXP_BUF buffer get updated at transmit and receive time.
Figure 3. Intel® IXP400 Software Buffer Flow
The access-layer component may call a client-registered callback function to return the buffer, or
may put the buffer back on a free queue for the client to poll. The access-layer components utilize
similar buffer management techniques when communicating with the NPEs.
The network data buffers and their formats (as well as management of the buffers), must be
‘familiar’ to all components so that the buffers can efficiently flow in the system. The IXP400
software uses two internal buffer formats for all network data:
• IXP_BUF
• raw buffer
These two formats are compatible with the IXP400 software’s access-layer components and NPEs.
IXP_BUF

The IXP_BUF is the Intel® IXP400 Software defined buffer format used by the access-layer components. As shown in Figure 4, the Operating System Abstraction Layer of Intel® IXP400 Software v2.0 provides the users with macros to read and write the IX_OSAL_MBUF fields of the IXP_BUF buffer. Intel® IXP400 Software v2.0 users are expected to use the IX_MBUF_xxx macros provided with the API to access the IX_OSAL_MBUF fields.
Figure 4. IXP_BUF User Interface
The usual fields to be updated between the user and the IXP_BUF fields depend on the access-layer component, but most of the Intel® IXP400 Software API requires the use of the following fields:
— IX_DATA
— IX_MLEN
— IX_PKTLEN
— IX_NEXT_BUFFER_IN_PKT_PTR (in case of chained buffers)
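For example, a client preparing a single-buffer packet might set these fields through the IX_OSAL_MBUF macros detailed in Section 3.5 (a sketch; the buffer is assumed to come from an IXP_BUF pool, and frame/frameLen are hypothetical caller-side names):

    #include "IxOsal.h"

    /* Sketch: fill in the user-visible IXP_BUF fields before handing
     * the buffer to an access-layer component. */
    void fillBuffer(IX_OSAL_MBUF *buf, char *frame, int frameLen)
    {
        IX_OSAL_MBUF_MDATA(buf)   = frame;     /* data pointer         */
        IX_OSAL_MBUF_MLEN(buf)    = frameLen;  /* valid data in buffer */
        IX_OSAL_MBUF_PKT_LEN(buf) = frameLen;  /* total packet length  */
        /* Single-buffer packet: terminate the buffer chain. */
        IX_OSAL_MBUF_NEXT_BUFFER_IN_PKT_PTR(buf) = NULL;
    }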
Raw Buffers

Raw buffer format is simply a contiguous section of memory, represented in one of two ways. One way to pass raw buffers between two access-layer components is through an agreement to circularly access the same piece of raw buffer. One access-layer component circularly writes to the buffer while the other access-layer component circularly reads from it. The buffer length and alignment are parts of the agreement. At run-time, another communication channel is needed to synchronize the read and write pointers between the two components.

The other way to pass raw buffers between two components is by passing a pointer to the buffer between the components. If all buffers are the same fixed size, the length can be made known during configuration. Otherwise, another run-time communication channel is needed to communicate the length of the buffer. The raw buffer format is typically used for circuit-switched network data (that is, TDM-based). The access-layer component IxHssAcc channelized service uses raw buffers. Refer to Section 13.7.2 for additional information on raw buffers.
Note: Intel® IXP400 Software provides OSAL macros which can be used to allocate memory for raw buffers as a substitute for allocating IXP_BUF from the pool.
3.3 IXP_BUF Structure
As shown in Figure 5, IXP_BUF is comprised of the following three main structures, each comprised of eight four-byte entries:
1. The first structure consists of eight word fields, some of which are shared between the OS driver / API users and the access-layer components.
2. The second structure consists of internal fields used by the pool manager, which is provided by the OSAL component.
3. The third structure is the NPE shared structure, composed of common header fields and NPE service-specific fields. Depending upon the access-component usage, some of the service-specific fields, such as VLAN tags, may be available to the user through use of macros.
Figure 5. IXP_BUF Structure (Structure 1: IX_MBUF, the OS-dependent buffer format; Structure 2: ix_ctrl, the pool management fields; Structure 3: ix_ne, the NPE shared structure)
3.3.1 IXP_BUF Structure and Macros

Users are expected to use the following IXP_BUF macros to access the IXP_BUF subfields. Figure 6 shows the macros defined by the OSAL component for accessing the IXP_BUF fields.
Figure 6. OSAL IXP_BUF Structure and Macros (maps the IXP_BUF fields, such as ix_data, ix_len, and ix_PktLen, to their IX_OSAL_MBUF access macros, and marks which fields are user-visible, shared with the NPE, or reserved for pool management)
Depending upon the usage model, different software components use the structures to update the internal fields of the IXP_BUF structure. Figure 7 shows a typical interface for the API users or operating system drivers to the IXP_BUF fields. Depending upon the access-layer components in use, the API user may or may not use the service-specific macros to read the NPE-shared structure of the IXP_BUF fields. Reading the MAC address or a VLAN tag for a quick classification is an example of NPE-shared structure use.
Figure 7. API User Interface to IXP_BUF (the API user, for example a driver, uses IX_OSAL_MBUF_XXX macros for fields such as data and length, and IX_ETHACC_NE_XXX service-specific macros for fields such as flags)
Figure 8 shows a typical interface between the Intel® IXP400 Software access-layer components and the IXP_BUF fields. The access-layer components adapt to the endianness defined by the Intel XScale core. The access-layer components can perform reads and writes to the IX_MBUF fields as well as the NPE-shared structure. The service-specific fields to be updated in the NPE-shared structure may vary depending upon access-component needs.
Figure 8. Access-Layer Component Interface to IXP_BUF
Figure 9 below shows the interface between the OSAL pool management module and the pool
management fields used for pool maintenance. The pool management field also stores the
os_buf_ptr field, which is used by the access-layer to retrieve the original pointer to the OS buffer
and is set at the time of pool allocation.
Figure 9. Pool Management Fields
IX_MBUF: OS-Dependent Buffer Format

As shown in Figure 10, the IX_MBUF information follows a format originally defined in the Berkeley Software Distribution (BSD) TCP/IP code distribution, to preserve backward compatibility with previous Intel® IXP400 Software releases. The OSAL layer provides translation functions to map the OS-dependent buffer format to the IX_MBUF format for the Linux* and VxWorks* operating systems. This simplifies buffer management without sacrificing functionality and flexibility.
Figure 10. IXP_BUF: IX_MBUF Structure (the first structure of the IXP_BUF; fields: ix_next, ix_nextpacket, ix_data, ix_len, ix_type, ix_flags, ix_rsvd, ix_PktLen, ix_reserved)
Linux utilizes memory structures called skbuffs. The user allocates an IXP_BUF and sets the data
payload pointer to the skbuff payload pointer. An os_buf_ptr field inside the ixp_ctrl structure
(defined below) of the IXP_BUF is used to save the actual skbuff pointer. In this manner, the OS
buffers are not freed directly by the IXP400 software.
The IXP400 software IXP_BUF to skbuff mapping is a ‘zero-copy’ implementation. There is no
copy/performance penalty in using Linux skbuffs. Other proprietary buffer schemes could also be
implemented with the IXP400 software using the mbuf-to-skbuff implementation as an example.
ix_ctrl: Intel® IXP400 Software Internal Pool Management Fields

As shown in Figure 11, the ix_ctrl fields are set and used by the IXP_BUF pool manager provided by the OSAL component. Some of the fields can be used for specific purposes on different operating systems. For example, the signature verification field is used in Linux when NDEBUG is enabled. The reserved field may be used in VxWorks to support the IPv6 format.
Figure 11. IXP_BUF: ix_ctrl Structure (the second structure of the IXP_BUF; fields: ix_signature, ix_allocated_len, ix_allocated_data, ix_pool, ix_chain, ix_osbuf_ptr, and two reserved words)
ix_ne: IXP400 NPE Shared Structure

As shown in Figure 12, this structure is provided by the Intel XScale core to the NPE. Depending upon the access-layer component usage, some of these fields may be visible to the user through use of macros and may also be altered by the NPE. The lower five words of this structure are defined according to the needs of the NPE microcode; therefore, different NPE images may have a different structure for this part. The upper three words follow the same structure across all the NPE images.

Note: Users should not make any assumptions about the usage of the service-specific fields in this NPE-shared structure. The fields are for internal NPE usage only.
Figure 12. IXP_BUF: NPE Shared Structure (the third structure of the IXP_BUF; fields: ixp_next, ixp_len, ixp_pkt_len, ixp_data, and five NPE service-specific words)
3.4 Mapping of IX_MBUF to Shared Structure
Figure 13 shows an example of how the IX_MBUF headers are internally mapped to the NPE shared structure, in the case of the Ethernet and Crypto access-layer components only. The IX_MBUF standard buffer format is used throughout the access-layer code. In order to minimize the overhead of reading the whole buffer control structure from memory into the NPE while performing NPE-specific services, the pointer to the NPE shared structure is passed to the NPE for processing the data, instead of the buffer descriptor pointer itself. Therefore, for the access-layer components, only the required information (such as next buffer pointer, buffer data pointer, buffer length, and packet length) from the buffer control structure is copied into the NPE shared structure. Depending upon the endianness, the IXP400 software internally swaps the buffers of packetised data and the headers between the upper software layers and the NPEs for the Ethernet and the Crypto access-layer components. It is important to note that the NPE shared buffer format used by the IXP400 software is hard-coded in the NPE microcode. It is not possible to change this shared buffer format.
Figure 13. Internal Mapping of IX_MBUF to the Shared NPE Structure (ix_next, ix_data, ix_len, and the packet length from the first cache line of the IXP_BUF map to ixp_next, ixp_data, ixp_len, and ixp_pkt_len in the second cache line)
3.5 IX_MBUF Structure
Table 1 and Table 2 present the IX_MBUF structure format and details.

Table 1. Internal IX_MBUF Field Format

Offset   Byte 0 .. Byte 3
0        ix_next
4        ix_nextpacket
8        ix_data
12       ix_len
16       ix_type / ix_flags / ix_reserved
20       ix_rsvd
24       ix_pktlen
28       ix_priv (Reserved)
A set of macros is provided in the IXP400 software to access each of the fields in the buffer structure. Each macro takes a single parameter: a pointer to the buffer itself. Each macro returns the value stored in the field. More detail on the fields, their usage, and the macros is provided in Table 2.

Note: The data pointer IX_OSAL_MBUF_MDATA could be aligned on a 16-bit boundary to help align an IP header on a 32-bit boundary.
Table 2. IX_MBUF Field Details (Sheet 1 of 2)

IX_OSAL_MBUF_NEXT_BUFFER_IN_PKT_PTR
  Parameter type: IX_MBUF *. Return type: IX_MBUF *.
  Description: Returns a 32-bit pointer to the next buffer in the packet.
  Purpose: 32-bit pointer to the next buffer in a chain (linked list) of buffers. NULL entry marks end of chain.
  Used by access-layer? Yes, where buffer chaining is supported.

IX_OSAL_MBUF_NEXT_PKT_IN_CHAIN_PTR
  Parameter type: IX_MBUF *. Return type: IX_MBUF *.
  Description: Returns a 32-bit pointer to the first buffer in the next packet in the packet chain.
  Purpose: 32-bit pointer to the next packet in a chain (linked list) of packets. NULL entry marks end of chain. Each packet in the chain may consist of a chain of buffers.
  Used by access-layer? No. Packet chaining is not supported by IXP400 Software.

IX_OSAL_MBUF_MDATA
  Parameter type: IX_MBUF *. Return type: char *.
  Description: Returns a pointer to the first byte of the buffer data.
  Purpose: 32-bit pointer to the data section of a buffer. The data section typically contains the payload of a network buffer.
  Used by access-layer? Yes, but does not get modified by the access-layer.

IX_OSAL_MBUF_MLEN
  Parameter type: IX_MBUF *. Return type: int.
  Description: Returns the number of octets of valid data in the data section of the buffer.
  Purpose: Length (octets) of valid data in the data section of the buffer.
  Used by access-layer? Yes.

IX_OSAL_MBUF_TYPE
  Parameter type: IX_MBUF *. Return type: unsigned char.
  Description: Returns the type field of the buffer.
  Purpose: Buffer type.
  Used by access-layer? Yes, by some components.
Table 2. IX_MBUF Field Details (Sheet 2 of 2)

IX_OSAL_MBUF_FLAGS
  Parameter type: IX_MBUF *. Return type: unsigned char.
  Description: Returns the flags field of the buffer.
  Purpose: Buffer flags.
  Used by access-layer? Yes, by some components.

Reserved
  Purpose: Reserved field, used to preserve 32-bit word alignment.
  Used by access-layer? No.

IX_OSAL_MBUF_NET_POOL
  Parameter type: IX_MBUF *. Return type: unsigned int.
  Description: Returns a 32-bit pointer to the parent pool of the buffer.
  Purpose: 32-bit pointer to the parent pool of the buffer.
  Used by access-layer? Yes, by some components.

IX_OSAL_MBUF_PKT_LEN
  Parameter type: IX_MBUF *. Return type: unsigned int.
  Description: Returns the length of the packet (typically stored in the first buffer of the packet only).
  Purpose: Total length (octets) of the data sections of all buffers in a chain of buffers (packet). Typically set only in the first buffer in the chain (packet).
  Used by access-layer? Yes, where buffer chaining is supported.

Reserved
  Purpose: Used by VxWorks*.
  Used by access-layer? No.
3.6 Mapping to OS Native Buffer Types
OSAL provides buffer-translation macros for users to translate OS-specific buffer formats to the OSAL IXP buffer format and vice versa. The mapping of OS buffer fields to the IXP400 software buffer format is usually done in the OS-specific driver component. However, for ease of use, the OSAL component provides generic macros for the VxWorks and Linux operating systems that perform the translation. Depending upon the build, the OSAL component will translate the macros to its OS-specific implementation. The general syntax for using these macros is as follows:
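(A sketch; IX_OSAL_CONVERT_IXPBUF_TO_OS_BUF appears in Section 3.6.2 below, while the name of the reverse-direction macro is mirrored from it as an assumption and should be verified against the OSAL headers.)

    /* Attach an OS-native buffer (a Linux skbuff or VxWorks M_BLK)
     * to an IXP_BUF header -- assumed counterpart macro name. */
    IX_OSAL_CONVERT_OS_BUF_TO_IXPBUF(osBufPtr, ixpBufPtr);

    /* Recover the OS-native buffer pointer from a returned IXP_BUF. */
    osBufPtr = IX_OSAL_CONVERT_IXPBUF_TO_OS_BUF(ixpBufPtr);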
These macros are intended to replace Linux skbuff and VxWorks mbuf conversions. Users can also define their own conversion utilities in their package to translate their buffers to IXP buffers (IX_OSAL_MBUF).
3.6.1 VxWorks* M_BLK Buffer
The first structure, IX_MBUF, of the IXP_BUF buffer format is compatible with the VxWorks M_BLK structure. It is also intended to provide backward compatibility with previous Intel® IXP400 Software releases. For this reason, when compiled for VxWorks, the IX_MBUF buffer format is directly compatible as an M_BLK buffer. The Intel® IXP400 Software does not make use of all the fields defined by the M_BLK buffer. The macros listed in Table 3 are used by the IXP400 software to access the correct fields within the M_BLK structure.
The M_BLK structure is defined in the global VxWorks header file “netBufLib.h”.
Note that the M_BLK structure contains many fields that are not used by the IXP400 software.
These fields are simply ignored and are not modified by the IXP400 software.
M_BLK buffers support two levels of buffer chaining:
• buffer chaining — Each buffer can be chained together to form a packet. This is achieved
using the IX_MBUF_NEXT_BUFFER_IN_PKT_PTR equivalent field in the M_BLK. This
is supported and required by the IXP400 software.
• packet chaining — Each packet can consist of a chain of one or more buffers. Packets can also
be chained together (to form a chain of chains). This is not used by the IXP400 software. The IX_MBUF_NEXT_PKT_IN_CHAIN_PTR equivalent field of the M_BLK buffer structure is
used for this purpose. Most IXP400 software components will ignore this field.
Note: The VxWorks netMbuf pool library functions will not be supported to allocate and free the
IXP_BUF buffers.
Table 3 shows the field mapping between the IX_MBUF and the M_BLK buffer structures.

3.6.2 Linux* skbuff Buffer

The buffer format native to the Linux OS is the “skbuff” buffer structure, which is significantly different from the IX_MBUF buffer format used by the IXP400 software.
The Linux skbuff structure is attached to the os_buf_ptr field during transmit or receive and is
detached during TxDone. The user must allocate an IXP_BUF header, make a call to a translational
function and pass the IXP_BUF buffer to the IXP400 software release. The translation functions
enter all the required fields from the OS buffers to respective fields in the first structure, that is, the
IX_MBUF structure within the IXP_BUF structure. The translation of fields from the IX_MBUF
structure into the NPE shared structure is accomplished by the OSAL component on Transmit and
Receive Replenish. On TxDone the user may recycle the IXP_BUF back to the IXP_BUF_POOL
or to an internal data structure.
The OSAL layer provides buffer translation macros for users to translate OS-specific buffer
formats to IXP_BUF buffer format and vice versa.
It works on the following principles:
• Each IXP_BUF is mapped to an skbuff (1:1 mapping)
• The os_buf_ptr field of the ix_ctrl structure is used to store a pointer to the corresponding
skbuff.
• The ix_data pointer field of the IX_MBUF structure within the IXP_BUF structure will be set
to point to the data field of the corresponding skbuff through use of the
IX_OSAL_MBUF_MDATA macro.
• The ix_len and ix_pkt_len fields of the IX_MBUF structure within the IXP_BUF structure
will be set to the length of the skbuff data section (the len field in the skbuff structure) through
use of the IX_OSAL_MBUF_PKT_LEN and IX_OSAL_MBUF_MLEN macros.
The prototype for this function is shown in Table 4.

The following fields of IX_MBUF within the IXP_BUF structure will get updated when converting an OS buffer to an IXP_BUF:
— ix_len
— ix_pktlen
— ix_data
— ix_ctrl.os_buf_ptr

• IX_OSAL_CONVERT_IXPBUF_TO_OS_BUF(ixpBufPtr)
The following fields will get updated in the skbuff:
— (skb)osBufPtr = ix_ctrl.os_buf_ptr
— skb->data = IX_OSAL_MBUF_MDATA(ixMbufPtr)
— skb->len = IX_OSAL_MBUF_MLEN(ixMbufPtr)
— skb->len = IX_OSAL_MBUF_PKT_LEN(ixMbufPtr)
The suggested usage model of this function is:
• Allocate a pool of IXP_BUF buffer headers. Do not allocate data sections for these buffers.
• When passing a buffer from higher-level software (for example, OS network stack) to the
IXP400 software, attach the skbuff to an IXP_BUF using the translation function.
• When receiving an IXP_BUF passed from the IXP400 software to higher-level software, use
the translation function to retrieve a pointer to the skbuff that was attached to the IXP_BUF,
and use that skbuff with the OS network stack to process the data.
The Intel® IXP400 Software Linux Ethernet device driver (“ixp425_eth.c”), which is included in the IXP400 software distribution in the form of a patch, contains an example of this suggested usage model.
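A condensed sketch of the Tx side of that model (the pool helper names are hypothetical; the conversion macro is the one documented above):

    /* Tx: wrap an skbuff arriving from the network stack. */
    IX_OSAL_MBUF *ixpBuf = myPoolGetHeader();      /* hypothetical helper */
    /* ... attach skb to ixpBuf with the OS-to-IXP_BUF translation,
     * then submit ixpBuf to the access-layer for transmission ... */

    /* TxDone: detach the skbuff and recycle the IXP_BUF header. */
    struct sk_buff *skb = IX_OSAL_CONVERT_IXPBUF_TO_OS_BUF(ixpBuf);
    myPoolReturnHeader(ixpBuf);                    /* hypothetical helper */
    /* skb can now be freed or reused by the network stack. */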
3.7 Caching Strategy
The general caching strategy in the IXP400 software architecture is that the software (including Intel XScale core-based code and NPE microcode) only concerns itself with the parts of a buffer which it modifies. For all other parts of the buffer, the user (higher-level software) is entirely responsible.
IXP_BUF buffers typically contain a header section and a data section. The header section contains
fields that can be used and modified by the IXP400 software and the NPEs. Examples of such
fields are:
• pointer to the data section of the IXP_BUF
• length of the data section of the mbuf
• pointer to the next mbuf in a chain of mbufs
• buffer type field
• buffer flags field
As a general rule, IXP400 software concerns itself only with IXP_BUF headers, and assumes that the user (that is, higher-level software) will handle the data section of the buffer.
The use of cached memory for IXP_BUF buffers is strongly encouraged, as it will result in a
performance gain as the buffer data is accessed many times up through the higher layers of the
operating system’s network stack. However, use of cached memory has some implications that
need to be considered when used for buffers passed through the IXP400 software Access-Layer.
The code that executes on Intel XScale core accesses the buffer memory via the cache in the Intel
XScale core MMU. However, the NPEs bypass the cache and access this external SDRAM
memory directly. This has different implications for buffers transmitted from Intel XScale core to
NPE (Tx path), and for buffers received from NPE to Intel XScale core (Rx path).
3.7.1 Tx Path
If a buffer in cached memory has been altered by Intel XScale core code, the change will exist in the cached copy of the IXP_BUF, but may not be written to memory yet. In order to ensure that the memory is up-to-date, the portion of cache containing the altered data must be flushed.
The cache flushing strategy uses the following general guidelines:
• The “user” is responsible for flushing the data section of the IXP_BUF. Only those portions of the data section which have been altered by the Intel XScale core code need to be flushed. This must be done before submitting an IXP_BUF to the IXP400 software for transmission via the component APIs (for example, ixEthAccPortTxFrameSubmit()).
• The IXP400 software is responsible for writing and flushing the ix_ne shared section of the
buffer header. This must be done before submitting an IXP_BUF to the NPE. Communication
to the NPEs is generally performed by access-layer components by sending IXP_BUF headers
through the IxQMgr queues.
Since flushing portions of the cache is an expensive operation in terms of CPU cycles, it is not
advisable to simply flush both the header and data sections of each IXP_BUF. To minimize the
performance impact of cache-flushing, the IXP400 software only flushes that which it modifies
(the IXP_BUF header) and leaves the flushing of the data section as the responsibility of the user.
The user can minimize the performance impact by flushing only what it needs to.
Programmer’s GuideIXP400 Software Version 2.0April 2005
Document Number: 252539, Revision: 00749
Intel® IXP400 Software
Buffer Management
Tx Cache Flushing Example
In the case of an Ethernet bridging system, only the user can determine that it is not necessary to
flush any part of the packet payload. In a routing environment, the stack can determine that only
the beginning of the mbuf may need to be flushed (for example, if the TTL field of the IP header is
changed). Additionally, with the VxWorks OS, mbufs can be from cached memory or uncached
memory. Only the user knows which buffers need to be flushed or invalidated and which buffers do
not.
When the NPE has transmitted the data in a buffer, it will return the buffer back to the Intel XScale
core. In most cases, the cache copy is still valid because the NPE will not modify the contents of
the buffer on transmission. Therefore, as a general rule, the IXP400 software does not invalidate the cached copies of IXP_BUFs used for transmission after they are returned by the NPE.
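As a concrete sketch of the routing case above, using the cache-flush macro named in Section 3.7.3 and the IxEthAcc transmit call mentioned earlier (IP_HEADER_SIZE and the priority argument are illustrative; verify exact signatures against the API reference):

    /* Only the modified bytes (here, an updated IP header at the
     * start of the payload) need to be flushed before transmission. */
    IX_ACC_DATA_CACHE_FLUSH(IX_OSAL_MBUF_MDATA(buf), IP_HEADER_SIZE);

    /* Hand the buffer to the Ethernet access-layer for transmission. */
    ixEthAccPortTxFrameSubmit(portId, buf, priority);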
3.7.2 Rx Path
If a buffer has been altered by an NPE, the change will exist in memory but the copy of the buffer in the Intel XScale core cache may not be up-to-date. We need to ensure that the cached copy is up-to-date by invalidating the portion of cache that contains the copy of the altered buffer data.
The strategy for dealing with data received by the NPEs uses the following general guidelines:
• The “user” is responsible for invalidating the data section of the IXP_BUF. Again, only the
user knows which portions of the data section it needs to access. In some instances, the user
may be required to submit free IXP_BUFs that are to be used to hold received data (for
example, ixEthAccPortRxFreeReplenish()). It is strongly recommended that the cache
location holding the data portion of the free IXP_BUFs be invalidated before submitting them
via the API.
• The IXP400 software is responsible for writing and flushing the ix_ne shared section of the
buffer header. The IXP400 software may modify the header of the IXP_BUF before passing it
to the NPE, hence the need to flush and then invalidate the header section of the IXP_BUF.
This should be done before submitting an IXP_BUF to the NPE for reception (via IxQMgr
queues).
Note: In some cases, the Access-Layer will flush the header section of the IXP_BUF before submitting
the IXP_BUF to the NPE, and will invalidate the header section after receiving it back from the
NPE with data. This approach is also acceptable; however, the approach listed above is considered
more efficient and more robust.
As with the flushing operations described in the previous section, invalidating portions of the cache is an expensive operation in terms of CPU cycles. To minimize the performance impact of cache invalidation, the IXP400 software only invalidates that which it modifies (the IXP_BUF header) and leaves the invalidating of the data section as the responsibility of the user. The user can minimize the performance impact by invalidating only what is necessary. When recycling IXP_BUFs, only the user knows the previous use of each IXP_BUF and which parts of the payload may need to be invalidated.
3.7.3 Caching Strategy Summary
Before the NPE reads the memory, ensure that the memory is up-to-date by flushing cached copies of any parts of the buffer memory modified by the Intel XScale core.
After the NPE modifies the memory, ensure that the Intel XScale core MMU cache is up-to-date by
invalidating cached copies of any parts of the buffer memory that the Intel XScale core will need to
read. It is more robust to invalidate before the NPE gets a chance to write to the SDRAM.
OS-independent macros are provided for both flushing (IX_ACC_DATA_CACHE_FLUSH) and invalidating (IX_ACC_DATA_CACHE_INVALIDATE). For more information, refer to the header file ixp_osal/include/IxOsal.h.
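As a minimal sketch of this pattern — the macro names are those given above, while the OSAL buffer accessors IX_OSAL_MBUF_MDATA() and IX_OSAL_MBUF_MLEN() are assumptions for illustration (check IxOsal.h for the exact names in your release):

    #include "IxOsal.h"  /* assumed location of the cache macros */

    /* Tx path: flush only the payload bytes the Intel XScale core has
       modified, before submitting the IXP_BUF for transmission. */
    void myTxPrepare(IX_OSAL_MBUF *buf, UINT32 bytesModified)
    {
        IX_ACC_DATA_CACHE_FLUSH(IX_OSAL_MBUF_MDATA(buf), bytesModified);
    }

    /* Rx path: invalidate the cache lines covering the payload area
       before submitting a free IXP_BUF that an NPE will fill. */
    void myRxFreePrepare(IX_OSAL_MBUF *buf, UINT32 payloadSize)
    {
        IX_ACC_DATA_CACHE_INVALIDATE(IX_OSAL_MBUF_MDATA(buf), payloadSize);
    }

Note that cache maintenance operates on whole cache lines, so flush and invalidate ranges are effectively rounded out to line boundaries; keeping buffer headers and payload in separate cache lines can avoid accidental interactions.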
4 Access-Layer Components: ATM Driver Access (IxAtmdAcc) API
This chapter describes the Intel® IXP400 Software v2.0’s “ATM Driver-Access” access-layer
component.
4.1 What’s New
There are no changes or enhancements to this component in software release 2.0.
4.2 Overview
The ATM driver-access component, IxAtmdAcc, provides a unified interface to AAL transmit and receive hardware. Software release 2.0 supports AAL 5, AAL 0,
and OAM. This component provides an abstraction to the IXP4XX product line and IXC1100
control plane processors’ ATM cell-processing hardware. It is designed to support ATM transmit
and receive services for multiple ports and VCs.
This chapter describes the configuration, control, and transmit/receive flow of ATM PDU data
through the IxAtmdAcc component.
The general principle of improving performance by avoiding unnecessary copying of data is
adhered to in this component. The BSD-based buffering scheme is used.
Since AAL 0 is conceptually a raw cell service, the concept of an AAL-0 PDU can be somewhat
misleading. In the context of software release 2.0, an AAL-0 PDU is defined as containing an
integral number of 48-byte (cell payload only) or 52-byte (cell payload and cell header without
HEC field) cells.
4.3 IxAtmdAcc Component Features
The services offered by the ixAtmdAcc component are:
• Supports the configuration and activation of up to 12 ports on the UTOPIA Level-2 interface.
• Supports AAL-5 CPCS PDUs transmission service, which accepts fully formed PDUs for
transmission on a particular port and VC. AAL-5 CRC calculation is performed by hardware.
(PDUs may consist of single or chained IXP_BUFs.)
• Supports AAL-0-48 PDU transmission service, which accepts PDUs containing an integral
number of 48-byte cells for transmission on a particular port and VC. (PDUs may consist of
single or chained IXP_BUFs.)
• Supports AAL-0-52 PDU transmission service, which accepts PDUs containing an integral
number of 52-byte cells for transmission on a particular port and VC. (PDUs may consist of
single or chained IXP_BUFs.)
• Supports OAM PDU transmission service, which accepts PDUs containing an integral number
of 52-byte OAM cells for transmission on a particular port independent of the VC. (PDUs may
consist of single or chained IXP_BUFs.)
• Supports ATM traffic shaping
— Scheduler registration: Allows registration of ATM traffic-shaping entities on a per-ATM-
port basis. A registered scheduler must be capable of accepting per-VC-cell demand
notifications from AtmdAcc.
— Transmission control: Allows ATM traffic-shaping entities to determine when cells are
sent and the number of cells sent from each VC at a time.
• Supports setting or viewing the CLP for AAL-5 CPCS SARed PDUs.
• Supports setting the transmit CLP in all cells of an AAL-0-48 PDU.
• Supports the client setting the transmit GFC, PTI, or CLP in any cell of an AAL-0-52/OAM
PDU.
IxAtmdAcc does not process cell headers for AAL-0-52/OAM, thus GFC, PTI, and CLP must
be set in the cell headers in the PDU by the client. (The HEC is not included.)
• Supports delivery of fully formed AAL-5 CPCS PDUs received on a particular port and VC
with error detection for CRC errors, priority queuing, and corrupt-packet delivery.
(PDUs may consist of single or chained IXP_BUFs.)
• Supports delivery of AAL-0 PDUs containing 48-byte cells (with good HEC) — received on a
particular port and VC.
• Supports delivery of AAL-0 PDUs containing 52-byte cells — received on a particular port and
VC.
• Supports delivery of an OAM PDU containing a single, 52-byte OAM cell (with good HEC,
and good CRC-10) — received on any port and any VC.
• Allows the client to determine the port on which the PDU was received, for all client service
types.
• Supports viewing the receive CLP of an AAL-0-48 PDU (the logical OR of the CLP values of all cells contained in the PDU).
• Allows the client to view the GFC, PTI, or CLP of any cell in a received AAL-0-52/OAM
PDU.
The component does not process cell headers for AAL-0-52/OAM. The client may read the CLP from the cell headers in the PDU.
• Supports up to 32 VCC channels for transmit services and up to 32 channels for AAL-0/
AAL-5 receive services. One client per channel is supported.
• Supports one dedicated OAM transmit channel (OAM-VC) per port. This channel supports
transmission of OAM cells on any VC.
• Supports one dedicated OAM receive channel (OAM-VC) for all ports. This channel supports
reception of OAM cells from any port on any VC.
• Provides an interface to retrieve statistics unavailable at the client layer.
These statistics include the number of cells received, the number of cells received with an
incorrect cell size, the number of cells containing parity errors, the number of cells containing
HEC errors, and the number of idle cells received.
• Provides an interface to use either a threshold mechanism — which allows the client actions to
be driven by events — or a polling mechanism — through which the client decides whether and when to invoke the functions of the interface.
• Supports fast-path-exception packet processing.
• Supports use in a completely polled environment, a completely interrupt-driven environment, or a mixture of both.
This is done by providing control over the Rx and TxDone dispatch functions, and the transmit and replenish functions. The user may trigger them from interrupts, or poll them, or both, assuming an exclusion mechanism is provided as needed.
The ixAtmdAcc component communicates with the NPEs’ ATM-over-UTOPIA component
through entries placed on Queue Manager queues, IXP_BUFs, and associated descriptors —
located in external memory and through the message bus interface.
4.4 Configuration Services
IxAtmdAcc supports three configuration services:
• UTOPIA port configuration
• ATM traffic shaping
• VC configuration
4.4.1 UTOPIA Port-Configuration Service
The UTOPIA interface is the IXP4XX product line and IXC1100 control plane processors’
interface by which ATM cells are sent to and received from external PHYs. In order to configure
the UTOPIA interface, IxAtmdAcc provides an interface that allows a configuration structure to be
sent to and/or retrieved from the UTOPIA interface.
IxAtmdAcc provides the interface to configure the hardware and enable/disable traffic on a per-port basis.
4.4.2 ATM Traffic-Shaping Services
An ATM scheduling entity provides a mechanism where VC traffic on a port is shaped in
accordance with its traffic parameters. IxAtmdAcc does not itself provide such a traffic-shaping
service, but can be used in conjunction with external scheduling services.
The scheduler registration interface allows registration of ATM traffic-shaping entities on a per-port basis. These entities, or proxies thereof, are expected to support the following callbacks on their API:
• Function to exchange VC identifiers.
A VC identifier identifies a port, VPI, and VCI and is usually specific to a layer interface. IxAtmdAcc has an identifier known as a connId, and the scheduling entity is expected to have its own identifier known as a scheduler VcId. This callback also serves to allow the scheduling entity to acknowledge the presence of the VC.
• Function to submit a cell count to the scheduling entity on a per-VC basis.
This function is used every time the user submits a new PDU for transmission.
• Function to clear the cell count related to a particular VC.
This function is used during a disconnect to stop the scheduling services for a VC.
No locking or mutual exclusion is provided by the IxAtmdAcc component over these registered
functions.
The transmission-control API expects to be called with an updated transmit schedule table on a
regular basis for each port. This table contains the overall number of cells, the number of idle cells
to transmit, and — for each VC — the number of cells to transmit to the designated ATM port.
The ATM Scheduler can be different for each logical port and the choice of the ATM scheduler is a
client decision. ATM scheduler registrations should be done before enabling traffic on the
corresponding port. Once registered, a scheduler cannot be unregistered. If no ATM scheduler is registered for a port, transmission on that port is done immediately.
4.4.3 VC-Configuration Services
IxAtmdAcc provides an interface for registering VCs in both Tx and Rx directions. The ATM VC
is identified by a logical PHY port, an ATM VPI, and an ATM VCI. The total number of ATM
AAL-5 or AAL-0 VCs supported — on all ports and in both directions — is 32. IxAtmdAcc
supports up to 32 Rx channels, and up to 32 Tx channels on all ports. For AAL-5 and AAL-0, the
number of logical clients supported per-VC is one.
In addition to the 32 VCs mentioned above, one dedicated OAM transmit VC per port and one
dedicated OAM receive VC are supported. These dedicated OAM VCs behave like an “OAM
interface” for the OAM client, and are used to carry OAM cells for any VPI/VCI (even if that VPI/
VCI is one of the 32 connected for AAL services).
In the Tx direction, the client has to register the ATM traffic characteristics to the ATM scheduler
before invoking the IxAtmdAcc “connect” function. The TxVcConnect function performs the following actions:
• Checks if the PHY port is enabled.
• Checks whether the ATM VC is already in use in another Tx connection.
• Checks if the service type is OAM and, if so, checks that the VC is the dedicated OAM-VC for
that port.
• Checks the registration of this VC to the registered ATM scheduler.
• Binds the VC with the scheduler associated with this port.
• Registers the callback by which transmitted buffers get recycled.
• Registers the notification callback by which the hardware will ask for more data to transmit.
• Allocates a connection ID and returns it to the client.
In the Rx direction, the RxVcConnect function performs the following actions:
• Check if the PHY port is enabled.
• Check whether the ATM VC is already in use in another Rx connection.
• Check if the service type is OAM and, if so, check that the VC is the dedicated OAM-VC.
• Register the callback by which received buffers get pushed into the client’s protocol stack.
• Register the notification callback by which the hardware will ask for more available buffers.
• Allocate a connection ID and return it to the client.
When connecting, a connection ID is allocated and must be used to identify the VC in all calls to the API. The connection IDs for Receive and Transmit on the same ATM VC are different.
The client has the choice of using a threshold mechanism provided by IxAtmdAcc or polling the
different resources. When using the threshold mechanism, the client needs to register a callback
function and supply a threshold level. As a general rule, when configuring threshold values for
different services, the lower the threshold value is, the higher the interrupt rate will be.
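A condensed sketch of the connect sequence, assuming AAL-5 service: the function names appear in this chapter, but the argument lists shown here are simplified assumptions (service type, callbacks, and user IDs are abbreviated — see the IxAtmdAcc API reference for the exact signatures):

    IxAtmConnId txConnId, rxConnId;

    /* Tx side: the VC's traffic characteristics must already be
       registered with the ATM scheduler for this port (Section 4.4.2). */
    if (ixAtmdAccTxVcConnect(port, vpi, vci, /* service type, callbacks,
        userId ... */ &txConnId) != IX_SUCCESS)
    {
        /* port disabled, VC already in use, or scheduler refused it */
    }

    /* Rx side: a distinct connection ID is returned for the same VC. */
    if (ixAtmdAccRxVcConnect(port, vpi, vci, /* service type, stream,
        callbacks ... */ &rxConnId) != IX_SUCCESS)
    {
        /* handle registration failure */
    }

The two connection IDs must be kept separately: every subsequent API call identifies the VC by the connection ID of the relevant direction.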
4.5 Transmission Services
The IxAtmdAcc transmit service supports AAL 5, AAL 0-48, AAL 0-52, and OAM, and operates in scheduled mode only.
In scheduled mode, buffers are accepted and internally queued in IxAtmdAcc until they are
scheduled for transmission by a scheduling entity. The scheduling entity determines the number of cells to be transmitted from a buffer at a time; this allows cells from different VCs to be interleaved on the wire.
AtmdAcc accepts outbound ATM payload data for a particular VC from its client in the form of
chained IXP_BUFs. For AAL 5, an IXP_BUF chain represents an AAL-5 PDU which can contain
0-65,535 payload octets. A PDU is, however, a multiple of 48 octets, when padding and the AAL-5
trailer are included. For AAL 0-48/AAL 0-52/OAM, an IXP_BUF chain represents a PDU where
the maximum length is limited to 256 chained IXP_BUFs and/or 65,535 octets.
The submission rate of buffers for transmission should be based on the traffic contract for the
particular VC and is not known to IxAtmdAcc. However, there will be a maximum number of
buffers that IxAtmdAcc can hold at a time and a maximum number of buffers that the underlying
hardware can hold — before and during transmission. This maximum is guaranteed to be sufficient to saturate the port rate with 64-byte packets.
Under the ATM Scheduler control (scheduled mode), IxAtmdAcc interprets the schedule table and
builds and sends requests to the underlying hardware. For AAL 5/AAL 0-48, these will be
segmented into 48-byte cell payloads and transmitted with ATM cell headers over the UTOPIA
bus. For AAL 0-52/OAM, these will be segmented into 52-byte cells, the HEC added, and they
will be transmitted “as is” over the UTOPIA bus.
Once the transmission is complete, IxAtmdAcc passes back the IXP_BUFs to its client (on a per-connection basis). The client can free them or return them to the pool of buffers. The preferred
option is to reuse the buffers during the next transmission. Processing of transmit-done buffers
from IxAtmdAcc is controlled by the client.
Transmit Done is a system-wide entity which provides a service to multiple ports. A system using
multiple ports — with very different transmit activity — results in latency effects for low-activity
ports. The user needs to tune the number of buffers — needed to service a low-rate port or channel
— if the overall user application involves a port configured with a VC supporting a very different
traffic rate. This tuning is at the client’s discretion and, therefore, is beyond the scope of this
document.
In the case of OAM, a PDU containing OAM cells for any port, VPI, or VCI must be submitted for
transmission on the dedicated OAM-VC for that port. This is true regardless of whether an AAL-5/
AAL-0-48/AAL-0-52 transmit service connection exists for the given VPI or VCI. The dedicated
OAM-VC will be scheduled just like any other VC.
4.5.1 Scheduled Transmission
The scheduling entity controls the VC from which cells are transmitted and when they are
transmitted. Buffers on each VC are always sent in the sequence they are submitted to IxAtmdAcc.
However, cells from different VCs can be interleaved.
Figure 14 shows VC connection and buffer transmission for a scheduled port.
Figure 14. Buffer Transmission for a Scheduled Port
1. A control client wants to use an ATM traffic shaping entity that will control the transmission of
cells on a particular port, ensuring VCs on that port conform to their traffic descriptor values.
The client, therefore, calls ixAtmdAccScheduledModeEnable() — passing the port and some
callback functions as parameters.
IxAtmdAcc has no client connections active for that port and accepts the scheduler
registration.
2. Later, a data client wants to use the IxAtmdAcc AAL-5/AAL-0-48/AAL-0-52/OAM transmit
service for a VC on the same port, and therefore calls ixAtmdAccTxVcConnect().
In the case of the OAM transmit service, the connection will be on the dedicated OAM-VC for
that port.
3. IxAtmdAcc calls the IxAtmdAccTxSchVcIdGetCallback() callback registered for the port. By
making this call, IxAtmdAcc is asking the traffic shaping entity if it is OK to allow traffic on
this VC. In making this callback, IxAtmdAcc also provides the connection ID that should be used when calling IxAtmdAcc for this VC.
4. The shaping entity acknowledges the validity of the VC, stores the IxAtmdAcc connection ID
and issues a VcId to IxAtmdAcc.
5. IxAtmdAcc accepts the connection request from the data client and returns a connection ID to
be used by the client in further IxAtmdAcc API calls for that VC.
6. Sometime later, the data client has a fully formed AAL-5/AAL-0-48/AAL-0-52/OAM PDU in an IXP_BUF chain ready for transmission. The client calls ixAtmdAccTxPduSubmit(), passing the IXP_BUF and the number of cells contained in the chained IXP_BUF as parameters.
Note:
— In the case of AAL 5, the CRC in the AAL-5 trailer does not have to be pre-calculated.
— In the case of OAM, the CRC 10 does not have to be pre-calculated.
7. IxAtmdAcc ensures the connection is valid and submits the new demand in cells to the shaping entity by calling the registered demand-update callback. The shaping entity accepts the demand and IxAtmdAcc internally enqueues the IXP_BUFs for later transmission.
8. The traffic-shaping entity decides at a certain time — by its own timer mechanism or by using the “Tx Low Notification” service provided by the IxAtmdAcc component for this port — that cells should be transmitted on the port, based on the demand it has previously obtained from AtmdAcc. It creates a transmit schedule table and passes it to IxAtmdAcc by calling ixAtmdAccTxProcess().
9. IxAtmdAcc takes the schedule, interprets it, and sends scheduled cells to the hardware. If the hardware queue is full (possible only if the “Tx Low Notification” service is not used), the ixAtmdAccTxProcess() call returns an overloaded status so that the traffic-shaping entity can retry later.
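The data-path half of this sequence (steps 6 through 9) can be sketched as the following fragment; the variables are assumed to have been set up during connection, and the schedule-table type is outlined in Section 4.5.1.1:

    /* Step 6: submit a fully formed PDU (single or chained IXP_BUF). */
    if (ixAtmdAccTxPduSubmit(txConnId, pduMbuf, numCells) != IX_SUCCESS)
    {
        /* invalid connection, or the component is overloaded */
    }

    /* Steps 8-9: later, the shaping entity passes a schedule table for
       the port; IxAtmdAcc segments the queued PDUs into cells. */
    if (ixAtmdAccTxProcess(port, &scheduleTable) != IX_SUCCESS)
    {
        /* overloaded status: the hardware queue is full, so the
           shaping entity should retry the same table later */
    }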
4.5.1.1 Schedule Table Description
IxAtmdAcc uses a schedule table when transmitting cell information to the hardware. This
schedule table drives the traffic on one port.
The schedule table is composed of an array of table entries, each of which specifies a
ConnectionID and a number of cells (up to 16) to transmit from that VC. Idle cells are inserted in
the table with the ConnectionID identifier set to IX_ATMDACC_IDLE_CELLS.
Figure 15 shows how this table is translated into an ordered sequence of cells transmitted to one
ATM port.
Figure 15. IxAtmdAccScheduleTable Structure and Order of ATM Cells
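The schedule table can be pictured with the sketch below; the type and field names follow the labels in Figure 15 but should be checked against the IxAtmdAcc headers. The example entries reproduce the cell order shown in the figure: 12, 12, 6, idle, idle, 12, 12.

    /* Sketch of the schedule-table layout (names per Figure 15). */
    typedef struct
    {
        IxAtmConnId connId;          /* VC, or IX_ATMDACC_IDLE_CELLS  */
        unsigned int numberOfCells;  /* cells from this VC (up to 16) */
    } IxAtmScheduleTableEntry;

    typedef struct
    {
        unsigned int tableSize;      /* number of entries             */
        unsigned int totalCellSlots; /* total cells the table covers  */
        IxAtmScheduleTableEntry *table;
    } IxAtmScheduleTable;

    IxAtmScheduleTableEntry entries[] =
    {
        { 12, 2 },                    /* two cells from connId 12     */
        {  6, 1 },                    /* one cell from connId 6       */
        { IX_ATMDACC_IDLE_CELLS, 2 }, /* two idle cells               */
        { 12, 2 },                    /* two more cells from connId 12 */
    };
    IxAtmScheduleTable scheduleTable = { 4, 7, entries };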
4.5.2 Transmission Triggers (Tx-Low Notification)
In Scheduled Mode, the rate and exact point at which the ixAtmdAccTxProcess() interface should
be called by the shaping entity is at the client’s discretion and hence beyond the scope of this
document.
However, the IxAtmdAcc transmit service does provide a Tx-Low Notification service, which can be configured to execute a client-supplied notification callback when the number of cells not yet transmitted by the hardware reaches a certain low level. The service supports only a single client per port, and the maximum default cell threshold is eight cells.
4.5.2.1 Transmit-Done Processing
When buffers have been sent on a port, they are placed in a single, transmit-complete stream,
which is common to all ports. IxAtmdAcc does not autonomously process this stream; instead, the client decides when and how many buffers will be processed.
Processing primarily involves handing back ownership of buffers to clients. The rate at which this
is done must be sufficient to ensure that client-buffer starvation does not occur. The exact rate at which this must be done is implementation-dependent and not within the scope of this document. Because the Tx-Done resource is a system-wide resource, it is important to note that failing to poll it will cause transmission to be suspended on all ports.
Transmit Done — Based on a Threshold Level
IxAtmdAcc does provide a notification service whereby a client can choose to be notified when the
number of outstanding buffers in the transmit done stream has reached a configurable threshold, as
shown in Figure 16.
Figure 16. Tx Done Recycling — Using a Threshold Level
1. The control client wants to use the threshold services to process the transmitted buffers. The ixAtmdAccTxDoneCallbackRegister() function is called to set a buffer threshold level and register a callback. IxAtmdAcc provides the ixAtmdAccTxDoneDispatch() function, which can be used directly as the callback. Alternatively, ixAtmdAccTxDoneCallbackRegister() allows the client to register its own callback, from which the ixAtmdAccTxDoneDispatch() function must be called; an algorithm can then be used to decide the number of IXP_BUFs to service, depending on system load or any other constraint.
2. Sometime earlier, the data client sent data to transmit. Cells are now sent over the UTOPIA
interface and the IXP_BUFs are now available.
3. At a certain point in time, the threshold level of available buffers is reached and the control
client’s callback is invoked by IxAtmdAcc. In response to this callback, the control client calls
ixAtmdAccTxDoneDispatch(). This function gets the transmitted buffer and retrieves the
connId associated with this buffer.
4. Based on connId, ixAtmdAccTxDoneDispatch() identifies the data client to whom this buffer
belongs. The corresponding data client’s TxDoneCallback function, as registered during a
TxVcConnect, is invoked with the IXP_BUF.
This TxDoneCallback function is likely to free or recycle the IXP_BUF.
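A sketch of this registration, assuming a callback signature that simply receives the number of buffers available (the exact prototypes are defined by the IxAtmdAcc API, so treat the parameter lists below as illustrative):

    /* Control client's TxDone callback: delegate to the dispatcher. */
    void myTxDoneCallback(unsigned int numberOfMbufs)
    {
        unsigned int processed;

        /* Hands each recycled IXP_BUF back to the owning data client's
           TxDoneCallback, identified through the buffer's connId. */
        ixAtmdAccTxDoneDispatch(numberOfMbufs, &processed);
    }

    /* Set an example threshold level and register the callback. */
    ixAtmdAccTxDoneCallbackRegister(txDoneThreshold, myTxDoneCallback);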
Transmit Done — Based on Polling Mechanism
A polling mechanism can be used instead of the threshold service to trigger the recycling of the
transmitted buffers, as shown in Figure 17.
Figure 17. Tx Done Recycling — Using a Polling Mechanism
1. Sometime earlier, the data client sent data to transmit. Cells are now sent over the UTOPIA
interface and the IXP_BUFs are now available.
2,3. A control client does not want to use the threshold services to process the transmitted buffers. Therefore, the ixAtmdAccTxDoneLevelQuery() function can optionally be called to get the current number of IXP_BUFs already transmitted.
4. The control client requests IxAtmdAcc to do more processing and provides a number of
buffers to process as a parameter of the ixAtmdAccTxDoneDispatch() function. This function
gets the transmitted buffer and retrieves the connId associated with this buffer.
5. Based on connId, ixAtmdAccTxDoneDispatch() identifies the data client to which this buffer
belongs. The corresponding data client’s TxDoneCallback function — as registered during a
TxVcConnect — is invoked with the IXP_BUF.
This TxDoneCallback function is likely to free or recycle the chained IXP_BUFs.
6. The control client gets back the number of buffers processed. This number may differ from the number requested when multiple instances of the ixAtmdAccTxDoneDispatch() function are used at the same time.
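In poll mode the same two calls are driven by the client, for example from a timer; again, the parameter lists are illustrative assumptions:

    unsigned int pending, processed;

    /* Optional: ask how many transmitted buffers are waiting. */
    ixAtmdAccTxDoneLevelQuery(&pending);

    if (pending > 0)
    {
        ixAtmdAccTxDoneDispatch(pending, &processed);
        /* 'processed' may differ from 'pending' if another context
           is dispatching concurrently. */
    }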
4.5.2.2 Transmit Disconnect
Before a client disconnects from a VC, all resources must have been recycled, as shown in
Figure 18. This is done by calling the ixAtmdAccTxVcDisconnect() function until all PDUs are
transmitted by the hardware and all buffers are sent back to the client.
Figure 18. Tx Disconnect
1. The data client sends the last PDUs and the control client wants to disconnect the VC.
IxAtmdAccTxVcDisconnect() invalidates further attempts to transmit more PDUs. Any call to ixAtmdAccTxPduSubmit() will fail for this VC.
2. If there are resources still in use, the IxAtmdAccTxVcDisconnect() function returns
IX_ATMDACC_RESOURCES_STILL_ALLOCATED. This means that the hardware has not
finished transmitting and there are still IXP_BUFs pending transmission, or IXP_BUFs in the
TxDone stream.
3,4. Transmission of remaining traffic continues — no new traffic is accepted through ixAtmdAccTxPduSubmit().
5. The client waits a certain delay — depending on the TX rate for this VC — and asks again to
disconnect the VC.
6. When there are no resources still in use, the IxAtmdAccTxVcDisconnect() function returns IX_SUCCESS. This means that the hardware has finished transmitting all cells and there are no IXP_BUFs either pending transmission or in the TxDone stream.
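The retry loop implied by these steps might look as follows; the 10-ms delay is only an example, since the appropriate delay depends on the Tx rate of the VC:

    /* Disconnect: retry until the hardware has drained all PDUs and
       every IXP_BUF has been returned through the TxDone stream. */
    while (ixAtmdAccTxVcDisconnect(txConnId)
           == IX_ATMDACC_RESOURCES_STILL_ALLOCATED)
    {
        ixOsalSleep(10);  /* wait for in-flight cells to drain */
    }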
4.5.3 Receive Services
IxAtmdAcc processes inbound AAL payload data for individual VCs, received in IXP_BUFs. In
the case of AAL 5, IXP_BUFs may be chained. In the case of AAL 0-48/52/OAM, chaining of
IXP_BUFs is not supported. In the case of OAM, an IXP_BUF contains only a single cell.
In the case of AAL 0, Rx cells are accumulated into an IXP_BUF under the supervision of an Rx timer. The IXP_BUF is passed to the client either when the IXP_BUF is filled or when the timer expires. The Rx timer is implemented by the NPE-A.
In order to receive a PDU, the client layer must allocate IXP_BUFs and pass their ownership to the
IxAtmdAcc component. This process is known as replenishment. Such buffers are filled out with
cell payload. Complete PDUs are passed to the client. In the case of AAL 5, an indication about the
validity of the PDU — and the validity of the AAL-5 CRC — is passed to the client.
In the case of AAL 0, PDU completion occurs either when an IXP_BUF is filled or when a timer expires. The client can distinguish the two cases by the fact that the IXP_BUF will not be completely filled when completion was due to a timer expiring.
Refer to the API for details about the AAL-0 timer.
IxAtmdAcc supports prioritization of inbound traffic queuing by providing two separate receive
streams. The algorithms and tuning required to service these streams can be different, allowing management of latency and other priority constraints on receive VCs. As an example,
one stream can be used for critical-time traffic (such as voice) and the other stream for data traffic.
The streams can be serviced in two ways:
• Setting a threshold level (when there is data available)
• Polling mechanism
Both mechanisms pass buffers to the client through a callback. Once the client is finished
processing the buffer, it can either ask to replenish the channel with available buffers or free the
buffer back directly to the operating-system pool.
The IxAtmdAcc receive service provides an Rx-free-low notification service that can be configured to execute a client-supplied notification callback when the number of available buffers reaches a
certain low level. The service is supported on a per-VC basis and the maximum threshold level is
16 unchained IXP_BUFs.
4.5.3.2 Receive Processing
When buffers have been received on a port, they are placed in one of two Rx streams common to
the VCs sharing this resource as decided by the client when establishing a connection. IxAtmdAcc
does not autonomously process this stream, but instead the client decides when and how many
buffers will be processed.
Processing primarily involves handing back ownership of buffers to clients. The rate at which this
is done must be sufficient to ensure that client requirements in terms of latency are met. The details
of the exact rate at which this must be done is implementation-dependent and not within the scope
of this document.
Receive — Based on a Threshold Level
IxAtmdAcc provides a notification service where a client can choose to be notified when incoming
PDUs are ready in a receive stream as shown in Figure 19.
1. A control client wants to use the threshold services to process the received PDUs. The ixAtmdAccRxThresholdSet() function is called to register a callback. IxAtmdAcc provides the ixAtmdAccRxDispatch() function, which can be used directly as the callback. Alternatively, ixAtmdAccRxThresholdSet() allows the client to register its own callback, from which the ixAtmdAccRxDispatch() function must be called; an algorithm can then be used to decide the number of IXP_BUFs to service, depending on system load or any user constraint.
2. Cells are now received over the UTOPIA interface and there is a PDU available.
3. When a complete PDU is received, the callback is invoked and the function
ixAtmdAccRxDispatch() runs. This function iterates through the received buffers and retrieves the connId associated with each buffer.
4. Based on connId, ixAtmdAccRxDispatch() identifies the data client to whom this buffer
belongs. The corresponding data client’s RxCallback function — as registered during a
RxVcConnect — is invoked with the first IXP_BUF of a PDU.
This RxCallback function is likely to push the received information to the protocol stack, and
then to free or recycle the IXP_BUFs. The RxCallback will be invoked once per PDU. If there
are many PDUs related to the same VC, the RxCallback will be called many times.
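A sketch of the threshold-driven receive setup; the function names are those used above, while the stream identifier and parameter lists are assumptions:

    /* Control client's Rx callback for one of the two Rx streams. */
    void myRxCallback(unsigned int streamId, unsigned int numberOfPdus)
    {
        unsigned int processed;

        /* Walks the received buffers, looks up each connId, and invokes
           the owning data client's RxCallback once per PDU. */
        ixAtmdAccRxDispatch(streamId, numberOfPdus, &processed);
    }

    /* Register an example threshold level for the chosen stream. */
    ixAtmdAccRxThresholdSet(streamId, rxThreshold, myRxCallback);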
Receive — Based on a Polling Mechanism
A polling mechanism can also be used to collect received buffers as shown in Figure 20.
1. Cells are now received over the UTOPIA interface and a complete PDU is now available.
2,3.The control client does not want to use the threshold services. Therefore, the client can
optionally query the current number of PDUs already received in one of the receive streams,
using the ixAtmdAccRxLevelQuery() function.
4. The control client asks IxAtmdAcc to process a number of PDUs from one of the streams using the ixAtmdAccRxDispatch() function.
5. IxAtmdAcc gets the requested number of PDUs from the underlying hardware. Based on
connId, ixAtmdAccRxDispatch() identifies the data clients to which the buffers belong. The corresponding data client’s RxCallback function — as registered during ixAtmdAccRxVcConnect — is invoked with the first IXP_BUF of a PDU.
This RxCallback function is likely to push the received information to the protocol stack, and
then to free or recycle the IXP_BUFs. The RxCallback will be invoked once per PDU. If there
are many PDUs related to the same VC, the RxCallback will be called many times.
6. IxAtmdAcc returns the number of PDUs processed.
4.5.3.3 Receive Disconnect
Before a client disconnects from a VC, all resources must have been recycled as shown in
Figure 21.
Figure 21. Rx Disconnect
1,2. The control client wants to disconnect the VC. IxAtmdAccRxVcDisconnect() tells IxAtmdAcc to discard any Rx traffic and — if resources are still in use — the
IxAtmdAccRxVcDisconnect() function returns
IX_ATMDACC_RESOURCES_STILL_ALLOCATED.
3. Any remaining received traffic is discarded.
4. The client waits a certain delay — depending on the Rx drain rate for this VC — and asks again to disconnect the VC. If resources are still in use, the IxAtmdAccRxVcDisconnect() function again returns IX_ATMDACC_RESOURCES_STILL_ALLOCATED.
5. Because there are no resources still in use, the IxAtmdAccRxVcDisconnect() function returns
IX_SUCCESS. This means that there are no resources or IXP_BUFs pending for reception or
in the rxFree queue for this VC.
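The Rx disconnect loop mirrors the Tx case; remaining inbound traffic for the VC is discarded while draining, and the delay value is illustrative:

    while (ixAtmdAccRxVcDisconnect(rxConnId)
           == IX_ATMDACC_RESOURCES_STILL_ALLOCATED)
    {
        ixOsalSleep(10);  /* wait for the Rx side to drain this VC */
    }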
4.5.4 Buffer Management
The IxAtmdAcc Interface is based on IXP_BUFs. The component addressing space for physical
memory is limited to 28 bits. Therefore IXP_BUF headers should be located in the first
256 Mbytes of physical memory.
4.5.4.1 Buffer Allocation
IXP_BUFs used by IxAtmdAcc are allocated and released by the client through the appropriate
operating-system functions. During the disconnect steps, pending buffers will be released by the
IxAtmDAcc component using the callback functions provided by the client, on a per-VC basis.
4.5.4.2 Buffer Contents
For performance reasons, the data pointed to by an IXP_BUF is not accessed by the IxAtmDAcc
component.
The IXP_BUF fields required for transmission are described in Table 5. These fields will not be
changed during the Tx process.
Table 5. IXP_BUF Fields Required for Transmission

Field         Description
ix_next       Required when IXP_BUFs are chained to build a PDU. In the last IXP_BUF of a PDU, this field value has to be 0.
ix_nextpkt    Not used.
ix_data       Required. This field should point to the part of PDU data.
ix_len        Required. This field is the length of data pointed to by mh_data.
ix_type       Not used.
ix_flags      Not used.
ix_reserved   Not used.
pkt.rcvif     Not used.
pkt.len       Required in the first IXP_BUF of a chained PDU. This is the total length of the PDU.
The IXP_BUF fields of available IXP_BUFs used by the receive service are described in Table 6.
They are set by the client which wants to provide available buffers to IxAtmdAcc Rx service.
Table 6. IXP_BUF Fields of Available Buffers for Reception

Field         Description
ix_next       This field value has to be 0. Buffer chaining is not supported when providing available buffers.
ix_nextpkt    Not used.
ix_data       This field is the pointer to PDU data.
ix_len        This field is the length of data pointed to by mh_data.
ix_type       Not used.
ix_flags      Not used.
ix_reserved   Not used.
pkt.rcvif     Not used.
pkt.len       Set to 0.
The IXP_BUF fields in received buffers that are set during traffic reception are described in
Table 7.
Table 7. IXP_BUF Fields Modified During Reception

Field         Description
ix_next       Modified when IXP_BUFs are chained to build a PDU, to point to the next IXP_BUF. In the last IXP_BUF of a PDU, this field value has to be 0.
ix_nextpkt    Not used.
ix_data       This field is the pointer to PDU data.
ix_len        Modified. This field is the length of data pointed to by mh_data.
ix_type       Not used.
ix_flags      Not used.
ix_reserved   Not used.
pkt.rcvif     Not used.
pkt.len       Not used.
4.5.4.3 Buffer-Size Constraints
Any IXP_BUF size can be transmitted, but a full PDU must be a multiple of a cell size (48/
52 bytes, depending on AAL type). Similarly, the system can receive and chain IXP_BUFs that are
a multiple of a cell size.
When receiving and transmitting AAL PDUs, the overall packet length is indicated in the first
IXP_BUF header. For AAL 5, this length includes the AAL-5 PDU padding and trailer.
Buffers with an incorrect size are rejected by IxAtmDAcc functions.
4.5.4.4 Buffer-Chaining Constraints
IXP_BUFs can be chained to build PDUs up to 64 Kbytes of data plus overhead. The number of
IXP_BUFs that can be chained is limited to 256 per PDU.
To submit a PDU for transmission, the client needs to supply a chained IXP_BUF. When receiving a PDU, the client gets a chained IXP_BUF. In contrast, the interfaces used to replenish the Rx-queuing system and to support the Tx-done feature are based on unchained IXP_BUFs.
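Pulling Table 5 and these constraints together, a two-buffer AAL-5 PDU might be prepared as in the fragment below; the direct field accesses stand in for the OSAL mbuf accessor macros, and the cell count assumes the padded AAL-5 PDU length (a multiple of 48 bytes) is already in pkt.len:

    /* Chain two IXP_BUFs into one PDU (field names per Table 5). */
    first->ix_next  = second;   /* link to the next IXP_BUF          */
    second->ix_next = 0;        /* last IXP_BUF of the PDU must be 0 */

    first->ix_data  = part1;  first->ix_len  = len1;
    second->ix_data = part2;  second->ix_len = len2;

    /* Total PDU length, set in the first IXP_BUF only. */
    first->pkt.len = len1 + len2;

    /* AAL 5 is carried in 48-byte cell payloads. */
    unsigned int numCells = first->pkt.len / 48;

    ixAtmdAccTxPduSubmit(txConnId, first, numCells);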
4.5.5 Error Handling
4.5.5.1 API-Usage Errors
The AtmdAcc component detects the following misuse of the API:
• Inappropriate use of connection IDs
• Incorrect parameters
• Mismatches in the order of the function call — for example, using start() after disconnect()
• Use of resources already allocated for another VC — for example, port/VPI/VCI
Error codes are reported as the return value of a function API.
The AAL client is responsible for using its own reporting mechanism and for taking the
appropriate action to correct the problem.
4.5.5.2 Real-Time Errors
Errors may occur during real-time traffic. Table 8 shows the different possible errors and the way
to resolve them.
Table 8. Real-Time Errors

Cause: Rx-free queue underflow
  Consequences and side effects: The system is not able to store the inbound traffic, which gets dropped; AAL-5 CRC errors; invalid PDU length; missing cells; missing PDUs.
  Corrective action: Use the replenish function more often; use more and bigger IXP_BUFs.

Cause: Tx-Done overflow
  Consequences and side effects: The hardware is blocked because the Tx-Done queue is full; the system is unable to transmit a PDU.
  Corrective action: Poll the TxDone queue more often; change the TxDone threshold.

Cause: ixAtmdAccTxPduSubmit() reports IX_ATMD_OVERLOADED
  Consequences and side effects: The system is unable to transmit a PDU.
  Corrective action: Increase the scheduler-transmit speed; slow down the submitted traffic.

Cause: Rx overflow
  Consequences and side effects: Inbound traffic is dropped; AAL-5 CRC errors; invalid PDU length.
  Corrective action: Poll the Rx streams more often.
5 Access-Layer Components: ATM Manager (IxAtmm) API
This chapter describes the Intel® IXP400 Software v2.0’s “ATM Manager API” access-layer
component.
IxAtmm is an example IXP400 software component. The phrase “Atmm” stands for “ATM
Management.”
The chapter describes the following details of ixAtmm:
• Functionality and services
• Interfaces to use these services
• Conditions and constraints for using the services
• Dependency on other IXP400 software components
• Performance and resource usage
5.1 What’s New
There are no changes or enhancements to this component in software release 2.0.
5.2 IxAtmm Overview
The IXP400 software’s IxAtmm component is a demonstration ATM configuration and
management component intended as a “point of access” for clients to the ATM layer of the
IXP4XX product line and IXC1100 control plane processors.
This component, supplied only as a demonstration, encapsulates the configuration of ATM
components in one unit. It can be modified or replaced by the client as required.
5.3 IxAtmm Component Features
The ixAtmm component is an ATM-port, virtual-connection (VC), and VC-access manager. It does
not provide support for ATM OAM services and it does not directly move any ATM data.
IxAtmm services include:
• Configuring and tracking the usage of the (physical) ATM ports on IXP4XX product line and
IXC1100 control plane processors.
In software release 2.0, up to eight parallel logical ports are supported over UTOPIA Level 2.
IxAtmm configures the UTOPIA device for a port configuration supplied by the client.
• Initializing the IxAtmSch ATM Scheduler component for each active port.
IxAtmm assumes that the client will supply initial upstream port rates once the capacity of
each port is established.
• Ensuring traffic shaping is performed for each registered port.
IxAtmm acts as transmission control for a port by ensuring cell demand is communicated to
the IxAtmSch ATM Scheduler from IxAtmdAcc and cell transmission schedules produced by
IxAtmSch are supplied at a sufficient rate to the IxAtmdAcc component.
• Determining the policy for processing transmission buffers recycled from the hardware.
In the IXP400 software, the component will ensure this processing is done on an event-driven
basis. That is, a notification that a threshold number of recycled buffers is outstanding will trigger processing of the recycled buffers.
• Controlling the processing of receive buffers via IxAtmdAcc.
IxAtmdAcc supports two incoming Rx buffer streams termed high- and low-priority streams.
— The high-priority stream will be serviced in an event-driven manner. For example, as soon as a buffer is available in the stream, it will be serviced.
— The low-priority stream will be serviced on a timer basis.
• Allowing clients to register VCCs (Virtual Channel Connections) on all serving ATM ports for
transmitting and/or receiving ATM cells.
IxAtmm will check the validity (type of service, traffic descriptor, etc.) of the registration
request and will reject any request that presents invalid traffic parameters. IxAtmm does not
have the capability to signal, negotiate, and obtain network admission of a connection. The
client will make certain that the network has already admitted the requested connection before
registering a connection with IxAtmm.
IxAtmm also may reject a connection registration that exceeds the port capacity on a first-come, first-served basis, regardless of whether the connection has already been admitted by the network.
• Enabling query for the ATM port and registered VCC information on the port.
• Allowing the client to modify the port rate of any registered port after initialization.
5.4 UTOPIA Level-2 Port Initialization
IxAtmm is responsible for the initial configuration of the IXP4XX product line and IXC1100
control plane processors’ UTOPIA Level-2 device. This is performed through a user interface that
will facilitate specification of UTOPIA-specific parameters to the IxAtmm component.
IxAtmm supports up to eight logical ports over the UTOPIA interface.
The data required for each port to configure the UTOPIA device consists of the five-bit addresses of the transmit and receive PHY interfaces on the UTOPIA bus.
The UTOPIA device can also be initialized in loop-back mode. Loop-back is only supported,
however, in a single-port configuration.
All other UTOPIA configuration parameters are configured to a static state by the IxAtmm and are
not configurable through the functional interface of this component. Clients that require a greater level of control over the UTOPIA device should modify and recompile the IxAtmm component with the new static configuration. Alternatively, they can use the interface provided by the
IxAtmdAcc component.
5.5 ATM-Port Management Service Model
IxAtmm can be considered an “ATM-port management authority.” It does not directly perform data
movement, although it does control the ordering of cell transmission through the supply of ATM
cell-scheduling information to the lower levels.
IxAtmm manages the usage of registered ATM ports and will allow or disallow a VC to be
established on these ports — depending on existing active-traffic contracts and the current
upstream port rate.
Once a connection is established, a client can begin to use it. The client makes data transfer
requests directly to corresponding AAL layer through the IxAtmdAcc component. The AAL layer
passes the request to the IXP4XX product line and IXC1100 control plane processors though the
appropriate hardware layers, under direction from IxAtmm.
The IxAtmm service model consists of two basic concepts:
• ATM port
• VC/VCC (virtual channel/virtual channel connection) connections that are established over
this port
A VC is a virtual channel through a port. A VC is unidirectional and is associated with a unique VPI/VCI value. Two VCs — in opposite directions on the same port — can share the same VPI/VCI value. A VCC is an end-to-end connection through linked VCs, from the local ATM port to another device across the ATM network.
Initially, a port is “bare” or “empty.” A VC must be attached (registered) to a port. Registration means, “to let IxAtmm know that — from now on — the VC can be considered usable on this port.”
IxAtmm is not responsible for signaling and obtaining admission from the network for a VCC. A
client needs to use other means, where necessary, to obtain network admission of a VCC. A client
specifies to IxAtmm the traffic descriptor for the requested VCC. IxAtmm will accept or deny this
request based only on the port rate available and the current usage of the port by VCCs already
registered with the system. This CAC functionality is provided by the IxAtmSch component.
IxAtmm presumes that the client has already negotiated — or will negotiate — admission of the
VCC with the network.
Figure 22. Services Provided by IxAtmm
Figure 22 shows the main services provided by the IxAtmm component. In this diagram, the three
services outlined are:
• The IXP4XX product line and IXC1100 control plane processors’ system-initialization routine will
invoke an IxAtmm interface function to initialize the UTOPIA Level-2 device for all active
ATM ports in the system. This function call is only performed once, encompassing the
hardware configuration of all ports in a single call to the interface.
• Once the link is established for each active port and the line rates are known to the system,
IxAtmm is informed of the upstream and downstream rate for each port. The upstream rate is
required by the ATM scheduler component in order to provide traffic shaping and admission
services on the port. The port rates must be registered with IxAtmm before any VCs may be
registered. In addition, once the scheduling component is configured, it is bound to
IxAtmdAcc. This ensures shaped transmission of cells on the port.
• Once the port rate has been registered, the client may register VCs on the established ports.
Upstream and downstream VCs must be registered separately. The client is assumed to have
negotiated any required network access for these VCs before calling IxAtmm. IxAtmm may
refuse to register upstream VCs — the ATM scheduler’s admission refusal being based on port
capacity.
Once IxAtmm has allowed a VC, any future transmit and receive request on that VC will not
pass through IxAtmm. Instead, they go through the corresponding AAL layer directly to the
IXP4XX product line and IXC1100 control plane processors’ hardware.
Further calls to IxAtmdAcc must be made by the client following registration with IxAtmm to
fully enable data traffic on a VC.
IxAtmm does not support the registration of Virtual Path Connections (VPCs). Registration and
traffic shaping is performed by IxAtmm and IxAtmSch on the VC/VCC level only.
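The resulting bring-up order could be sketched as follows; the call names follow IxAtmm naming conventions but are not given in this chapter, so treat both the names and the argument lists as assumptions and consult the IxAtmm API reference:

    /* 1. One-time UTOPIA configuration covering all active ports. */
    ixAtmmUtopiaInit(numPorts, phyMode, portConfigs, loopbackMode);

    /* 2. Register up/downstream rates once the link is established;
          this also binds IxAtmSch to IxAtmdAcc for the port. */
    ixAtmmPortInitialize(port, upstreamRateCps, downstreamRateCps);

    /* 3. Register each unidirectional VC; IxAtmSch performs CAC and
          may refuse a VC that exceeds remaining port capacity. */
    ixAtmmVcRegister(port, &vcDescriptor, &vcId);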
5.6 Tx/Rx Control Configuration
The IxAtmm application is responsible for the configuration of the mechanism by which the lower-layer services will drive transmit and receive of traffic to and from the IXP4XX product line and
IXC1100 control plane processors’ hardware. This configuration is achieved through the
IxAtmdAcc component interface.
Configuration of these services will be performed when the first active port is registered with
IxAtmm.
IxAtmm will configure IxAtmdAcc for the following traffic events:
• Transmit Required — The IXP4XX product line and IXC1100 control plane processors’
hardware requires more cells to be scheduled for transmission on a particular port. IxAtmm
will implement a callback function that will be registered as a target for the low-queue
notification callback with IxAtmdAcc. When invoked, this function will generate a transmit
schedule table for the port through the IxAtmSch component and pass this table to the
IxAtmdAcc interface to cause more cells to be transmitted to the hardware, according to the
generated schedule table.
• Transmit Done — When all data from a particular buffer has been transmitted, it is necessary
for the IXP4XX product line and IXC1100 control plane processors’ hardware to return the
buffer to the relevant client. IxAtmm will configure the IXP4XX product line and IXC1100
control plane processors such that the processing of these buffers will be performed whenever
there are a specific number of buffers ready to be processed. IxAtmm will configure the
system such that the default IxAtmdAcc interface returns these buffers to the appropriate
clients and are then invoked automatically.
• High-Priority Receive — Data received on any high-priority receive channel (such as
voice traffic) is required to be supplied to the client in a timely manner. IxAtmm will configure
the IxAtmdAcc component to process the receipt of data on high-priority channels using a low
threshold value on the number of received data packets. The default IxAtmdAcc receive
processing interface will be invoked whenever the number of data packets received by the
IXP4XX product line and IXC1100 control plane processors reaches the supplied threshold.
These packets will then be dispatched to the relevant clients by the IxAtmdAcc component.
• Low-Priority Receive — Data received on low-priority receive channels (for example, data
traffic) is not as urgent for delivery as the high-priority data and is, therefore, expected to be
tolerant of some latency when being processed by the system. IxAtmm will configure the
IXP4XX product line and IXC1100 control plane processors such that the receive processing
of low-priority data will be handled according to a timer. This will cause the processing of this
data to occur at regular time intervals, each time returning all pending low-priority data to the
appropriate clients.
The IxAtmm component is responsible only for the configuration of this mechanism. Where
possible, the targets of threshold and timer callbacks are the default interfaces for the relevant
processing mechanism, as supplied by IxAtmdAcc. The exception is the processing of cell
transmission, which is driven by an IxAtmm callback interface that passes ATM scheduling
information to the IxAtmDAcc component, as required to drive the transmit function. As a result,
all data buffers in the system — once configured — will pass directly through IxAtmdAcc to the
appropriate clients. No data traffic will pass through the IxAtmm component at any stage.
Figure 23. Configuration of Traffic Control Mechanism
Only transmit traffic — which has already been queued by the client with IxAtmdAcc when the
request for more traffic is made — will be scheduled and sent to the hardware. (That is, no callback
to the data client will be made in the context of the transmit processing.) IxAtmdAcc makes
IxAtmSch aware of the existence of this pending traffic when it is queued by the client through the
use of a previously registered callback interface.
The supply of empty buffers to the hardware — for use in the receive direction — is the
responsibility of the individual client on each active VC. As a result, the target callback for this
event on each VC is outside of the visibility of the IxAtmm component, being part of the client
logic. It is the responsibility of each client, therefore, to ensure that the supply mechanism of free
buffers for receive processing is configured correctly before traffic may begin passing on the
system.
5.7 Dependencies
Figure 24. Component Dependencies of IxAtmm
IxAtmm configures the IXP4XX product line and IXC1100 control plane processors’ UTOPIA
Level-2 device through an interface provided by the IxAtmdAcc component.
IxAtmm is also responsible for configuring VC registrations with the IxAtmSch demo ATM
scheduler component and relaying CAC decisions to the client in the event of VC registration
failure.
IxAtmm is responsible for port traffic shaping by conveying traffic and scheduling information
between the ATM scheduler component and the cell transmission control interface provided by the
IxAtmdAcc component.
5.8 Error Handling
IxAtmm returns an error type to the user when the client is expected to handle the error. Internal errors will be reported using the IXP4XX product line and IXC1100 control plane processors’ standard error-reporting techniques.
The established state of the IxAtmm component (registered ports, VCs, etc.) is not affected by the
occurrence of any error.
5.9 Management Interfaces
No management interfaces are supported by the IxAtmm component. If a management interface is
required for the ATM layer, IxAtmm is the logical place for this interface to be implemented, as
the component is intended to provide an abstract public interface to the non-data path ATM
functions.
5.10 Memory Requirements
IxAtmm code is approximately 26 Kbytes in size.
IxAtmm data memory requirement — under peak cell-traffic load — is approximately 20 Kbytes.
5.11 Performance
The IxAtmm does not operate on the data path of the IXP4XX product line and IXC1100 control
plane processors. Because it is primarily concerned with registration and deregistration of port and
VC data, IxAtmm is typically executed during system initialization.
6 Access-Layer Components: ATM Transmit Scheduler (IxAtmSch) API
This chapter describes the Intel® IXP400 Software v2.0’s “ATM Transmit Scheduler” (IxAtmSch)
access-layer component.
6.1 What's New
There are no changes or enhancements to this component in software release 2.0.
6.2 Overview
IxAtmSch is an "example" component of software release 2.0: an ATM scheduler supporting
ATM transmit services on IXP4XX product line and IXC1100 control plane processors.
This chapter discusses the following IxAtmSch component details:
• Functionality and services
• Interfaces to use the services
• Conditions and constraints for using the services
• Component dependencies on other IXP400 software components
• Component performance and resource usage estimates
IxAtmSch is a simplified scheduler with limited capabilities. See Table 9 on page 80 for details of
scheduler capabilities.
The IxAtmSch API is specifically designed to be compatible with the IxAtmdAcc transmission-control
interface. However, clients who decide to replace this scheduler implementation are
urged to reuse the API presented by this component.
IxAtmSch conforms to interface definitions for the IXP4XX product line and IXC1100 control
plane processors’ ATM transmission-control schedulers.
6.3 IxAtmSch Component Features
The IxAtmSch component is provided as a demonstration ATM scheduler for use in the processor's
ATM transmit path. It provides two basic services for managing transmission on ATM ports:
• Outbound (transmission) virtual connection admission control on serving ATM ports
• A schedule table, supplied to the ATM transmit function, containing information for ATM cell
scheduling and shaping
IxAtmSch implements a fully operational ATM traffic scheduler for use in the processor’s ATM
software stack. It is possible (within the complete IXP400 software architecture) to replace this
scheduler with one of a different design. If replaced, this component is still valuable as a model of
the interfaces that a replacement scheduler requires to be compatible with the IXP400 software
ATM stack. IxAtmSch complies with the interface types for an IXP400-software-compatible ATM
scheduler, as defined by the IxAtmdAcc software component.
The IxAtmSch service model consists of two basic concepts: ATM port and VCC. Instead of
dealing directly with the real hardware and software entities in the processor and software stack,
IxAtmSch models them. Because of this, there is no limit to how many ATM ports it can model and
schedule — given enough run-time computational resources.
IxAtmSch does not currently model or schedule Virtual Paths (VPs) or support any VC aggregation
capability.
In order to use IxAtmSch services, a client first must ask IxAtmSch to establish the model for an
ATM port. Virtual connections then can be attached to the port.
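As a rough illustration of this sequence, the sketch below models one port and admits one UBR VC. The entry points ixAtmSchPortModelInitialize() and ixAtmSchVcModelSetup(), their parameter lists, and the traffic-descriptor field names are assumptions patterned on the IxAtmSch naming conventions; the exact signatures should be taken from IxAtmSch.h.

    #include <string.h>
    #include "IxAtmSch.h"    /* IxAtmSch entry points (names assumed) */
    #include "IxAtmTypes.h"  /* traffic descriptor and port types */

    static void setupAtmPort(void)
    {
        IxAtmTrafficDescriptor td;
        IxAtmSchedulerVcId vcId;

        /* Model logical port 0: outbound rate in cells/s, plus the minimum
         * number of cells the scheduler must place in each schedule table. */
        ixAtmSchPortModelInitialize(IX_UTOPIA_PORT_0,
                                    60000, /* port rate, cells/s */
                                    10);   /* minCellsToSchedule */

        /* Describe the VC's traffic contract and ask the CAC to admit it. */
        memset(&td, 0, sizeof(td));
        td.atmService = IX_ATM_UBR; /* service category (field name assumed) */
        td.pcr        = 15000;      /* peak cell rate, cells/s */
        if (ixAtmSchVcModelSetup(IX_UTOPIA_PORT_0, &td, &vcId) != IX_SUCCESS)
        {
            /* CAC rejected the VC: the port's capacity would be exceeded. */
        }
    }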
IxAtmSch models the virtual connections and controls the admission of a virtual connection, based
on the port model and required traffic parameters. IxAtmSch schedules and shapes the outbound
traffic for all VCs on the ATM port. IxAtmSch generates a scheduling table detailing a list of VCs
and the number of cells of each to transmit in a particular order.
The IxAtmSch component's two basic services are related. If a VC is admitted on the ATM port,
IxAtmSch is committed to scheduling all outbound cells for that VC so that they conform to
the traffic descriptor. The scheduler does not reject cells for transmission as long as the transmitting
user(s) (applications) do not over-submit. Conflicts may happen on the ATM port because multiple
VCs are established to transmit on the port.
If a scheduling commitment cannot be met for a particular VC, that VC is not admitted. The IxAtmSch
component admits a VC based only on the port capacity, current port usage, and required traffic
parameters.
The current resource requirements are for a maximum of eight ports and a total of 32 VCs across
all ports. This may increase in the future.
Table 9 shows the ATM service categories that are supported in the current scheduler model.
Table 9. Supported Traffic Types

Traffic Type | Supported       | Num VCs            | CDVT  | PCR | SCR    | MCR | MBS
rt-VBR       | Yes             | Single VC per port | Yes†† | Yes | = PCR† | No  | No
nrt-VBR      | Yes             | Single VC per port | Yes   | Yes | Yes    | No  | Yes
UBR          | Yes             | Up to 32 VCs       | No    | Yes | No     | No  | No
CBR          | Yes — simulated | Single VC per port | No    | Yes | Yes    | No  | No

† This scheduler implementation is special purpose and assumes SCR = PCR.
†† The CDVT does not comply with the ATM-TM-4.1 standard.
6.4 Connection Admission Control (CAC) Function
IxAtmSch makes outbound virtual connection admission decisions based on a simple ATM port
reference model. Only one parameter is needed to establish the model: the outbound (upstream) port
rate R, in terms of 53-byte ATM cells per second.
IxAtmSch assumes that the “real-world” ATM port is a continuous pipe that draws the ATM cells
at the constant cell rate. IxAtmSch does not rely on a hardware clock to get the timing. Its timing
information is derived from the port rate. It assumes T = 1/R seconds pass for sending every ATM
cell.
IxAtmSch determines if a new (modeled) VC admission request on any ATM port is acceptable
using the following information supplied by its client:
• Outbound port rate
• Required traffic parameters for the new VC
• Traffic parameters of existing VCs on that port
IxAtmSch works on a first-come-first-served basis. For example, if three existing CBR VCs on the
ATM port each use one-fourth of the port’s capacity (PCR = R/4), the fourth CBR VCC asking for
1/3 of the port capacity (PCR = R/3) will be rejected. IxAtmSch issues a globally unique VCC ID
for each accepted VCC.
For non-CBR real time VCs — where the SCR and PCR values are different — only the SCR value
is used to determine the required capacity for the VC. This is based on the principle that, over a
long term, the required capacity of the VC will be equal to the SCR value, even if the VC may burst
at rates above that rate for short periods.
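The admission rule reduces to a simple capacity comparison. The helper below is purely illustrative (it is not part of the IxAtmSch API); it counts SCR for VBR VCs and PCR for all others, per the principle just described.

    /* Illustrative CAC check; not an IxAtmSch API function. */
    typedef struct
    {
        unsigned pcr;   /* peak cell rate, cells/s */
        unsigned scr;   /* sustained cell rate, cells/s */
        int      isVbr; /* non-zero for VBR service categories */
    } VcContract;

    /* Return 1 if a new VC fits on a port of rate portRate (cells/s). */
    static int cacWouldAdmit(unsigned portRate,
                             const VcContract *existing, unsigned numExisting,
                             const VcContract *request)
    {
        unsigned used = 0;
        unsigned need = request->isVbr ? request->scr : request->pcr;
        unsigned i;

        for (i = 0; i < numExisting; i++)
            used += existing[i].isVbr ? existing[i].scr : existing[i].pcr;

        return (used + need) <= portRate;
    }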
Upon a successful registration via the CAC function, each VC is issued a port-unique identifier
value. This value is a positive integer. This value is used to identify the VC to IxAtmSch during
any subsequent calls. The combination of port and VC ID values will uniquely identify any VC in
the processor device to the IxAtmSch component.
6.5 Scheduling and Traffic Shaping
Figure 25. Multiple VCs for Each Port, Multiplexed onto Single Line by the ATM Scheduler
(Diagram: VCs submit demand for transmit of ATM cells. The IxAtmSch component determines
when to schedule each cell on the physical port. Cells are queued for transmission on each port
based on this schedule table, such that all traffic contracts are fulfilled. VCs 1 through 6, spread
across Ports 1 through 3, are multiplexed by the IxAtmSch component into one schedule table per port.)
6.5.1 Schedule Table
Once an ATM port is modeled and VCs are admitted on it, the client can request IxAtmSch to
publish the schedule table that indicates how the cells — on all modeled VCs over the port — will
be interleaved and transmitted.
IxAtmSch publishes a scheduling table each time its scheduling function is called by a client for a
particular port. The schedule table data structure returned specifies an ordering in which cells
should be transmitted from each VC on the port for a forthcoming period. The client is expected to
request a table for a port when the transmit queue is low on that port.
The number of cells that are scheduled by each call to the scheduling function will vary depending
on the traffic conditions. The schedule table contains an element, totalCellSlots, which specifies
how many cell slots are scheduled in the returned table, including idle cells.
When the client calls the schedule function, the scheduler assumes that all previously scheduled
cells on this port have been transmitted and that it may overwrite the previous schedule table with
the new table. The client, therefore, must not depend on the integrity of the previous table
when a request is made for a new schedule table. Additionally, the client should ensure that the
current schedule table has been processed by the transmit mechanism before it requests a new
table.
The schedule table is composed of an array of table entries, each of which specifies a VC ID and a
number of cells to transmit from that VC. The scheduler explicitly inserts idle cells into the table,
where necessary, to fulfill the traffic contract of the VCs registered in the system. Idle cells are
inserted in the table with the VC identifier set to 0.
The exact format of the schedule table is defined in IxAtmTypes.h.
Figure 26 shows how this table is translated into an ordered sequence of cells transmitted to the
ATM port.
Figure 26. Translation of IxAtmScheduleTable Structure to ATM Tx Cell Ordering
(Diagram: an IxAtmScheduleTable with tableSize 5 and totalCellSlots 9 points to an
IxAtmScheduleTableEntry array holding {vcUserConnId 1, numberOfCells 2}, {2, 3}, {0, 1},
{1, 1}, {2, 2}. The cells are transmitted on the ATM line in the order specified, numbered by
vcUserConnId (0 indicates an idle cell): 1 1 2 2 2 0 1 2 2.)
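A transmit-control loop that consumes such a table might look like the sketch below. The IxAtmScheduleTable and IxAtmScheduleTableEntry field names follow Figure 26 and IxAtmTypes.h; the two transmit helpers are hypothetical stand-ins for the hardware-specific cell path.

    #include "IxAtmTypes.h"  /* IxAtmScheduleTable (fields per Figure 26) */

    extern void transmitIdleCell(void);              /* hypothetical */
    extern void transmitCellFromVc(unsigned connId); /* hypothetical */

    /* Walk the table in order, sending numberOfCells cells per entry;
     * vcUserConnId 0 marks an idle-cell slot. */
    static void txProcessScheduleTable(const IxAtmScheduleTable *tbl)
    {
        unsigned i, c;

        for (i = 0; i < tbl->tableSize; i++)
        {
            const IxAtmScheduleTableEntry *e = &tbl->table[i];

            for (c = 0; c < e->numberOfCells; c++)
            {
                if (e->vcUserConnId == 0)
                    transmitIdleCell();
                else
                    transmitCellFromVc(e->vcUserConnId);
            }
        }
    }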
6.5.1.1 Minimum Cells Value (minCellsToSchedule)
When a port model is created, the minimum number of cells (minCellsToSchedule) that the
scheduler should schedule per table is specified. Therefore, as long as there is at least one cell
available to schedule, the scheduler guarantees to generate a table whose totalCellSlots value is at
least minCellsToSchedule. If the number of outstanding cells available for scheduling is less than
minCellsToSchedule, idle cells are scheduled to make up the difference. This value is set up once
per port and cannot be modified.
Note: The minCellsToSchedule facility is provided to simplify the transmission control code in the case
where queue threshold values are used to drive scheduling. The threshold value in cells can be
matched to minCellsToSchedule so that the scheduler is always guaranteed to schedule enough
cells to fill the Tx Q above its threshold value.
6.5.1.2 Maximum Cells Value (maxCells)
The maximum number of cells that the scheduler produces in a table can be limited by the
maxCells parameter. This is controllable on a table-by-table basis. The actual number of cells
scheduled will be the lesser of maxCells and minCellsToSchedule.
6.5.2 Schedule Service Model
IxAtmSch provides schedule service through two functional interfaces: “VC queue update” and
“Schedule table update.”
The client calls the VC queue update interface whenever the user of the VC submits cells for
transmission. The structure of the VC queue update interface is compatible with the requirements
of the IxAtmdAcc component.
The client calls the schedule-table-update interface whenever it needs a new table. Internally,
IxAtmSch maintains a transmit queue for each VC.
IxAtmSch also provides a “VC queue clear” interface for use when the client wishes to cancel
pending demand on a particular VC. This interface is useful, for example, when the client wishes to
remove a VC from the system.
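Taken together, a client might drive the three interfaces roughly as follows. The function names ixAtmSchVcQueueUpdate(), ixAtmSchTableUpdate(), and ixAtmSchVcQueueClear() and their signatures are assumptions patterned on the IxAtmSch API and should be verified against IxAtmSch.h.

    static void driveScheduler(IxAtmSchedulerVcId vcId)
    {
        IxAtmScheduleTable *pTable;

        /* 1. The VC user queued 4 cells: report the new demand. */
        ixAtmSchVcQueueUpdate(IX_UTOPIA_PORT_0, vcId, 4);

        /* 2. The transmit queue is low: request a fresh table. */
        if (ixAtmSchTableUpdate(IX_UTOPIA_PORT_0, 32 /* maxCells */,
                                &pTable) == IX_SUCCESS)
        {
            /* Hand pTable to the transmit mechanism here. */
        }

        /* 3. On VC teardown: cancel any demand still pending. */
        ixAtmSchVcQueueClear(IX_UTOPIA_PORT_0, vcId);
    }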
6.5.3 Timing and Idle Cells
IxAtmSch does not rely on a hardware clock for timing. Instead, the component derives timing
information from the supplied port transmit rate for each modeled ATM port. IxAtmSch assumes
that T = 1/R seconds pass for sending every ATM cell. IxAtmSch also assumes that all cells
scheduled in a schedule table are transmitted immediately following the cells previously scheduled
by the scheduler on that port. (No cells — other than those scheduled by IxAtmSch — are being
transmitted on the port.)
The client is responsible for calling "update table" in a timely fashion whenever demand is present.
Suppose the "update table" calls for a port correspond to time spans T(1), T(2), …, where T(n) is
the time needed to transmit the cells scheduled in the n'th updated table. Then, if the demand is
always there, the client must make the n'th "update table" call before T(1)+T(2)+…+T(n-1) has
passed, assuming the client's first such call is at time 0. This can be easily achieved by making sure
that the port's transmit queue is never empty while demand is continuously pouring in.
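As an illustrative calculation with arbitrarily chosen numbers: if the port rate is R = 10,000 cells/s and every table schedules 500 cells, then

    T(n) = \frac{500}{10\,000}\ \mathrm{s} = 50\ \mathrm{ms} \qquad \text{for every } n,

so, with the first call at time 0, the client must make its third "update table" call before T(1) + T(2) = 100 ms have elapsed.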
When all registered VC transmit queues are exhausted, an empty schedule table is returned by the
ixAtmSchTableUpdate interface. It is assumed that the client will instruct the lower layers to
transmit idle cells until new cells are submitted for transmit on a registered VC. IxAtmSch is not
aware of the number of idle cells transmitted in this situation and will reset its internal clock to its
starting configuration when new cells are queued.
A further interface is provided to allow the client to update the transmit port rate of an ATM port
which has already been registered with the IxAtmSch component and may have established VCs with
pending transmit demand. This interface is provided to cater for line-rate drift, as can occur on the
transmit medium.
In the event that the new port rate is insufficient to support all established VC transmit contracts,
IxAtmSch will refuse to perform the modification. The client is expected to explicitly remove or
modify some established VCs in this event, such that all established contracts can be maintained,
and then resubmit the request to modify the ATM port transmit rate.
Note: If UBR VCs are registered with a PCR based on the initial line rate, and the line rate
subsequently drops below the PCR values supplied for those UBR connections, the scheduler will
still allow the port rate change.
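A sketch of the resulting client-side handling follows; the entry point ixAtmSchPortRateModify() and its signature are assumed from the IxAtmSch naming conventions.

    static void handleLineRateDrift(unsigned newPortRate)
    {
        /* Apply a corrected rate after line-rate drift is detected
         * (entry-point name and signature assumed). */
        if (ixAtmSchPortRateModify(IX_UTOPIA_PORT_0, newPortRate) != IX_SUCCESS)
        {
            /* The new rate cannot honor every established VC contract:
             * remove or modify a VC, then resubmit the rate change. */
        }
    }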
6.6 Dependencies
The IxAtmSch component has an idealized local view of the system and is not dependent on any
other IXP400 software component.
Some function interfaces supplied by the IxAtmSch component adhere to structure
requirements specified by the IxAtmdAcc component. However, no explicit dependency exists
between the IxAtmSch component and the IxAtmdAcc component.
6.7 Error Handling
IxAtmSch returns an error type to the user when the client is expected to handle the error. Internal
errors will be reported using standard processor error-reporting techniques.
6.8 Memory Requirements
Memory estimates have been sub-divided into two main areas: performance critical and not
performance critical.
6.8.1 Code Size
The ixAtmSch code size is approximately 35 Kbytes.
6.8.2 Data Memory
There are a maximum of 32 VCs per port and eight ports supported by the IxAtmSch component.
These multipliers are used in Table 10.
Table 10. IxAtmSch Data Memory Usage (Per VC Data | Per Port Data | Total)
6.9 Performance
The key performance measure for the IxAtmSch component is the rate at which it can generate the
schedule table, measured by time per cell. The rate at which queue updates are performed is also
important. As this second situation will happen less frequently, however — because a great many
cells may be queued in one call to the update function — it is of secondary importance.
The remaining functionality provided by the IxAtmSch is infrequent in nature, being used to
initialize or modify the configuration of the component. This situation is not performance-critical
as it does not affect the data path of the IXP42X product line processors.
6.9.1 Latency
The transmit latency introduced by the IxAtmSch component into the overall transmit path of the
processor will be zero under normal operating conditions. This is due to the fact that — when
traffic is queued for transmission — scheduling will be performed in advance of the cell slots on
the physical line becoming available to transmit the cells that are queued.
7 Access-Layer Components: Security (IxCryptoAcc) API
This chapter describes the Intel® IXP400 Software v2.0's "Security API" (IxCryptoAcc) access-layer component.
The Security Hardware Accelerator access component (IxCryptoAcc) provides support for the
authentication and encryption/decryption services needed in cryptographic applications, such as
IPSec authentication and encryption services, SSL, or WEP. Depending on the cryptographic
algorithm used, cryptography clients can offload the task of encryption/decryption from the Intel
XScale core by using the crypto coprocessor. Clients can also offload the task of authentication by
using the hashing coprocessor.
7.1 What's New
There are no changes to this component in software release 2.0. However, the API has been
enhanced by the creation of a new function alias.
ixCryptoAccHashPerform() has been added to help clarify that the API can be used to generate a
generic SHA1 or MD5 hash value. This function is aliased to ixCryptoAccHashKeyGenerate().
7.2 Overview
The IxCryptoAcc component provides the following major capabilities:
• Operating modes:
— Encryption only
— Decryption only
— Authentication calculation only
— Authentication check only
— Encryption followed by authentication calculation (for IPSec and WEP clients)
— Authentication check followed by decryption (for IPSec and WEP clients)
• Mode of operation for encryption and decryption:
— NULL (for stream ciphers, like ARC4)
— ECB
— CBC
— CTR (for AES algorithm only)
— Single-Pass AES-CCM encryption and security for 802.11i.
• Authentication algorithms:
— HMAC-SHA1 (512-bit data block size, from 20-byte to 64-byte key size)
— HMAC-MD5 (512-bit data block size, from 16-byte to 64-byte key size)
— SHA1/MD5 (basic hashing functionality)
— WEP ICV generation and verification using the 802.11 WEP standard 32-bit CRC
polynomial.
• Supports a maximum of 1,000 simultaneous security associations (tunnels). (A Security
Association [SA] is a simplex "connection" that affords security services to the traffic carried
by it.)
7.3 IxCryptoAcc API Architecture
The IxCryptoAcc API is an access-layer component that provides cryptographic services to a client
application. This section describes the overall architecture of the API. Subsequent sections
describe the component parts of the API in more detail and describe usage models for the IxCrypto
API.
7.3.1 IxCryptoAcc Interfaces
IxCryptoAcc is the API that provides cryptography-acceleration features in software release 2.0.
This API contains functions that can generally be grouped into two distinct “services.” One service
is for IPSec-type cryptography protocols that utilize a combination of encryption (e.g., 3DES or
AES) and/or authentication processing (e.g., SHA-1, MD5) in a variety of different operating
modes (ECB, CBC, etc.). Throughout this document, the term “IPSec client” is used to refer to the
type of application that uses the IxCryptoAcc API in this manner. There are specific API features to
support this type of client.
The second service type is designed for 802.11-based WEP security client implementations. The
IxCryptoAcc API provides specific features that perform WEP ICV generation and ARC4 stream
cipher encryption and decryption. The “WEP services” in the API are used by “WEP clients”.
Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocol clients can use some of
the features of both types of services.
The IPSec and WEP clients are application-level code executing on the Intel XScale core that
utilize the services provided by IxCryptoAcc. In this software release, the IxCryptoAccCodelet is
provided as an example of client software.
The API utilizes a number of other access-layer components, as well as hardware-based
acceleration functionality available on the NPEs and Intel XScale core. Figure 27 on page 90
shows the high-level architecture of IxCryptoAcc.
The Intel XScale core WEP Engine is a software-based "engine" for performing the ARC4 and WEP
ICV calculations used by WEP clients. While this differs from the model of NPE-based hardware
acceleration typically found in the IXP400 software, it provides additional design flexibility for
products that require NPE A to perform non-crypto operations.
IxQMgr is another access-layer component that interfaces to the hardware-based AHB Queue
Manager (AQM). The AQM is SRAM memory used to store pointers to data in SDRAM memory,
which is accessible by both the Intel XScale core and the NPEs. These items are the mechanism by
which data is transferred between IxCryptoAcc and the NPEs. Separate hardware queues are used
for both IPSec and WEP services.
The NPEs provide hardware acceleration for IxCryptoAcc. Specifically, AES, DES, and hashing
acceleration can be provided by NPE C. NPE A offers ARC4 and WEP ICV CRC acceleration.
Note: The IxCryptoAcc access-layer component provides APIs that are generic for all applications that
need to perform encryption and authentication operations. In this chapter, IPSec is used as one
example that makes use of the IxCryptoAcc access-layer API to perform the authentication and
encryption operations needed for an IPSec implementation.
7.3.2 Basic API Flow
This section describes a high-level flow of the IxCryptoAcc API. A more detailed example of API
usage is provided in a subsequent section.
The flow of the API is similar for both IPSec and WEP services. The client application initializes
the IxCryptoAcc API and then defines the cryptographic contexts (which describe the
cryptographic processing type, mode, direction, and a pointer back to the client application
callback) necessary for the type of data the client will be submitting to the API. Packets for
encryption/decryption and/or authentication are prepared by the client and passed to the
IxCryptoAcc component using a “Perform” function of the API, referencing a particular
cryptographic context for each packet. IxCryptoAcc invokes IxQMgr to instruct the NPEs to gather
the data and appropriate crypto context information from SDRAM.
The NPE (or Intel XScale core WEP Engine) performs encryption/decryption and authentication
using the appropriate acceleration component. The resulting data is stored back into the SDRAM.
At this point, a previously registered callback will be executed (in most cases), giving the
execution context back to the client application.
The IxCryptoAcc component depends on the IxQMgr component to configure and use the
hardware queues to access the NPE.
The basic API flow described above is shown in Figure 27.
Figure 27. Basic IxCryptoAcc API Flow
(Diagram: an IPSec client and a WEP client on the Intel XScale core issue
authentication/encryption/decryption requests to the IxCryptoAcc access-layer component and
receive a callback when each operation completes. IxCryptoAcc communicates with the NPEs via
IxQMgr and the AHB Queue Manager (AQM) over the IXP4XX north AHB bus: NPE C provides
the AES, DES, and hashing coprocessors, and NPE A provides the AAL coprocessor for CRC
acceleration. WEP requests may instead be served by the Intel XScale core WEP Engine.)
7.3.3 Context Registration and the Cryptographic Context Database
The IxCryptoAcc access component supports up to 1,000 simultaneous security association (SA)
tunnels. While the term SA is well-known in the context of IPSec services, the IxCryptoAcc
component defines these security associations more generically, as they can be used for WEP
services as well. Depending upon the application's requirements, the maximum number of active
tunnels supported by the IxCryptoAcc access-layer component can be changed by the client. The
number of active tunnels will not have any impact on performance, but will have an impact on the
memory needed to keep the crypto context information. The memory requirement will depend on the
number of tunnels.
Each cryptographic “connection” is defined by registering it as a cryptographic context containing
information such as algorithms, keys, and modes. Each of these connections is given an ID during
the context registration process and stored in the Cryptographic Context Database. The information
stored in the CCD is kept in a structure detailed below, and is used by the NPE or Intel XScale
core WEP Engine to determine the specific details of how to perform the cryptographic processing
on submitted data.
The context-registration process creates the structures within the CCD, but the crypto context for
each connection must be previously defined in an IxCryptoAccCtx structure. The IxCryptoAccCtx
structure contains the following information:
• The type of operation for this context. For example, encrypt, decrypt, authenticate, encrypt and
authenticate, etc.
• Cipher parameters, such as algorithm, mode, and key length
• Authentication parameters, such as algorithm, digest length, and hash length
• In-place versus non-in-place operation. In-place operation means that once the crypto processing
of the source data is completed, the resulting data is placed into the same IX_MBUF it was
read from.
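A sketch of populating such a context follows, for an encrypt-then-authenticate connection. The field and constant spellings here are assumptions modeled on the IxCryptoAcc.h naming conventions and should be checked against that header.

    #include <string.h>
    #include "IxCryptoAcc.h"

    static void buildEncryptAuthContext(IxCryptoAccCtx *ctx)
    {
        memset(ctx, 0, sizeof(*ctx));

        /* Operation type: encrypt, then compute the digest. */
        ctx->operation = IX_CRYPTO_ACC_OP_ENCRYPT_AUTH;        /* assumed name */

        /* Cipher parameters. */
        ctx->cipherCtx.cipherAlgo = IX_CRYPTO_ACC_CIPHER_3DES; /* assumed name */
        ctx->cipherCtx.cipherMode = IX_CRYPTO_ACC_MODE_CBC;    /* assumed name */

        /* Authentication parameters. */
        ctx->authCtx.authAlgo = IX_CRYPTO_ACC_AUTH_SHA1;       /* assumed name */

        /* In-place operation: the result lands in the source IX_MBUF. */
        ctx->useDifferentSrcAndDestMbufs = FALSE;              /* assumed field */
    }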
When the client calls the ixCryptoAccCtxRegister() function, the following data
must be provided or received:
• The client provides a pointer to the crypto context (i.e., SA definition) being registered.
• The client is required to allocate two IX_MBUFs that the hardware accelerator will populate
with the primary and secondary chaining variables.
• The client must register two callbacks. One callback is executed upon the completion of the
registration function, the second is executed each time a cryptographic procedure (“perform”
functions) has completed on the NPE for this context. There is one exception for the perform
callback function, noted in section “ixCryptoAccXscaleWepPerform()” on page 108.
• The function returns a context ID upon successful registration in the CCD.
Figure 28 on page 92 shows the IxCryptoAcc API call process flow that occurs when registering
security associations within the CCD. This process is identical for both IPSec and WEP services
except in situations where NPE-based acceleration will not be used, such as when WEP services
are performed using only the Intel XScale core WEP engine. For more detailed information on this
usage model, see "ixCryptoAccXscaleWepPerform()" on page 108.
Figure 28. IxCryptoAcc API Call Process Flow for CCD Updates
1. The proper NPE microcode images must be downloaded to the NPEs and initialized, if
applicable.
2. IxCryptoAcc must be configured appropriately according to the NPEs and services that will be
utilized. By default, ixCryptoAccConfig() configures the component to use NPE C and
enables the Intel XScale core WEP engine.
3. IxCryptoAcc must be initialized. At this point the client application should define the crypto
context to be registered, as well as create the buffers for the initial chaining variables.
4. The crypto context must be registered using the ixCryptoAccCtxRegister() function.
5. The IxCryptoAcc API will write the crypto context structure to SDRAM. If NPE-based
acceleration is being used, IxCryptoAcc will use IxQMgr to place a descriptor for the crypto
context being registered into the Crypto Request Queue.
6. The NPE will read the descriptor on the Crypto Ready Queue, generate any reverse keys
required, and generate the initial chaining variable if required.
7. The NPE or Intel XScale core WEP Engine writes the resulting data in the Crypto Context
Database residing in SDRAM. The NPE will then enqueue a descriptor onto the Crypto
Complete Queue to alert the IxCryptoAcc component that registration is complete.
8. IxCryptoAcc will return a context ID to the client application upon successful context
registration, and will call the Register Complete callback function.
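A sketch of step 4 from the client's side follows. The ixCryptoAccCtxRegister() parameter order is an assumption based on the data items listed above (context, two chaining-variable mbufs, two callbacks, returned context ID) and should be confirmed against IxCryptoAcc.h.

    extern IxCryptoAccCtx ctx;                 /* filled in as shown earlier */
    extern IX_OSAL_MBUF  *pPrimaryChainVar;    /* client-allocated */
    extern IX_OSAL_MBUF  *pSecondaryChainVar;  /* client-allocated */
    extern void registerCb();                  /* registration-done callback */
    extern void performCb();                   /* per-request-done callback */

    static void registerContext(void)
    {
        UINT32            cryptoCtxId;
        IxCryptoAccStatus st;

        st = ixCryptoAccCtxRegister(&ctx,
                                    pPrimaryChainVar,
                                    pSecondaryChainVar,
                                    registerCb,
                                    performCb,
                                    &cryptoCtxId);  /* returned context ID */
        if (st != IX_CRYPTO_ACC_STATUS_SUCCESS)     /* assumed status name */
        {
            /* Registration failed; st indicates the cause. */
        }
    }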
7.3.4 Buffer and Queue Management
The IX_OSAL_MBUF buffer format is for use between the IxCryptoAcc access component and
the client. All buffers used between the IxCryptoAcc access component and clients are allocated
and freed by the clients. The client will allocate the IX_OSAL_MBUFs and the buffers will be
passed to IxCryptoAcc. The CryptoAcc access-layer component will allocate memory for the
CCD. The client passes a buffer to IxCryptoAcc when it requests hardware-accelerator services,
and the IxCryptoAcc component returns the buffer to the client when the requested job is done.
The component assumes that the allocated IX_OSAL_MBUFs are sufficient in length and no
checking has been put in place for the IX_MBUF length within the IX_OSAL_MBUF structure.
There is, however, IX_MBUF checking when the code is compiled in DEBUG mode. When
appending the ICV at the end of the payload, it is assumed that the IX_OSAL_MBUF’s length is
sufficient and will not cause memory segmentation. The ICV offset should be within the length of
the IX_MBUF.
Depending on the transfer mode (in-place or non-in-place) selected before returning the buffer to
the client, the encrypted/decrypted payload is written into the source buffer or the destination
buffer. This selection of in-place versus non-in-place buffer operation may be defined for each
crypto context prior to context registration.
When the AHB Queue Manager is full, the hardware accelerator will return
IX_CRYPTO_ACC_QUEUE_FULL to the client. The client will have to re-send the data to be
encrypted or decrypted or authenticated after a random interval.
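A retry loop in the client might look like the sketch below. The perform call is wrapped in a hypothetical helper so that its argument list is not guessed at; the status-constant spelling follows the text above and should be checked against IxCryptoAcc.h.

    /* Hypothetical wrapper whose body would invoke the appropriate
     * perform function with the client's mbufs and offsets. */
    extern IxCryptoAccStatus submitCryptoRequest(void);
    extern void waitRandomInterval(void);   /* hypothetical back-off helper */

    static void submitWithRetry(void)
    {
        IxCryptoAccStatus st;

        do
        {
            st = submitCryptoRequest();
            if (st == IX_CRYPTO_ACC_QUEUE_FULL)  /* name from the text above */
                waitRandomInterval();            /* back off, then re-send */
        }
        while (st == IX_CRYPTO_ACC_QUEUE_FULL);
    }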
7.3.5 Memory Requirements
This section shows the amount of data memory required by IxCryptoAcc for it to operate under
peak call-traffic load. The IxCryptoAcc component allocates its own memory for the CCD to store
the required information, and for the NPE queue descriptors required when using NPE-based
acceleration. The total memory allocation follows this general formula:
Total Memory Allocation = (size of NPE queue descriptor + size of additional authentication data)
* (number of descriptors) + (size of crypto context) * (number of crypto contexts).
This shows the memory requirements for 1,000 security associations, the default value set by
IX_CRYPTO_ACC_MAX_ACTIVE_SA_TUNNELS. This value can be increased or decreased as
needed by the client.
Table 11. IxCryptoAcc Data Memory Usage

Structure | Size in Bytes | Total Size in Bytes
NPE Queue Descriptor | 96 |
Additional Authentication Data | 64 |
Total Memory per NPE Descriptor | | 96 + 64 = 160
Number of NPE Descriptors | 278 |
Total Memory Allocated for NPE Descriptors | | 160 * 278 = 44,480
Crypto Context | 152 |
Number of Crypto Contexts (IX_CRYPTO_ACC_MAX_ACTIVE_SA_TUNNELS) | 1,000 |
Total Memory Allocated for Crypto Contexts | | 152 * 1,000 = 152,000
Size of KeyCryptoParam Structures | 256 |
Total Memory Allocated for KeyCryptoParam Structures | | 104 * 256 = 26,624
Total Memory Allocated by IxCryptoAcc | | 44,480 + 152,000 + 26,624 ≈ 218 Kbytes
7.3.6 Dependencies
Figure 29 shows the component dependencies of the IxCryptoAcc component.
Figure 29. IxCryptoAcc Component Dependencies
(Diagram: the client calls IxCryptoAcc. IxCryptoAcc depends on the Queue Manager (IxQMgr),
the OS Abstraction Layer (OSAL), and IxFeatureCtrl; NPE A (the WAN/voice NPE, hosting the
WEP engine) and NPE C (Ethernet NPE B) are optional dependencies.)
Figure 29 can be summarized as follows:
• Client component will call IxCryptoAcc for cryptographic services. NPE will perform the
encryption, decryption, and authentication process via IxQMgr.
• IxCryptoAcc depends on the IxQMgr component to configure and use the hardware queues to
access the NPE.
• OS Abstraction Layer access-component is used for error handling and reporting,
IX_OSAL_MBUF handling, endianness handling, mutex handling, and for memory
allocation.
• The IxFeatureCtrl access-layer component is used to detect the processor capabilities at runtime,
to ensure the necessary hardware-acceleration features are available for the requested
cryptographic context registrations. IxFeatureCtrl will only issue a warning, and will not
return any errors, if it detects that the hardware-acceleration features are not available on the
silicon. Clients should make sure that they do not use the cryptographic features if a
particular version of silicon does not support them.
• In situations where only the Intel XScale core WEP Engine is used, the IxQMgr component is
not utilized. Instead, local memory is used to pass context between the IxCryptoAcc API and
the Intel XScale core WEP Engine.
After the CCD has been updated, the API can then be used to perform cryptographic processing on
client data for a given crypto context. This service-request functionality of the API is described in
"IPSec Services" on page 96 and "WEP Services" on page 106.
7.3.7 Other API Functionality
In addition to crypto context registration, IPSec and WEP service requests, the IxCryptoAcc API
has a number of other features.
• A number of status definitions, useful for determining the cause of registration or
cryptographic processing errors.
• The ability to un-register a specific crypto context from the CCD.
• Two status and statistics functions are provided. These functions show information such as the
number of packets returned with operation failures, the number of packets
encrypted/decrypted/authenticated, and the current status of the queue (whether the queue is
empty or full, and the current queue length).
• The ability to halt the API.
The two following functions are used in specific situations that merit further explanation.
ixCryptoAccHashKeyGenerate()
This is a generic SHA-1 or MD5 hashing function that takes as input the specification of a basic
hashing algorithm, some data and the length of the digest output. There are several useful scenarios
for this function.
This function should be used in situations where an HMAC authentication key of greater than
64 bytes is required for a crypto context, and should be called prior to registering that crypto
context in the CCD. An initialization vector is supplied as input.
The function can also be used by SSL client applications as part of the SSL protocol MAC
generation, by supplying the record protocol data as input. ixCryptoAccHashPerform() can
perform this type of operation.
ixCryptoAccCtxCipherKeyUpdate()
This function is called to change the key value of a previously registered context. Key change for a
registered context is only supported for CCM cipher mode. This is done in order to quickly change
keys for CCM mode, without going through the process of context deregistration and registration.
Changes to the key lengths are not allowed for a registered context. This function should only be
used when invoking cryptographic operations with CCM as the cipher mode.
The client should make sure that there are no pending requests on the “cryptoCtxtId” for the key
change to happen successfully. If there are pending requests on this context the result of those
operations are undefined.
For contexts registered with other modes, the client should unregister and re-register a context for
the particular security association in order to change keys and other parameters.
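A minimal sketch follows, assuming ixCryptoAccCtxCipherKeyUpdate() takes the context ID and the new key material; the exact parameter list should be taken from IxCryptoAcc.h, and cryptoCtxId is the ID returned at registration.

    static void rekeyCcmContext(UINT32 cryptoCtxId, UINT8 *newKey)
    {
        /* Re-key a CCM-mode context without re-registration. The client
         * must first ensure no requests are pending on cryptoCtxId. */
        if (ixCryptoAccCtxCipherKeyUpdate(cryptoCtxId, newKey)
            != IX_CRYPTO_ACC_STATUS_SUCCESS)    /* assumed status name */
        {
            /* Update rejected, e.g., non-CCM context or key-length change. */
        }
    }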
7.3.8 Error Handling
IxCryptoAcc returns an error type to the client and the client is expected to handle the error.
Internal errors will be reported using an IxCryptoAcc-specific, error-handling mechanism listed in
IxCryptoAccStatus.
7.3.9 Endianness
This component supports both big-endian and little-endian modes.
7.3.10 Import and Export of Cryptographic Technology
Some of the cryptographic technologies provided by this software (such as 3DES and AES) may be
subject to both export controls from the United States and import controls worldwide. Where
local regulations prohibit, some described modes of operation may be disabled.
7.4 IPSec Services
This section describes the way that IxCryptoAcc is used in an IPSec usage model.
7.4.1 IPSec Background and Implementation
When deploying IPSec-related applications, the generalized architecture in Figure 30 is used. The
figure shows the scope and the roles played by the NPE and the IxCryptoAcc component in an
IPSec application.
Figure 30. IxCryptoAcc, NPE and IPSec Stack Scope
(Diagram: an original IP packet enters the IPSec client's scope, where a policy lookup is made
against the policy database and an SA lookup against the SA database (with SA database
management alongside). Packet processing then hands the packet to the cryptographic-protection
stage, which is driven by the crypto context database and falls within the hardware accelerator
(NPE) scope. IP fragmentation follows, yielding the IPSec'd packet.)
The IPSec protocol stack provides security for the transported packets by encrypting and
authenticating the IP payload. Before an IP packet is sent out to the public network, it is processed
by the IPSec application (the IxCryptoAcc and supporting components, in this scenario) to
encapsulate the IP packet into the ESP or AH packet format.
The information within the SA database that is required for the cryptographic protection is passed
in via the client to the hardware accelerator (in the Cryptographic Protection block). The client
looks up the crypto context policy and SA database to determine the mode of transporting packets,
the IPSec protocol (ESP or AH), and so on. The client determines use of the transport or tunnel mode
from the registered security context. The mode is transparent to the hardware accelerator and the
IxCryptoAcc component.
The client processes the IP packet into ESP- or AH-packet format, the IP packet is padded
accordingly (if ESP is chosen), and the IP header mutable fields are handled (if AH). Then, based
on the SA information, the NPE executes cryptographic protection algorithms (encryption and/or
authentication). This is done regardless of whether transport or tunnel mode is used.
The client sends out the protected IP packet after the cryptographic protection is applied. If the IP
packet is too large in size, the client fragments the packet before sending.
Figure 31 shows the relationship of encryption and authentication algorithms within the IPSec
protocol.
Figure 31. Relationship Between IPSec Protocol and Algorithms
(Diagram: the ESP protocol uses both an encryption algorithm and an authentication algorithm;
the AH protocol uses an authentication algorithm only.)
7.4.2 IPSec Packet Formats
IPSec standards have defined packet formats. The authentication header (AH) provides data
integrity and the encapsulating security payload (ESP) provides confidentiality and data integrity.
In conjunction with the SHA1 and MD5 algorithms, both AH and ESP provide data integrity. The
IxCryptoAcc component supports both modes of authentication. The ICV is calculated
through SHA1 or MD5 and inserted into the AH packet or ESP packet.
In ESP authentication mode, the ICV is appended at the end of the packet, which is after the ESP
trailer if encryption is required.
Figure 32. ESP Packet Structure
(Diagram: an ESP packet comprises the Security Parameters Index (SPI), Sequence Number,
Payload Data (variable length), Padding (0-255 bytes), Pad Length, Next Header, and
Authentication Data (variable length). All fields except the Authentication Data are covered by
authentication; the Payload Data through the Next Header field are encrypted.)
In AH mode, the ICV value is part of the authentication header. AH is embedded in the data to be
protected. This results in AH itself being included in the ICV calculation, which means the
authentication data field (ICV value) must be cleared before executing the ICV calculation. The
same applies to ICV verification — the authentication data must be cleared before the ICV value is
calculated and compared with the original ICV value in the packet. If the ICV values don't match,
authentication fails.
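The sketch below illustrates that rule with plain C on a raw packet buffer. The buffer, offset, and length parameters are illustrative only; in practice the NPE performs the calculation, and the client merely supplies the ICV offset to the perform function, as noted next.

    #include <string.h>

    /* Illustrative only: prepare an AH packet for ICV calculation or
     * verification by clearing the authentication data field, since AH
     * itself is covered by the ICV calculation. */
    static void prepareAhForIcv(unsigned char *ahPacket, unsigned icvOffset,
                                unsigned char *savedIcv, unsigned icvLen)
    {
        /* For verification: keep the original ICV for later comparison. */
        memcpy(savedIcv, ahPacket + icvOffset, icvLen);

        /* Clear the ICV field before the hash runs over the packet. */
        memset(ahPacket + icvOffset, 0, icvLen);

        /* After the authentication check completes, compare the recomputed
         * ICV against savedIcv; a mismatch means authentication fails. */
    }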
The NPE determines where to insert the ICV value, based on the ICV offset specified in the perform
function.
Figure 33. Authentication Header
(Diagram: the authentication header comprises Next Header, Payload Length, a reserved field,
the Security Parameters Index (SPI), Sequence Number, and Authentication Data (variable length).)
7.4.2.1 Reference ESP Dataflow
Figure 34 shows an example data flow for an IP Security environment. Transport-mode ESP is used
in this example. The IP header is not indicated in the figure.
The IP header is located in front of the ESP header, while the plain text is the IP payload.
Figure 34. ESP Data Flow
(Diagram: the application passes plain text to the IPSec client, which adds the ESP header and
trailer and issues an encrypt-and-authenticate request (SA_ID, ...) through the access component
on the Intel XScale core. The NPE then performs the encryption operation, replacing the plain text
with cipher text, followed by the forward authentication operation, which appends the ESP
authentication data.)
7.4.2.2 Reference AH Dataflow
Figure 35 shows an example data flow for an IP Security environment. Transport-mode AH is used
in this example. The IPSec client handles the IP header mutable fields.