PROPRIETARY AND CONFIDENTIAL TO PMC-SIERRA, INC., AND FOR ITS CUSTOMERS’ INTERNAL USE
Released
Data Sheet
PMC-2000164 ISSUE 3 ENHANCED TT1™ SWITCH FABRIC
PMC-Sierra, Inc.
PM9311/2/3/5 ETT1™ CHIP SET
This document is the proprietary and confidential information of PMC-Sierra, Inc. Access to this information does not transfer or grant any right or license to use this intellectual property. PMC-Sierra will grant such rights only under a separate written license agreement.
Any product, process or technology described in this document is subject to intellectual property rights reserved by PMC-Sierra, Inc. and is not licensed hereunder. Nothing contained herein shall be construed as conferring by implication, estoppel or otherwise any license or right under any patent or trademark of PMC-Sierra, Inc. or any third party. Except as expressly provided for herein, nothing contained herein shall be construed as conferring any license or right under any PMC-Sierra, Inc. copyright.
Each individual document published by PMC-Sierra, Inc. may contain additional or other proprietary notices and/or copyright information relating to that individual document.
THE DOCUMENT MAY CONTAIN TECHNICAL INACCURACIES OR TYPOGRAPHICAL ERRORS. CHANGES ARE REGULARLY MADE TO THE INFORMATION CONTAINED IN THE DOCUMENTS. CHANGES MAY OR MAY NOT BE INCLUDED IN FUTURE EDITIONS OF THE DOCUMENT. PMC-SIERRA, INC. OR ITS SUPPLIERS MAY MAKE IMPROVEMENTS AND/OR CHANGES IN THE PRODUCT(S), PROCESS(ES), TECHNOLOGY, DESCRIPTION(S), AND/OR PROGRAM(S) DESCRIBED IN THE DOCUMENT AT ANY TIME.
THE DOCUMENT IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT.
ETT1, Enhanced TT1, TT1, and LCS are trademarks of PMC-Sierra, Inc.
Revision History

Issue 1, March 2000: Creation of document.

Issue 2, July 2000: Added HSTL Power Dissipation values, corrected programming constraints for EPP, added OOB, JTAG and PLL blocks to device dataflow diagrams, added heat sink information to Characteristics section, corrected EPP register section, corrected Frame Format tables, added correct bit definition for EPP Interrupt register, corrected diagram in Appendix C, added send and receive procedure for control packets, added ESOBLCP register explanation, corrected conditions in Table 57, corrected Scheduler Refresh Procedure, added new drawing for Crossbar data flow, corrected token explanation, corrected Figures 36 and 37, corrected spelling and formatting errors throughout document.

Issue 3, August 2001: Removed bit 17 from Scheduler Status Register, updated BP_FIFO information in Scheduler Control and Reset Register, corrected bottom view drawings for Scheduler and Crossbar, corrected signal description section in Scheduler and Crossbar for power pins, corrected tper3 and tpl3 in AC Electrical, added memory address information in EPP registers, corrected mechanical drawings, updated state diagram in Scheduler, added information about Scheduler PLL timing, updated initialization sequence for all chips, corrected Dataslice Signal Description for ibpen0, added bit 14 (13th and 14th Dataslice enable) to EPP Control register, updated TDM constraints, added Output TDM Queue Overflow to the EPP’s Interrupt register, updated EPP Output Backpressure / Unbackpressure Threshold register, updated LCS2 link synchronization in appendix, modified EPP Egress Control Packet Data Format table, modified ETT1 usage of LCS2 protocol section.
1 Functional Description

1.1 OVERVIEW
The ETT1™ Chip Set provides a crossbar-based switch core which is capable of switching cells between 32 ports, with each port operating at data rates up to 10 Gbit/s. This section describes the main features of the switch core and how cells flow through a complete system that is based on the ETT1 Chip Set.
This document often refers to port rates of OC-192c or OC-48c. The ETT1 Chip Set itself operates at a fixed cell rate of 25M cells per second per port and thus is unaware of the actual data rate of the attached link. So a switch might be 32 ports of OC-192c, or it could be 32 ports of 10 Gbit/s Ethernet; it is the internal cell rate that is determined by the ETT1 Chip Set, not the link technology.
1.1.1 ETT1 Switch Core Features
The ETT1 switch core provides the following features:
• 320 Gbit/s aggregate bandwidth - up to 32 ports of 10 Gbit/s bandwidth each
• Each port can be configured as 4 x OC-48c or 1 x OC-192c
• Both port configurations support four priorities of best-effort traffic for unicast and multicast data traffic
• TDM support for guaranteed bandwidth and zero delay variation with 10 Mbit/s channel resolution
• LCS™ protocol supports a physical separation of switch core and linecards up to 200 feet (70 m)
• Virtual output queues to eliminate head-of-line blocking on unicast cells
• Internal speedup to provide near-output-queued performance
• Cells are transferred using a credit mechanism to avoid cell losses due to buffer overrun
• In-band management and control via Control Packets
• Out-of-band management and control via a dedicated CPU interface
• Optional redundancy of all shared components for fault tolerance
• Efficient support for multicast with cell replication performed within the switch core
1.1.2 The Switch Core Model
The ETT1 Chip Set is designed to provide a switch core, not a complete packet switch system. A complete switch consists of one or more switch cores, together with a number of linecards. Each linecard connects to one port in the core. The linecard includes a physical interface (fiber, co-axial cable) to a transmission system such as SONET/SDH or Ethernet. The linecard analyzes incoming cells or packets and determines the appropriate egress port and priority. The linecard contains any cell/packet queues that are needed to allow for transient congestion through the switch.
The ETT1 switch core operates on fixed-size cells. If the linecard transmission system uses variable-length packets, or cells of a size different from those used in the core, then the linecard is responsible for performing any segmentation and reassembly that is needed. Figure 1 illustrates this generic configuration.
Figure 1. The Basic Components of a Switch Built Around the ETT1 Chip Set
[Figure: Linecards 0 through 31, each with a SONET/SDH or Ethernet interface, connect via the LCS Protocol to ports 0 through 31 of the ETT1 Switch Core; a CPU connects to the ETT1 Chip Set over the OOB Bus.]
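To make the segmentation step concrete, the sketch below shows how a linecard might dice a variable-length packet into fixed-size cells. It is an illustration only: the payload size and the send_cell() helper are invented for the example, and the actual ETT1 cell format is defined elsewhere in this data sheet.

    #include <stddef.h>

    #define CELL_PAYLOAD 64  /* hypothetical payload bytes per cell */

    /* Placeholder for the linecard's cell transmitter; 'last' marks the
     * final fragment so the far end can reassemble the packet. */
    extern void send_cell(const unsigned char *payload, size_t len, int last);

    /* Slice one variable-length packet into fixed-size cells. */
    void segment_packet(const unsigned char *pkt, size_t len)
    {
        while (len > CELL_PAYLOAD) {
            send_cell(pkt, CELL_PAYLOAD, 0);
            pkt += CELL_PAYLOAD;
            len -= CELL_PAYLOAD;
        }
        send_cell(pkt, len, 1);  /* final, possibly short, cell */
    }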
The ETT1 Chip Set has been designed to allow for up to 200 feet (70 meters) of physical separation between the ETT1 core and the linecard. The LCS™ (Linecard to Switch) protocol is used between the ETT1 core and the linecard to ensure lossless transfer of cells between the two entities. However, while the LCS protocol must be implemented, the physical separation is not mandatory; the linecard could reside on the same physical board as the ETT1 port devices.
The switch core has two main interfaces. One is the interface between the linecard and the core port, described in the LCS Protocol section.
The second interface is between the ETT1 devices and the local CPU. The ETT1 Chip Set requires a local CPU for configuration, diagnostics and maintenance purposes. A single CPU can control a complete ETT1 core via a common Out-Of-Band (OOB) bus. All of the ETT1 devices have an interface to the OOB bus. The OOB bus is described in Section 1.1.4 “The OOB (Out-Of-Band) Bus” on page 18.
1.1.3 The LCS Protocol
The Linecard-to-Switch (LCS™) protocol provides a simple, clearly defined interface between the linecard and the core. In this section we introduce LCS. There are two aspects to LCS:
• a per-queue, credit-based flow control protocol
• a physical interface
The LCS protocol provides per-queue, credit-based flow control from the ETT1 core to the linecard, which ensures that queues are not overrun. The ETT1 core has shallow (64-cell) queues in both the ingress and egress directions. These queues compensate for the latency between the linecard and the core. One way to think of these queues is simply as extensions of the queues within the linecards. The queues themselves are described further in Section 1.3 “Prioritized Best-Effort Queue Model” on page 30.
The LCS protocol is asymmetrical; it uses different flow control mechanisms for the ingress and egress flows. For the ingress flow, LCS uses credits to manage the flow of cells between the linecards and the ETT1 core. The core provides the linecard with a certain number of credits for each ingress queue in the core. These credits correspond to the number of cell requests that the linecard can send to the core. For each cell request that is forwarded to a given queue in the core, the linecard must decrement the number of credits for that queue. The core sends a grant (which is also a new credit) to the linecard whenever the core is ready to accept a cell in response to the cell request. At some later time, which is dependent on the complete traffic load, the cell will be forwarded through the ETT1 core to the egress port. In the egress direction, a linecard can send hole requests, requesting that the ETT1 core does not forward a cell for one cell time. The linecard can issue a hole request for each of the four best-effort unicast or multicast priorities. If the linecard continually issued hole requests at all four priorities, then the ETT1 core would not forward any best-effort traffic to the linecard.
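The ingress credit discipline can be summarized in a few lines. The sketch below is a minimal software model, assuming an illustrative queue count and function names; in a real system this accounting is kept by the linecard for every ingress queue in the core.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_QUEUES 128  /* illustrative per-port ingress queue count */

    /* One credit counter per ingress queue in the core, initialized
     * from the number of credits the core initially provides. */
    static uint8_t credits[NUM_QUEUES];

    /* Linecard side: a cell request may be sent only if a credit is held. */
    bool try_send_request(unsigned queue)
    {
        if (credits[queue] == 0)
            return false;      /* no credit: the request must wait */
        credits[queue]--;      /* one credit consumed per cell request */
        /* ... transmit the LCS cell request for 'queue' here ... */
        return true;
    }

    /* A grant from the core is also a new credit for that queue. */
    void on_grant(unsigned queue)
    {
        credits[queue]++;
        /* ... the linecard must now send the actual cell promptly ... */
    }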
The LCS protocol information is contained within an eight-byte header that is added to every cell.
The physical interface that has been implemented in the ETT1 Chip Set is based on a faster version of the Gigabit Ethernet Serdes interface, enabling the use of off-the-shelf parts for the physical link. This interface provides a 1.5 Gbit/s serial link that uses 8b/10b encoded data. Twelve of these links are combined to provide a single LCS link operating at 18 Gbaud, providing an effective data bandwidth that is in excess of an OC-192c link.
NOTE: The LCS protocol is defined in the “LCS Protocol Specification -- Protocol Version 2”, available from PMC-Sierra, Inc. This version of LCS supersedes LCS Version 1. Version 2 is first supported in the TT1 Chip Set with the Enhanced Port Processor device (also referred to as the ETT1 Chip Set) and will be supported in future PMC-Sierra products. The ETT1 implementation of the LCS protocol is described further in Section 1.6 “ETT1 Usage of the LCS Protocol” on page 64.
1.1.4 The OOB (Out-Of-Band) Bus
The ETT1 Chip Set requires a local CPU to perform initialization and configuration after the Chip Set is reset or if the core configuration is changed - perhaps new ports are added or removed, for example. The OOB bus provides a simple mechanism whereby a local CPU can configure each device.
Logically, the OOB bus provides a 32-bit address/data bus with read/write, valid and ready signals. The purpose of the OOB bus is for maintenance and diagnostics; the CPU is not involved in any per-cell operations.
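The data sheet defines only this logical bus; how a CPU drives it is system-dependent. Purely as a sketch, the helpers below assume a hypothetical memory-mapped bus controller exposing the address/data, read/write, valid and ready signals described above. Every register name and bit position here is an assumption made for the illustration.

    #include <stdint.h>

    struct oob_ctrl {
        volatile uint32_t addr;    /* 32-bit target register address */
        volatile uint32_t wdata;   /* write data */
        volatile uint32_t rdata;   /* read data */
        volatile uint32_t ctrl;    /* bit 0: valid, bit 1: write (assumed) */
        volatile uint32_t status;  /* bit 0: ready (assumed) */
    };

    static void oob_write(struct oob_ctrl *c, uint32_t addr, uint32_t val)
    {
        c->addr  = addr;
        c->wdata = val;
        c->ctrl  = 0x3u;                  /* valid + write */
        while ((c->status & 0x1u) == 0)   /* wait for ready from the device */
            ;
    }

    static uint32_t oob_read(struct oob_ctrl *c, uint32_t addr)
    {
        c->addr = addr;
        c->ctrl = 0x1u;                   /* valid, read */
        while ((c->status & 0x1u) == 0)
            ;
        return c->rdata;
    }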
1.2 ARCHITECTURE AND FEATURES
1.2.1 ETT1 Switch Core
An ETT1 switch core consists of four types of entities: Port, Crossbar, Scheduler, and CPU/Clock. There
may be one or more instances of each entity within a switch. For example, each entity might be
implemented as a separate PCB, with each PCB interconnected via a single midplane PCB. Figure 2
illustrates the logical relationship between these entities.
Figure 2. ETT1 Switch Core Logical Interconnects
[Figure: linecards attach to ports; each port connects to the Crossbar, the Flow Control Crossbar, and the Scheduler; a CPU/clock entity connects to all of the devices.]
Each ETT1 port is attached to one or more linecards. The port contains the shallow cell queues and
implements the LCS protocol. The port and Scheduler exchange information about cells that are waiting to
be forwarded through the Crossbar core.
The Scheduler maintains local information on the number of cells that are waiting in all of the ingress and
egress queues. It arbitrates amongst all cells in the ingress queues, and instructs all of the ports as to
which cell they can forward through the Crossbar at each cell time.
Two Crossbars are used. The first, referred to as simply ‘the Crossbar’, interconnects all of the ports with all of the other ports, enabling cells to be forwarded from the ingress port queues to the egress port queues (in a different port). The Crossbar is reconfigured at every cell time to provide any non-blocking one-to-one or one-to-many mapping from input ports to output ports. Each Crossbar port receives its configuration information from its attached port; the Crossbars do not communicate directly with the Scheduler. The second Crossbar is the flow-control Crossbar. It passes output queue occupancy information from every egress port to every ingress port. The ingress ports use this information to determine when requests should and should not be made to the Scheduler.
The CPU/clock provides clocks and cell boundary information to every ETT1 device. It also has a local
CPU which can read and write state information in every ETT1 device via the OOB bus. The CPU/clock
entity is a necessary element of the ETT1 switch core, but does not contain any of the ETT1 devices.
1.2.2 Basic Cell Flow
The ETT1 Chip Set consists of four devices. Their names (abbreviations) are:
• Dataslice (DS)
• Enhanced Port Processor (EPP)
• Scheduler (Sched)
• Crossbar (Xbar)
This section describes how cells flow through these four devices.
Figure 3. Two Port Configuration of an ETT1 Switch
[Figure: the left-hand port's Input Dataslice and Input EPP, the right-hand port's Output Dataslice and Output EPP, the Crossbar, the Flow Control Crossbar and the Scheduler, with data cell flow and control flow arrows annotated with the step numbers 1-7 described below.]
Figure 3 shows a two-port configuration of an ETT1 switch. Only the ingress queues of the left-hand port and the egress queues of the right-hand port are shown. The port has one EPP and either six or seven DS devices. The DS contains the cell queue memory and also has the Serdes interface to the linecard. A single cell is “sliced” across all of the Dataslice devices, each of which can manage two slices. The EPP is the port controller, and determines where cells should be stored in the DS memories. Multiple Crossbar devices make up a full Crossbar. A single Scheduler device can arbitrate for the entire core.
A cell traverses the switch core in the following sequence of events:
1. A cell request arrives at the ingress port, and is passed to the EPP. The EPP adds the request to
any other outstanding cell requests for the same queue.
2. At some later time, the EPP issues a grant/credit to the source linecard, requesting an actual cell
for a specific queue. The linecard must respond with the cell within a short period of time.
3. The cell arrives at the ingress port and the LCS header is passed to the EPP. The EPP
determines the destination queue from the LCS header, and then tells the Dataslices where to
store the cell (each Dataslice stores part of the cell). The EPP also informs the Scheduler that a
new cell has arrived and so the Scheduler should add it to the list of cells waiting to be forwarded
through the Crossbar. The EPP modifies the LCS label by replacing the destination port with the
source port, so the egress port and linecard can see which port sent the cell.
4. The Scheduler arbitrates among all queued cells and sends a grant to those ports that can forward a cell. The Scheduler also sends a routing tag to each of the destination (egress) ports; this tag tells the ports which of the many source ports will be sending it a cell.
5. The source EPP sends a read command to the Dataslices, which then read the cell from the appropriate queue and send it to the Crossbar. At the same time, the destination ports send the routing tag information to the Crossbar. This routing tag information is used to configure the internal connections within the Crossbar for the duration of one cell time. The cell then flows through the Crossbar from the source port to the destination port.
6. The cell arrives at the destination port, and the EPP receives the LCS header of the cell. It uses this information to decide in which egress queue the cell should be stored. If this was a multicast cell which caused the egress multicast queue to reach its occupancy limit, then the EPP would send a congestion notification to the Scheduler.
7. At some later time, the EPP decides to forward the cell to the linecard. The EPP sends a read command to the Dataslices, which read the cell from memory and forward the cell out to the linecard. The egress EPP also sends flow control to the ingress EPP, informing it that there now exists free space in one or more of the egress EPP’s output queues. Also, if the transmitted cell was a multicast cell, then this may cause the egress queue to go from full to not full, in which case the EPP notifies the Scheduler that it (the EPP) can once again accept multicast cells.
The above description does not account for all of the interactions that can take place between ETT1
devices, but it describes the most frequent events. In general, users do not need to be aware of the
detailed interactions, however knowledge of the main information flows will assist in gaining an
understanding of some of the more complicated sections.
1.2.3 Prioritized Best-Effort Service
An ETT1 switch core provides two types of service. The first is a prioritized, best-effort service. The second provides guaranteed bandwidth and is described later.
The best-effort service is very simple. Linecards forward best-effort cells to the ETT1 core where they will be queued. The Scheduler arbitrates among the various cells; the arbitration algorithm has the dual goals of maximizing throughput while providing fair access to all ports. If more than one cell is destined for the same egress port, then the Scheduler will grant one of the cells and the others will remain in their ingress queues awaiting another round of arbitration. The service is best-effort in that the Scheduler tries its best to satisfy all queued cells, but in the case of contention some cells will be delayed.
The Scheduler supports four levels of strict priority for best-effort traffic. Level 0 cells have the highest priority, and level 3 cells have the lowest priority. A level 0 cell destined for a given port will always be granted before a cell of a different priority level, in the same ingress port, that is destined for the same egress port.
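The strict-priority rule is easy to model. In the toy sketch below (illustrative counters only, not device behavior), a grant pass always prefers the lowest-numbered non-empty priority level:

    #include <stdio.h>

    #define PRIORITIES 4

    static unsigned waiting[PRIORITIES];  /* queued cells per priority level */

    /* Grant one cell for this (port, cell time): level 0 always wins. */
    static int grant_one(void)
    {
        for (unsigned p = 0; p < PRIORITIES; p++) {
            if (waiting[p]) {
                waiting[p]--;
                return (int)p;
            }
        }
        return -1;  /* nothing queued for this egress port */
    }

    int main(void)
    {
        waiting[2] = 1;  /* a level 2 cell is waiting */
        waiting[0] = 1;  /* a level 0 cell arrives for the same egress port */
        printf("granted level %d\n", grant_one());  /* prints 0 */
        printf("granted level %d\n", grant_one());  /* prints 2 */
        return 0;
    }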
A ‘flow’ is a sequence of cells from the same ingress port to the same egress port(s) at a given priority. Best-effort flows are either unicast flows (cells in the flow go to only one egress port), or multicast flows (in which case cells can go to many, even all, of the egress ports).
1.2.4 End-to-End Flow Control
The full queueing and flow control model is shown in Figure 4.
NOTE: Credits or backpressure are used at every transfer point to ensure that cells cannot be lost due to lack of buffer space.
Figure 4. Queueing and Flow Control
[Figure: cells flow from the linecard ingress queues, through the ETT1 port ingress queues and the ETT1 Crossbar, to the ETT1 port egress queues and on to the linecard egress queues. LCS credits flow back from the ETT1 port to the linecard; ETT1 backpressure/credits flow back across the core; hole requests flow from the egress linecard to the ETT1 port.]
1.2.5 TDM Service
The ETT1 TDM service provides guaranteed bandwidth and zero cell delay variation. These properties, which are not available from the best-effort service, mean that the TDM service might be used to provide an ATM CBR service, for example. The ETT1 core provides the TDM service at the same time as the best-effort service, and TDM cells integrate smoothly with the flow of best-effort traffic. In effect, the TDM cells appear to the Scheduler to be cells of the highest precedence, even greater than level zero best-effort multicast traffic.
The TDM service operates by enabling an ETT1 port (and linecard) to reserve the crossbar fabric at some specified cell time in the future. The Scheduler is notified of this reservation and will not schedule any best-effort cells from the ingress port or to the egress port during that one cell time. Each port can make separate reservations according to whether it will send and/or receive a cell at each cell time.
Several egress ports may receive the same cell from a given ingress port; therefore, the TDM service is
inherently a multicast service.
In order to provide a guaranteed bandwidth over a long period of time, an ingress port will want to repeat
the reservations on a regular basis. To support this, the ETT1 core uses an internal construct called a TDM Frame. A TDM Frame is simply a sequence of ingress and egress reservations. The length (number of cell times) of a TDM Frame is configurable up to a maximum of 1024 cells. The TDM Frame repeats after a certain fixed time. All ports are synchronized to the start of the TDM Frame, and operate cell-synchronously with respect to each other. Thus, at any cell time, every ETT1 port knows whether it has made a reservation to send or receive a TDM cell. See the application note “LCS-2 TDM Service in the ETT1 and TTX Switch Core”, available from PMC-Sierra, Inc.
Figure 5 illustrates the idea of a TDM Frame. The TDM Frame has N slots (N is 1024 or less), where each slot is one 40 ns cell time. The TDM Frame is repeated continuously, but adjacent TDM Frames are separated by a guard band of at least 144 cells.
Figure 5. The TDM Frame Concept
[Figure: one TDM Frame of N slots, each slot one 40 ns cell time, numbered 1 through N, followed by a guard band before the next frame.]
All of the ETT1 ports must be synchronized with respect to the start of the TDM Frame. This synchronization information is distributed via the Scheduler. The Scheduler can generate the synchronization pulses itself, or it can receive “Suggested Sync” pulses from an ETT1 port, which can in turn receive synchronization signals from a linecard. The linecards do not need to be exactly synchronized to either the ETT1 core or the other linecards. The LCS protocol will compensate for the asynchrony between the linecard and the ETT1 core.
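Conceptually, a TDM Frame is just a per-port reservation table indexed by slot. The sketch below models that idea in software (the guard band between frames is ignored for brevity); it is not a device register layout, and all names are invented for the illustration.

    #include <stdbool.h>
    #include <stdint.h>

    #define TDM_MAX_SLOTS 1024  /* maximum configurable frame length */

    struct tdm_frame {
        uint16_t length;           /* N: slots in this frame, <= 1024 */
        bool send[TDM_MAX_SLOTS];  /* this port transmits in slot i */
        bool recv[TDM_MAX_SLOTS];  /* this port receives in slot i */
    };

    /* Every port, being frame-synchronized, can derive its role for the
     * current cell time from the slot index alone. */
    static bool port_sends_now(const struct tdm_frame *f, uint32_t cell_time)
    {
        return f->send[cell_time % f->length];
    }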
1.2.6 Subport Mode (2.5 Gbit/s Linecards)
An ETT1 switch core can have up to 32 ports. Each port supports a linecard bandwidth in excess of 10 Gbit/s. This bandwidth might be used by a single linecard (for example, OC-192c), or the ETT1 port can be configured to share this bandwidth among up to four ports, each of 2.5 Gbit/s. This latter mode, specifically four 2.5 Gbit/s linecards, is referred to as subport mode, or sometimes as quad OC-48c mode.
A single ETT1 switch can have some ports in ‘normal’ mode, and some in subport mode; there is no restriction. The four levels of prioritized best-effort traffic are available in all configurations. However, there are some important differences between the two modes. One difference is that the LCS header must now identify the subport associated with each cell. Two bits in the LCS label fields are used for this purpose, as described below.
A second difference is that the EPP must carefully manage the output rate of each subport, so as not to
overflow the buffers at the destination linecard, and this is also described below.
A third difference is that the EPP must maintain separate LCS request counters for each of the subports. It must also maintain separate egress queues. So the number of queues can increase four-fold in order to preserve the independence of each subport. Section 1.3 “Prioritized Best-Effort Queue Model” on page 30 describes the various queuing models in great detail.
1.2.6.1 Identifying the Source Subport
The EPP manages a single physical stream of cells at 25M cells/second. In subport mode, the EPP must look at each incoming cell and determine which subport has sent the cell. The LCS label field is used to achieve this. Two bits within the label (referred to in the LCS Specification as MUX bits) are used to denote the source subport, numbered 0 through 3. The MUX bits must be inserted in the LCS label before the cell arrives at the EPP. The MUX bits might be inserted at the source linecards themselves. Alternatively, they might be inserted by a four-to-one multiplexer device placed between the subport linecards and the EPP. In this latter case, the multiplexer device might not be able to re-calculate the LCS CRC. For maximum flexibility, the EPP can be configured to calculate the LCS CRC either with or without the MUX bits.
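For illustration, recovering the source subport from the two MUX bits might look like the sketch below. The bit position is an assumption made for the example; the LCS Protocol Specification defines where the MUX bits actually sit within the label.

    #include <stdint.h>

    #define MUX_SHIFT 10u                  /* assumed MUX bit position */
    #define MUX_MASK  (0x3u << MUX_SHIFT)  /* two MUX bits */

    /* Returns the source subport, 0 through 3. */
    static unsigned source_subport(uint32_t lcs_label)
    {
        return (lcs_label & MUX_MASK) >> MUX_SHIFT;
    }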
1.2.6.2 Egress Cell Rate
Within the EPP is an Output Scheduler process which is different from, and should not be confused with, the ETT1 Scheduler device. At every OC-192 cell time, the Output Scheduler looks at the egress queues and decides which cell should be forwarded to the attached egress linecard(s). In subport mode, the Output Scheduler will constrain the egress cell rate so as not to overflow any of the 2.5 Gbit/s links. It does this by operating in a strict round-robin mode, so that at any OC-192 cell time it will only try to send a cell for one of the subports. The four 2.5 Gbit/s subports are labeled 0 through 3; at some cell time the Output Scheduler will only try to send a cell from the egress queues associated with subport 0. In the next cell time, it will only consider cells destined for subport 1, etc. If, at any cell time, there are no cells to be sent to the selected subport, then an Idle (empty) cell is sent. For example, if subports 0 and 3 are connected to linecards but subports 1 and 2 are disconnected, then the Output Scheduler will send a cell to 0, then two empty cells, then a cell to 3, and then repeat the sequence. The effective cell rate transmitted to each subport will not exceed 6.25M cells per second.
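A minimal sketch of this round-robin discipline follows, using the same example of subports 0 and 3 connected; the occupancy flags and function names are invented for the illustration.

    #include <stdbool.h>
    #include <stdio.h>

    /* Subports 0 and 3 have cells queued; 1 and 2 are disconnected. */
    static bool has_cell[4] = { true, false, false, true };

    static void output_scheduler_tick(unsigned cell_time)
    {
        unsigned sp = cell_time % 4;  /* strict round-robin subport select */
        if (has_cell[sp])
            printf("cell time %u: cell to subport %u\n", cell_time, sp);
        else
            printf("cell time %u: idle cell\n", cell_time);
    }

    int main(void)
    {
        /* Prints: cell to 0, idle, idle, cell to 3, then repeats; each
         * subport is bounded at one cell in four (6.25M cells/s). */
        for (unsigned t = 0; t < 8; t++)
            output_scheduler_tick(t);
        return 0;
    }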
1.2.7 LCS Control Packets
The LCS protocol provides in-band control packets. These packets (cells) are distinct from normal cell
traffic in that they do not pass through the fabric to an egress linecard, but are intended to cause some
effect within the switch.
There are two classes of Control Packets. The first class, referred to as CPU Control Packets, is exchanged between the linecard and the ETT1 CPU (via the EPP and Dataslices). The intention is that CPU Control Packets form the basic mechanism through which the linecard CPU and the ETT1 CPU can exchange information. This simple mechanism is subject to cell loss, and so should be supplemented by some form of reliable transport protocol that would operate within the ETT1 CPU and the linecards.
The second class, referred to as LCS Control Packets, is used to manage the link between the linecard and the ETT1 port. These LCS Control Packets can be used to start and stop the flow of cells on the link, to provide TDM synchronization event information, and to recover from any grant/credit information that is lost if cells are corrupted in transmission.
1.2.7.1 Sending a Control Packet from the OOB to the Linecard
Before sending a CPU (OOB to linecard) control packet, the OOB must first write the control packet header and payload data into the appropriate locations in the Dataslice. (See Section 3.3 “Output Dataslice Queue Memory Allocation with EPP”, Table 23, on page 164.) The header for customer-specific CPU control packets should be written into the Dataslices as shown, but the payload data is completely up to the customer.
To send the CPU control packet which has been written into Dataslice queue memory, the OOB writes the ESOBLCP register with the control packet type select in bits [5:4] (see the bit-breakout), and a linecard fanout in bits [3:0]. If the port is connected to 4 subport linecards, then bits [3:0] are a subport bitmap. If the port is connected to one OC-192c linecard, then bit 0 must be set when the OOB wishes to send a CP. When the CP has been sent to the linecard(s) indicated in bits [3:0], bits [3:0] will read back as 0. Since control packets have higher priority than any other traffic type, they will be sent immediately, unless the EPP is programmed to send only idle cells.
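Expressed as code, the send sequence might look like the sketch below. The oob_write()/oob_read() helpers stand in for the CPU's OOB bus accessors, and the ESOBLCP register address is passed in rather than invented; only the bit-field layout comes from the description above.

    #include <stdint.h>

    extern void     oob_write(uint32_t addr, uint32_t val);
    extern uint32_t oob_read(uint32_t addr);

    /* Send a CPU control packet already staged in Dataslice queue memory.
     * cp_type goes in bits [5:4]; fanout in bits [3:0] is a subport bitmap
     * (or 0x1 for a single OC-192c linecard). */
    void send_oob_control_packet(uint32_t esoblcp_addr,
                                 uint32_t cp_type, uint32_t fanout)
    {
        oob_write(esoblcp_addr, ((cp_type & 0x3u) << 4) | (fanout & 0xFu));

        /* Bits [3:0] read back as 0 once the CP has gone to the linecard(s). */
        while (oob_read(esoblcp_addr) & 0xFu)
            ;
    }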
1.2.7.2 Sending a Control Packet from the Linecard to the OOB
The linecard sends control packets to the OOB using the regular LCS request/grant/cell mechanism. A CPU (linecard to OOB) control packet must have the CPU bit set in its request label. (See Section 1.1.3 “The LCS Protocol” on page 17.) When the EPP receives the cell payload for a CPU control packet, it stores the cell in the Dataslices’ Input Queue memories and raises the “Received LC2OOB/CPU Control Packet from Linecard...” interrupt. (In OC-192c mode, it will always be Linecard 0.)
The input queue for CPU control packets from each linecard is only 8 cells deep, so as soon as the OOB sees a “Received LC2OOB...” interrupt, it should read the appropriate “Subport* to OOB FIFO Status” register (0x80..0x8c). Bits [3:0] of that register will tell how many control packet cells are currently in the input queue; bits [6:4] will tell the offset of the head of the 8-cell CPU CP input queue. That queue offset should be used to form addresses for the Dataslices’ Input Queue memories. See Section 3.2 “Input Dataslice Queue Memory Allocation with EPP” on page 162, for Dataslice Input Queue memory addressing. Then the OOB should read those addresses to obtain the CPU CP payload data. When the OOB has read the CPU CP payload data, it should write the appropriate “Linecard * to OOB FIFO Status” register (any value). A write to that register, regardless of the write data, will cause the head of the queue to be dequeued, freeing up that space in the CPU CP input queue.
See Section 1.6.3 “Control Packets” on page 69 for more details.
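A rough sketch of that receive procedure follows. The register stride across the four status registers is inferred from the 0x80..0x8c range, the accessors are the same stand-ins as before, and read_cp_payload() is a placeholder for the Dataslice Input Queue reads of Section 3.2.

    #include <stdint.h>

    extern void     oob_write(uint32_t addr, uint32_t val);
    extern uint32_t oob_read(uint32_t addr);
    extern void     read_cp_payload(unsigned subport, unsigned queue_offset);

    /* Drain the 8-cell CPU CP input queue for one subport (0..3). */
    void drain_cpu_cp_fifo(unsigned subport)
    {
        uint32_t status_addr = 0x80u + 4u * subport;  /* assumed stride */
        uint32_t status = oob_read(status_addr);
        unsigned depth  = status & 0xFu;         /* bits [3:0]: cells queued */
        unsigned head   = (status >> 4) & 0x7u;  /* bits [6:4]: head offset */

        while (depth--) {
            read_cp_payload(subport, head);  /* fetch payload from Dataslices */
            oob_write(status_addr, 0);       /* any write dequeues the head */
            head = (head + 1) & 0x7u;        /* 8-cell circular queue */
        }
    }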
1.2.8 Redundancy
An ETT1 core can be configured with certain redundant (duplicated) elements. A fully redundant core is
capable of sustaining single errors within any shared device without losing or re-ordering any cells. This
section describes the main aspects of a redundant core. A complete switch may have two ETT1 cores,
with linecards dual-homed to both cores, and while this is a valid configuration the ETT1 core does not
provide any specific support for such a configuration.
Figure 6 shows a fully redundant core. The Crossbars and Scheduler are fully replicated, as are the links
connecting those devices. The port devices (EPP and Dataslices) are not replicated. It is important to
understand that if an EPP or Dataslice fails (or incurs a transient internal error), then data may be lost, but
only within that port.
Figure 6. Fully Redundant Core
[Figure: each port's EPP and Dataslices connect to both Crossbar 0 and Crossbar 1, both Flow Control Crossbar 0 and Flow Control Crossbar 1, and both Scheduler 0 and Scheduler 1. The replicated devices form the fault-tolerant region; the port devices are outside it.]
A fully redundant core operates in exactly the same way as a non-redundant core. Consider the EPP: it sends new request information to both Schedulers. The two Schedulers operate synchronously, producing identical outputs, and the information (grants) received by the EPP will be the same. The Dataslices operate in exactly the same way with their two Crossbar devices.