ERICSSON PBM 990 08/1MQ User Manual

October 1998
PBM 990 08/1MQ
ATM Multi Service Chip

Description

The ATM Multi Service Chip is a cost-efficient solution for multi service access applications. It is especially suited for ADSL, VDSL, FTTx and HFC applications, where all services are transported over ATM.
The chip interfaces to the modem/transceiver chip set and distributes data to and from the different service interfaces. The integrated services are ATM Forum 25.6 and POTS/ISDN. Other services, such as Ethernet, can be added via the Utopia interface.
Circuit emulation via AAL1 is performed for the POTS/ISDN service.
[Front-page diagram: ATM Multi Service Chip with ATMF 25.6 interfaces #1 and #2, a line transceiver, and circuit emulation with a PCM interface to POTS line circuitry.]

Key Features

Low power CMOS technology.
240-pin MQFP package.
Utopia level 2 interface to modem/transceiver.
Programmable VPI/VCI handling.
Upstream QoS handling supporting up to 128 kBytes of external SRAM.
Generic CPU interface.
Support for ATM signalling and OAM via CPU.
Performance monitoring counters.
Two ATM Forum 25.6 interfaces.
Utopia level 1/2 interface for additional external services.
POTS/ISDN over ATM via AAL1; PCM interface for up to 4 structured 64 kbps channels, or E1/T1 interface for one unstructured 2048/1544 kbps channel.
Figure 1. Block diagram. (The Modem/Transceiver connects over Utopia 2 to the ATM Core, which uses a QoS buffer with external Memory and a cell buffer; a Utopia 1/2 port attaches an external service such as a 10Base-T E/N transceiver; the CPU block with its Utopia buffer connects to the CPU bus.)

Functional description

General

The Multi Service Chip handles the distribution of ATM traffic in multi service access applications, and is especially suited for Network Terminals (NT). All functions are implemented in hardware only. The chip has a Utopia level 2 interface to the modem/transceiver chip set and several different service interfaces. The service interfaces include two ATMF 25.6 Mbps interfaces, a PCM interface that supports circuit emulation for four structured 64 kbps channels, and a Utopia level 2 (or level 1) interface. The PCM interface can also be configured as a digital E1/T1 interface which supports circuit emulation for one unstructured 2048/1544 kbps channel.
By setting up ATM connections via the CPU interface, the Multi Service Chip will distribute data traffic between the different service interfaces and the modem/transceiver interface. Since all functions are implemented in hardware only, the Multi Service Chip can handle very high bandwidth in both the downstream direction (from the modem/transceiver interface to the service interfaces) and the upstream direction (from the service interfaces to the modem/transceiver interface).
In the upstream direction, data might arrive on the tributary interfaces at a higher bit rate than the modem/transceiver can handle. Therefore the Multi Service Chip has an interface to an external SRAM for temporary storage of upstream data. The interface supports SRAM sizes up to 128 kBytes, which can be divided into 4 different buffer areas. This enables support for different service classes.
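As an illustration, dividing the external SRAM into the four buffer areas can be sketched as below. This is a minimal sketch: the function and any size checks are illustrative and not part of the chip's register interface, which only fixes the 128 kByte limit and the four areas.

```c
#include <stddef.h>

#define SRAM_MAX_BYTES (128 * 1024)  /* 128 kBytes external SRAM limit */
#define NUM_QOS_AREAS  4             /* up to 4 buffer areas */

/* Partition the external SRAM into up to four QoS buffer areas.
 * Fills start[] with the byte offset of each area and returns 0,
 * or returns -1 if the requested sizes exceed the SRAM. */
int partition_qos_sram(const size_t size[NUM_QOS_AREAS],
                       size_t start[NUM_QOS_AREAS])
{
    size_t offset = 0;
    for (int i = 0; i < NUM_QOS_AREAS; i++) {
        if (size[i] > SRAM_MAX_BYTES - offset)
            return -1;               /* total exceeds 128 kBytes */
        start[i] = offset;
        offset += size[i];
    }
    return 0;
}
```

Unequal area sizes are the point of the partition: a latency-sensitive service class can be given a small, high-priority area while bulk traffic gets the remainder.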
Beside the connections between the modem/transceiver interface and the service interfaces, it is also possible to set up ATM connections between the CPU and both the modem/transceiver interface and the service interfaces. This makes it possible to give the NT an active role in signalling and OAM.

ATM Core

The ATM Core is the central block of the Multi Service Chip. It handles the distribution of ATM cells between the modem/transceiver interface (also referred to as the aggregate interface) and the internal interface to the integrated service devices (also referred to as the tributary interface). This block handles the VPI/VCI routing and translation, and has a set of VPI/VCI tables that must be configured by the CPU. The structure is shown in Figure 2.
Figure 2. Structure of ATM Core. (Tributary (Service) side: Tx_Utopia and Rx_Utopia with Mux, VPI/VCI demux and translation, VPI/VCI configuration and loop back; EPD precedes the QoS1-QoS4 buffers toward the aggregate (Modem) Tx_Utopia, and the aggregate Rx_Utopia feeds the downstream demux; the CPUuw, CPUur, CPUdw and CPUdr buffers connect to the CPU bus.)
Data flows
In the downstream direction, ATM cells are distributed from the aggregate interface to either one of the tributary services, the CPU buffer, or the aggregate loop-back buffer, from which they will be sent back upstream. The destination of the cells is determined by the VPI/VCI tables.
When no cells are distributed to the tributary interface, cells can be read from either the CPUdw buffer or the tributary loop-back buffer. No VPI/VCI handling is performed for these cells. The CPUdw buffer is divided into three separate buffers, where each is associated with one of the ATMF and Utopia blocks. The loop-back buffer can be associated with any of the service blocks.
In the upstream direction, ATM cells are distributed from the tributary interface to either one of the QoS buffers, the CPUur buffer, or the tributary loop-back buffer, from which they will be sent back downstream. The destination of the cells is determined by the VPI/VCI tables. As for the downstream direction, the CPUur buffer is divided into three separate buffers, where each is associated with one of the ATMF and Utopia blocks. Besides the tributary interface, cells can also be read from the CPUuw buffer and sent to one of the QoS buffers. No VPI/VCI handling is performed for these cells. The CPUuw buffer can be associated with any of the QoS buffers.
Cells are distributed from the QoS buffers and the aggregate loop-back buffer to the aggregate interface. The QoS buffers can be configured to have different priorities. As an example, all four QoS buffers can be associated with only one channel at the aggregate interface, with four different priorities. It is also possible to associate two buffers with one channel and two with another, or to associate each buffer with a channel of its own.
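One way to model the prioritized reading of the QoS buffers is a strict-priority selection, sketched below. This is an assumption for illustration; the datasheet states only that the priorities are configurable, not the exact scheduling discipline.

```c
#define NUM_QOS_BUFFERS 4

/* Strict-priority selection among the four QoS buffers.
 * fill[i] is the number of cells waiting in QoS buffer i, and
 * prio[] lists buffer indices from highest to lowest priority.
 * Returns the index of the buffer to read next, or -1 if all
 * buffers are empty. */
int next_qos_buffer(const unsigned fill[NUM_QOS_BUFFERS],
                    const int prio[NUM_QOS_BUFFERS])
{
    for (int p = 0; p < NUM_QOS_BUFFERS; p++)
        if (fill[prio[p]] > 0)
            return prio[p];
    return -1;                       /* nothing to send: idle */
}
```

With all four buffers mapped to one aggregate channel, prio[] realizes the four-priority example in the text; mapping pairs of buffers to two channels simply means running this selection per channel.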

VPI/VCI handling

The ATM Core handles VPI/VCI routing and translation of upstream and downstream cells separately. This means that the CPU must set up one set of connections for the downstream direction and another for the upstream direction. For each direction, a maximum of 128 simultaneous connections is supported.
In the downstream direction, the VPI/VCI tables support the 4 least significant bits of the VPI, which gives a VPI range from 0 to 15. In addition to this, the 8 most significant bits of the VPI are defined in a separate register. This means that the VPI can have different ranges in steps of 16, e.g. 0-15, 16-31, ..., 4080-4095. All cells with VPI values outside the chosen VPI range will be discarded. Beside these 16 VPs, a broadcast VP can also be set up by a separate register. Cells with a VPI that corresponds to this register will be sent to the CPU.
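The downstream VP filtering described above can be sketched as follows. The register names follow Figures 3 and 4 (Range[11:4], Broadcast[11:0]); the function itself and the destination encoding are illustrative assumptions.

```c
enum vp_dest { VP_DISCARD, VP_TABLE, VP_CPU };

/* Downstream VP filter sketch: the VPI/VCI tables cover VPI bits
 * [3:0]; bits [11:4] must match the Range register or the cell is
 * discarded; a full 12-bit Broadcast register routes matching
 * cells to the CPU. */
enum vp_dest vp_filter(unsigned vpi, unsigned range_hi, unsigned broadcast)
{
    if (vpi == broadcast)
        return VP_CPU;               /* broadcast VP goes to the CPU */
    if ((vpi >> 4) != range_hi)
        return VP_DISCARD;           /* outside the chosen VPI range */
    return VP_TABLE;                 /* look up VPI[3:0] in the tables */
}
```

With range_hi = 0x01, for example, the accepted VPI range is 16-31, matching the "steps of 16" rule in the text.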
In the upstream direction, the VPI/VCI tables also support the 4 least significant bits of the VPI, which gives a VPI range from 0 to 15.
For both the upstream and downstream direction, the VPI/VCI tables support the 8 least significant bits of the VCI, which gives a VCI range from 0 to 255.
Any of the 128 connections for each direction can be configured as either a VP cross connection (VPC) or a VC cross connection (VCC). VPC means that only the VPI determines the destination of the cell, and the VCI is transparent. In this case only the VPI is translated. VCC means that the VCI also determines the destination, and in this case both VPI and VCI are translated. The VPI/VCI handling is shown in Figure 3 and Figure 4.
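The VPC/VCC translation rule can be sketched as below. The struct layout and function are hypothetical models of one of the 128 per-direction table entries, not the chip's actual table format.

```c
#include <stdbool.h>

/* Hypothetical model of one per-direction connection table entry
 * (the datasheet specifies 128 entries per direction). */
struct atm_conn {
    bool     is_vcc;           /* true: VCC, false: VPC */
    unsigned in_vpi, in_vci;   /* matched values (VPI[3:0], VCI[7:0]) */
    unsigned out_vpi, out_vci; /* translated values */
};

/* Translate a cell header through one connection entry.
 * For a VPC only the VPI is translated and the VCI passes through;
 * for a VCC both VPI and VCI are translated. Returns false when
 * the cell does not match the entry. */
bool translate(const struct atm_conn *c, unsigned *vpi, unsigned *vci)
{
    if (*vpi != c->in_vpi)
        return false;
    if (c->is_vcc && *vci != c->in_vci)
        return false;
    *vpi = c->out_vpi;
    if (c->is_vcc)
        *vci = c->out_vci;     /* VCI is transparent for a VPC */
    return true;
}
```

The VPC case corresponds to Figure 3 (VPI X mapped to Y, VCI untouched), the VCC case to Figure 4 (both fields rewritten).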
Figure 3. VP cross connection through ATM Core. (A tributary cell with VPI[3:0] = X is matched in the 128-entry demux and translation table and leaves as an aggregate cell with VPI[3:0] = Y; the VCI[7:0] is transparent except for OAM. The aggregate side applies the VP filter with the Range[11:4] and Broadcast[11:0] registers.)
Figure 4. VC cross connection through ATM Core. (A tributary cell with VPI[3:0] = 5, VCI[7:0] = 16 is matched in the 128-entry demux and translation table and leaves as an aggregate cell with translated VPI and VCI values; all VCs must be set up except for OAM. The aggregate side applies the VP filter with the Range[11:4] and Broadcast[11:0] registers.)

Operation and Maintenance (OAM) handling

For all connections that are set up through the ATM Core, F4 and F5 OAM cells will be sorted out and sent to the CPU automatically. For VPCs, only the F4 segment and end-to-end cells are sorted out. For VCCs, the F4 cells are handled just like for VPCs, but the F5 segment cells are also sorted out.
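A sketch of this sorting, using the standard ATM OAM conventions (ITU-T I.610): F4 flows are carried on VCI 3 (segment) and VCI 4 (end-to-end) within a VP, and F5 flows are marked by PTI 100b (segment) and 101b (end-to-end). The datasheet does not spell out these code points, so their use here is an assumption; the function itself is an illustration of the hardware sorting, not an API of the chip.

```c
#include <stdbool.h>

/* F4 OAM: VCI 3 = segment, VCI 4 = end-to-end (ITU-T I.610). */
bool is_f4_oam(unsigned vci) { return vci == 3 || vci == 4; }

/* F5 OAM: PTI 100b = segment, PTI 101b = end-to-end. */
bool is_f5_segment(unsigned pti) { return pti == 4; }

/* Per the text: for a VPC only the F4 cells are sorted out to the
 * CPU; for a VCC the F5 segment cells are sorted out as well. */
bool sort_to_cpu(bool is_vcc, unsigned vci, unsigned pti)
{
    if (is_f4_oam(vci))
        return true;
    return is_vcc && is_f5_segment(pti);
}
```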

Performance monitoring

There are a number of counters in the ATM Core that can be used for performance monitoring, such as a downstream user cell counter, an upstream user cell counter, an EPD event counter and a Partial Packet Discard (PPD) event counter.
Since the OAM flow is terminated in the ATM Core, it is possible to re-generate it by letting the CPU create OAM cells and write them into the CPU buffers for further downstream or upstream transportation.

Quality of Service (QoS) handling

QoS buffering is performed for the upstream direction using an external SRAM with sizes up to 128 kBytes. The buffer can be divided into 4 (or fewer) different areas, which can be configured to have different priorities when they are read at the aggregate interface. The size of each buffer area is configurable.
Any upstream ATM connection between the tributary interface and the QoS buffer might be subject to Early Packet Discard (EPD). For this, a threshold value per QoS buffer must be configured, against which the amount of data in the buffer is compared.
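EPD works per AAL5 packet, not per cell: if the buffer is over the threshold when a packet starts, every cell of that packet is dropped up to the packet boundary. The sketch below models that behaviour; the state struct and function are illustrative, assuming the packet boundary is signalled by the AAL5 "last cell" indication in the PTI field.

```c
#include <stdbool.h>

struct epd_state {
    bool discarding;  /* current packet is being dropped */
    bool in_packet;   /* a packet is in progress */
};

/* Decide whether to accept one upstream cell into a QoS buffer.
 * At the first cell of a new AAL5 packet the buffer fill is
 * compared with the configured threshold; if it is above, the
 * whole packet (up to and including the last cell) is discarded. */
bool epd_accept_cell(struct epd_state *s, bool last_cell,
                     unsigned buffer_fill, unsigned threshold)
{
    if (!s->in_packet) {             /* first cell of a new packet */
        s->discarding = buffer_fill > threshold;
        s->in_packet = true;
    }
    bool accept = !s->discarding;
    if (last_cell)
        s->in_packet = false;        /* packet boundary reached */
    return accept;
}
```

Discarding whole packets rather than arbitrary cells avoids sending truncated AAL5 packets upstream that would only be rejected at reassembly.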

ATMF 25.6 service

The two integrated ATMF 25.6 transceivers include all the TC and PMD layer functions as specified in ref[1]. They are optimized to be connected to the PE-67583 transformer from Pulse Electronics.
Since there are two ATMF devices in the Multi Service Chip, it is possible to set up broadcast connections to both ATMF devices. This means that such ATM cells will be accepted and distributed through both devices.
It is also possible to transfer timing information over the ATMF interfaces. In this case an 8 kHz reference clock must be provided from the modem/transceiver, as shown in the block diagram in Figure 5.
The functions in the data paths are described below. A number of performance monitoring counters are also included.

Transmit path

The ATM Core transmits cells to the ATMF device over the internal Utopia interface, which are stored in the two cell deep Tx FIFO.
The Tx Cell Processor reads the cells from the Tx FIFO and calculates a new HEC byte which is inserted in the cell header. When no cells are available in the Tx FIFO, idle cells are generated and transmitted.
The Framer performs scrambling and 4B5B encoding of the cell flow, and inserts a command byte (X_X or X_4) at the start of each cell. Each time a positive edge is detected on the 8 kHz reference clock (NET_REF_CLK), a timing information byte (X_8) is inserted in the data flow. Finally the data is serialized and NRZI encoded.
The analog Line Driver converts the data stream into a bipolar format and transmits it on the external outputs.
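The HEC byte that the Tx Cell Processor inserts is the standard ATM header checksum: a CRC-8 over the first four header octets with generator x^8 + x^2 + x + 1, XORed with the coset value 0x55 (ITU-T I.432). A reference computation, for illustration only:

```c
#include <stdint.h>

/* Compute the ATM HEC for the first four header octets:
 * CRC-8 with generator x^8 + x^2 + x + 1 (0x07), zero initial
 * value, then XOR with the coset 0x55 per ITU-T I.432. */
uint8_t atm_hec(const uint8_t header[4])
{
    uint8_t crc = 0;
    for (int i = 0; i < 4; i++) {
        crc ^= header[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc ^ 0x55;
}
```

For the standard idle cell header 00 00 00 01 this yields the well-known HEC value 0x52, which is why idle cells on the wire carry 00 00 00 01 52.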

Receive path

The analog data is passed through the Equalizer, where the high frequency noise is removed and the data is equalized in order to compensate for the line distortion.
The analog data is converted to a digital format in the Data & Clock Recovery block, which also recovers the receive clock from the data stream.
The Deframer aligns the cell stream by detecting the command bytes (X_X or X_4). All other command bytes are sorted out, and the data is 5B4B decoded and descrambled before it is forwarded. Cells with one or several faulty 5-bit symbols are discarded.
The Rx Cell Processor calculates the HEC value of the cell header and discards cells with a faulty header. Idle cells and physical OAM cells are also sorted out. The remaining correct data cells are stored in the two cell deep Rx FIFO.
Cells are read from the Rx FIFO by the ATM Core over the tributary Utopia interface.
Figure 5. ATMF transceiver block diagram. (External ports: ATMFx_TXD_X, ATMFx_TXD_Y, ATMFx_RXD_X, ATMFx_RXD_Y, ATMFx_EQ_A, ATMFx_EQ_B, ATMF_CLKT_32 and NET_REF_CLK. PMD blocks: Line Driver, Equalizer, and Data & Clock Recovery with the Tx and Rx clocks. TC blocks: Framer, Deframer, Tx Cell Processor, Rx Cell Processor, Tx FIFO, Rx FIFO, and Configuration & Status Registers, connected to the tributary Utopia and CPU interfaces.)