No claim to original U.S. Government works
Version Date: 20151201
International Standard Book Number-13: 978-1-4987-4683-0 (eBook - PDF)
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been
made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright
holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may
rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the
publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://
www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Summary
This book presents a comprehensive approach to networking, cable and wireless communications,
and networking security. It describes the most important state-of-the-art fundamentals and system
details in the field, as well as many key aspects concerning the development and understanding of
current and emergent services.
Three of the author's earlier books, Transmission Techniques for Emergent Multicast and
Broadcast Systems, Transmission Techniques for 4G Systems, and MIMO Processing for 4G and
Beyond: Fundamentals and Evolution, focused on the transition from 3G into 4G and 5G cellular
systems, including the fundamentals of multi-input and multi-output (MIMO) systems, and therefore, they spanned a wide range of topics. Another book by the author, Multimedia Communications and Networking, focused on networking.
In this book, the author gathers in a single volume his point of view on current and emergent
cable and wireless network services and technologies. Different bibliographic sources cover each
one of these topics independently, without establishing the natural relationships between the topics.
The advantage of the present work is twofold: on the one hand, it allows the reader to learn quickly,
thereby helping the reader to master the topics covered, providing a deeper understanding of their
interconnection; on the other hand, it collects in a single source the latest developments in the area,
which are generally only within reach of an active researcher, such as the author, with a committed
research career of several years and regular participation in conferences and international projects.
Each chapter illustrates the theory of cable and wireless communications with relevant examples, contains hands-on exercises suitable for readers with a BSc degree or an MSc degree in computer science or electrical engineering, and ends with review questions. This approach makes the
book well suited for higher education students in courses such as networking, telecommunications, mobile communications, and network security. Finally, the book serves as a good reference
book for academic, institutional, or industrial professionals with technical responsibilities in planning, design and development of networks, telecommunications and security systems, and mobile
communications, as well as for Cisco CCNA and CCNP exam preparation.
Laboratorial Introductory Notes
The lab exercises included in this book focus on three tools: the Emona Telecoms Trainer 101
(ETT-101), for which a dual-channel 20 MHz oscilloscope is required; the free network analyzer,
Wireshark; and the Cisco Packet Tracer network simulator.
Emona ETT-101 consists of a telecommunications modeling system that brings block diagrams
to life, with real hardware modules and real electrical signals, which are employed in this book to
demonstrate the theory about telecommunications. As alternatives to ETT-101, two other pieces
of laboratory equipment can be used: the Emona TIMS 301-C Telecommunications Teaching
System and the Emona net*TIMS Telecommunications Teaching System. Emona TIMS 301-C corresponds to ETT-101 with extended capabilities. Emona net*TIMS allows implementation of the
same experiments that TIMS 301-C does, but these can be built and controlled remotely by students
across a LAN or the Internet (multiple students can do their lab work at any time and from any
location in the world). Appendix VI lists experiments that can be implemented with Emona TIMS
(both 301-C and net*TIMS), indicating the chapters that discuss each experiment. The free network
analyzer, Wireshark, is used to demonstrate the theory on networking, namely signaling, message
formats, and network procedures. The Cisco Packet Tracer simulator is used to build networks,
to configure them, and to simulate their responses. Some chapters focus on telecommunications,
and therefore ETT-101 is used extensively. Other chapters focus on networking, and discuss the
utilization of Wireshark and Packet Tracer. Since the ETT-101 laboratory manual already
describes many experiments, the lab exercises presented in chapters on telecommunications simply
refer to the different ETT-101 experiments. In this case, the student should refer to the descriptions existing in the ETT-101 laboratory manual, namely Volume 1: Experiments in Modern Analog and Digital Telecommunications; Volume 2: Further Experiments in Modern Analog & Digital Telecommunications; and Volume 3: Advanced Experiments in Modern Analog & Digital Telecommunications.
Author
Mário Marques da Silva (marques.silva@ieee.org) is an associate
professor and the director of the Department of Sciences and
Technologies at Universidade Autónoma de Lisboa, Lisbon,
Portugal. He is also a researcher at Instituto de Telecomunicações in
Lisbon, Portugal. He received his BSc degree in electrical engineering in 1992, and MSc and PhD degrees in electrical and computer
engineering (telecommunications) in 1999 and 2005, respectively,
both from Instituto Superior Técnico, University of Lisbon, Portugal.
From 2005 to 2008, he was with the NATO Air Command
Control and Management Agency in Brussels, Belgium, where he
managed the deployable communications of the new Air Command
and Control System Program. He has been involved in multiple networking and telecommunications projects. His research interests
include networking and mobile communications, namely Internet protocol (IP) technologies and
network security, block transmission techniques, interference cancellation, MIMO systems, and
software-defined radio. He is also a Cisco Certified Network Associate (CCNA) instructor.
He is the author of four books published by CRC Press, Multimedia Communications and
Networking, Transmission Techniques for Emergent Multicast and Broadcast Systems, Transmission
Techniques for 4G Systems, and MIMO Processing for 4G and Beyond: Fundamentals and
Evolution. He has authored dozens of journal and conference papers, is a member of IEEE and
AFCEA, and has been a reviewer for a number of international scientific IEEE journals and conferences. He has also chaired many conference sessions and has served on the organizing committees of relevant EURASIP and IEEE conferences.
1 Introduction to Data Communications and Networking
LEARNING OBJECTIVES
· Describe the fundamentals of communications.
· Identify the key components of networks and communication systems.
· Describe different types of networks and communication systems.
· Identify the differences between a local area network (LAN), a metropolitan area
network (MAN), and a wide area network (WAN).
· Identify the different types of media and traffic.
· Define the convergence and the collaborative age of the network applications.
1.1 FUNDAMENTALS OF COMMUNICATIONS
Communication systems are used to enable the exchange of data between two or more entities (humans or
machines). As can be seen from Figure 1.1, data consists of a representation of an information source, whose
transformation is performed by a source encoder. An example of a source encoder is a thermometer,
which converts temperatures (information source) into voltages (data). A telephone can also be viewed
as a source encoder, which converts the analog voice (information source) into a voltage (data), before
being transmitted along the telephone network (transmission medium). In case the information source is
analog and the transmission medium is digital, a CODEC (COder and DECoder) is employed to perform
digitization. A VOCODER (VOice CODER) is a codec specific for voice, whose functionality consists of
converting analog voice into digital at the transmitter side, and the reciprocal at the receiver side.
The emitter of data consists of an entity responsible for the insertion of data into the communication system and for the conversion of data into signals. Note that signals are transmitted, rather than
data. Signals consist of an adaptation* of data, such that their transmission is facilitated in accordance with the transmission medium used. Similarly, the receiver is responsible for converting the
received signals into data.
The received signals correspond to the transmitted signals subject to attenuation and distortion, with added noise and interference. These channel impairments cause the received signal to differ from the transmitted one. In the case of analog signals, the resulting signal levels do not exactly reproduce the original information source. In the case of digital signals, the channel impairments produce corrupted bits. In both cases, these channel impairments degrade the signal-to-noise plus interference ratio (SNIR).² A common performance indicator
* Signals can be, for example, a set of predefined voltages that represent bits used in transmission.
² In linear units, the SNIR is mathematically given by SNIR = S/(N + I), where S stands for the power of the signal, N expresses the power of noise, and I denotes the power of interferences. For the sake of simplicity, the SNIR is normally only referred to as SNR (signal-to-noise ratio), but where the interference is also taken into account (in this case N stands for the power of noise and interferences). Furthermore, both SNIR (or SNR) are normally expressed in logarithmic units as SNIR(dB) = 10 log10[S/(N + I)].
FIGURE 1.1 Generic block diagram of a communication system.
of digital communication systems is the bit error rate (BER). This corresponds to the number of corrupted bits divided by the total number of transmitted bits over a certain time period.
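As a minimal illustration of these two indicators, the short Python sketch below computes the SNIR in decibels, following the formula given in the footnote, and the BER as the ratio of corrupted to transmitted bits; the numeric values are invented for the example.

```python
import math

def snir_db(signal_power, noise_power, interference_power):
    """Linear SNIR converted to decibels: 10*log10(S / (N + I))."""
    return 10 * math.log10(signal_power / (noise_power + interference_power))

def bit_error_rate(corrupted_bits, total_bits):
    """BER: corrupted bits divided by the total number of transmitted bits."""
    return corrupted_bits / total_bits

# Illustrative values only (not taken from the text):
print(snir_db(signal_power=2.0, noise_power=0.01, interference_power=0.01))  # ~20 dB
print(bit_error_rate(corrupted_bits=3, total_bits=1_000_000))                # 3e-06
```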
A common definition associated with information is knowledge. It consists of a person's ability
to have access to the right information, at the right time. The conversion between information and
knowledge can be automatically performed using information systems, whereas information can be
captured by sensors and distributed using communication systems.
1.1.1 Analog and Digital Signals
Analog signals present a continuous amplitude variation over time. An example of an analog signal
is voice. Contrarily, digital signals present amplitude discontinuities (e.g., voltages or light pulses).
Examples of digital data include the bits* generated in a workstation. Text is another example
of digital data. Figure 1.2 depicts examples of analog and digital signals.
Digital signals present several advantages (relative to analog signals), such as the following:
· Error control is possible in digital signals: corrupted bits can be detected and/or corrected.
· Because they present only two discrete values, the consequences of channel impairments
can be more easily detected and avoided (as compared to analog signals).
· Digital signals can be regenerated, almost eliminating the effects of channel impairments.
Contrarily, the amplification process of analog signals results in the amplification of signals, noise, and interferences, keeping the SNR unchanged.²
· The digital components are normally less expensive than the analog ones.
· Digital signals facilitate cryptography and multiplexing.
· Digital signals can be used to transport different sources of information (voice, data, multimedia, etc.) in a transparent manner.
FIGURE 1.2 Example of (a) analog and (b) digital signals.
* With logic states 0 or 1.
² In fact, the amplification process results even in a degradation of the SNR, as it adds the amplifier's internal noise to the signal at its input. This subject is detailed in Chapter 3.
However, digital signals present an important disadvantage:
· For the same information source, the bandwidth required to accommodate a digital signal
is typically higher than the analog counterpart.* This results in a higher level of attenuation
and distortion.
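As a rough, worked illustration of this bandwidth penalty, the classic telephony figures cited in the footnote can be reproduced with a short calculation; the sampling rate and resolution below are the standard PCM values for voice, shown only for illustration.

```python
# Classic telephony PCM figures, used here purely for illustration:
sampling_rate_hz = 8_000     # samples per second (above twice the 3.4 kHz voice band)
bits_per_sample = 8          # quantization resolution
pcm_bit_rate = sampling_rate_hz * bits_per_sample
print(pcm_bit_rate)          # 64000 bps, i.e., the 64 kbps mentioned in the footnote
```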
1.1.2 Modulator and Demodulator
As can be seen from Figure 1.3, when the source (e.g., a computer) generates a digital stream of data
and the transmission medium is analog, a MODEM (MOdulator and DEModulator) is employed to
perform the required conversion. The modulator converts digital data into analog signals, whereas
the demodulator (at the receiver) converts analog signals into digital data. An example of an analog
transmission medium is radio transmission, whose signals consist of electromagnetic waves (presenting a continuous variation in time).
A modem (e.g., asynchronous digital subscriber line [ADSL] or cable modem) is responsible for
modulating a carrier wave with bits, using a certain modulation scheme.² The reverse of this operation is performed at the receiver side. Moreover, a modem allows sending a signal modulated around
a certain carrier frequency, which can be another reason for using such a device.
In case the data is digital and the transmission medium is also digital, a modem is normally not
employed, as the conversion between digital and analog does not need to be performed. In this case,
a line encoder/decoder (sometimes also referred to, although not accurately, as a digital modem) is employed. This device adapts the original digital data to the digital transmission medium,³ adapting parameters such as levels and pulse duration. Note that, using such a digital encoder, the signals are transmitted in the baseband.§
The output of a line encoder consists of a digital signal, as it comprises discrete voltages that
encode the source logic states. Consequently, it can be stated that the line encoder is employed when
the transmission medium is digital. On the other hand, the output of a modulator consists of an analog signal, as it modulates a carrier that is an analog signal.
In the case of a high data rate, the bandwidth necessary to accommodate such a signal is also high.¶ In this scenario, the medium may cause a high level of attenuation or distortion at the limit frequency components of the signal. In such a case, it can be a good choice to use a modem that
allows the modulation of the signal around a certain carrier frequency. The carrier frequency can be
carefully selected such that the channel impairments in the frequencies around it (corresponding to
the signal bandwidth) do not seriously degrade the SNR.
The reader should refer to Chapter 6 for a detailed description of the modulation schemes used
in modems, as well as for the description of digital encoding techniques.
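Even before that detailed treatment, the idea of a modulator can be made concrete with a sketch. The example below generates a binary phase shift keying (BPSK) waveform, one elementary form of the phase shift keying mentioned in the footnote; the carrier frequency, sampling rate, and samples per bit are arbitrary illustrative choices, not the parameters of any particular modem.

```python
import math

def bpsk_waveform(bits, carrier_hz=1_000.0, sample_rate_hz=8_000.0, samples_per_bit=8):
    """Map each bit to a carrier segment whose phase is 0 (bit 1) or pi (bit 0)."""
    samples = []
    for i, bit in enumerate(bits):
        phase = 0.0 if bit else math.pi
        for k in range(samples_per_bit):
            t = (i * samples_per_bit + k) / sample_rate_hz
            samples.append(math.cos(2 * math.pi * carrier_hz * t + phase))
    return samples

signal = bpsk_waveform([1, 0, 1, 1])
print(len(signal))  # 32 samples: an analog-looking waveform carrying 4 bits
```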
FIGURE 1.3 Generic communication system incorporating a modem.
* As an example, analog voice is transmitted in a 3.4 kHz bandwidth, whereas digital pulse code modulation (PCM) requires a bandwidth of 32 kHz (64 kbps).
² Using amplitude, frequency, or phase shift keying. Advanced modems make use of a combination of these elementary modulation schemes.
³ Using line codes such as return to zero, nonreturn to zero, and Manchester (as detailed in Chapter 6).
§ Instead of carrier modulated (bandpass), as performed by a modem.
¶ According to the Nyquist theorem, as detailed in Chapter 3.
1.1.3 Transmission Mediums
Transmission mediums can be classified as cable or wireless. Examples of cable transmission mediums include twisted pair cables, coaxial cables, multimode or single mode optical fiber cables, and so on.
In the past, LANs were made of coaxial cables. These cables were also used as a transmission medium for medium- and long-range analog communications. Although coaxial cables were
replaced by twisted pair cables in LANs, the massification of cable television enabled their reuse.
As a result of telephone cables, twisted pairs are still the dominant transmission medium in
houses and offices. These cables are often reused for data. With the improvement in insulators and copper quality, as well as with the development of shielding, the twisted pair has become
widely used for providing high-speed data communications, in addition to the initial use for analog
telephony.
Currently, multimode optical fibers have been increasingly installed in homes, allowing throughputs of the order of several gigabits per second (Gbps) to be reached. Moreover, single mode optical fibers
are the most used transmission medium in transport networks. A transport network consists of the
backbone (core) network, used for transferring large amounts of data among different main nodes. These main nodes are then connected to secondary nodes and finally connected to customer nodes.
A radio or wireless communication system is composed of a transmitter and a receiver, using
antennas to convert electric signals into electromagnetic waves and vice versa. These electromagnetic waves are propagated over air. Note that wireless transmission mediums can be either guided
or unguided. In the former case, directional antennas are used at both the transmitter and the receiver
sides, such that electromagnetic waves propagate directly from the transmit antenna to the receive antenna.
The reader should refer to Chapter 4 for a detailed description of cable transmission mediums,
while Chapter 5 introduces the wireless transmission mediums.
1.1.4 Synchronous and Asynchronous Communication Systems
Synchronous and asynchronous communications refer to the ability or inability to have information
about the start and end of bit instants.
Using asynchronous communications, the receiver does not achieve perfect time synchronization
with the transmitter, and the communication accepts some level of fluctuation. Consequently, start
and stop bits are normally included in a frame² in order to periodically achieve bit synchronization
of the receiver with the transmitter. Note that between the start and the stop bit, the receiver of
an asynchronous communication suffers from a certain amount of time shift. This periodic synchronization using start and stop bits is normally included as part of the functionalities
implemented by a modem, when establishing a communication in asynchronous mode of operation.
Normally, asynchronous communications do not accommodate high-speed data rates. They are
typically used for random (not continuous) exchange of data (at low rate).
On the other hand, synchronous communications consider a receiver that is bit synchronized
with the transmitter. This bit synchronization can be achieved using one of the following methods:
· By sending a clock signal multiplexed with the data or using a parallel dedicated circuit
· When the transmitted signal presents a high zero crossing rate, such that the receiver can
extract the start and end of bit instants from the received signal
Synchronous communications are normally employed in high-speed lines, and for the transmission
of large blocks of data. An example of a synchronous communication system is the synchronous
digital hierarchy (SDH) networks, used for the transport of large amounts of data in a backbone.
* Nevertheless, frame synchronization is required in either case.
² A group of exchanged bits.
1.1.5 Simplex and Duplex Communications
A simplex communication consists of a communication between two or more entities where the signals flow only in a single direction. In this case, one entity only acts as a transmitter and the other(s)
as a receiver. This can be seen from Figure 1.4. Note that the transmitter may be transmitting signals
to more than one receiver.
When the signals flow in a single direction at a time, alternating over time, it is said that the communication is half-duplex. Therefore, although both entities act simultaneously as a transmitter and
as a receiver (at different time instants), instantaneously, each host acts as either a transmitter or a
receiver [Stallings 2010]. The half-duplex communication is depicted in Figure 1.5.
Finally, when the communication is simultaneously in both directions, it is in full-duplex mode.
In this case, two or more entities act simultaneously as both a transmitter and a receiver. The full-duplex communication is depicted in Figure 1.6. Full-duplex communications normally require
two parallel transmission mediums (e.g., two pairs of wires): one for transmission and another for
reception.
FIGURE 1.4 Simplex communication.
FIGURE 1.5 Half-duplex communication.
FIGURE 1.6 Full-duplex communication.
1.1.6 Communications and Networks
A point-to-point communication establishes a direct connection (link) between two adjacent end
stations, between two adjacent network nodes (e.g., routers), or between an end station and an adjacent node.
A network can be viewed as a concatenation of point-to-point communications, composed of
several nodes and end stations, where each node is responsible for switching the data, such that
an end-to-end connection is established between two end stations. The examples of point-to-point
communications and of a network are depicted in Figure 1.7. An end-to-end network connection consists of a concatenation of several point-to-point links, where each of these links can be implemented using a different transmission medium (e.g., satellite or optical fiber).
A node of a network can be a router or a private automatic branch exchange (PABX). The former
device switches packets (packet switching), while the latter is responsible for physically establishing
permanent connections, such that a phone call between two end entities is possible (circuit switching). This subject is detailed in Section 1.2.
Depending on the number of destination stations of data involved in a communication, this can
be classified as unicast, multicast, or broadcast. Unicast stands for a communication whose destination is a single station. In case the destination of data is all the network stations, the communication
is referred to as broadcast. Very often broadcast communications are established in a single direction (i.e., there is no feedback from the receiver into the transmitter). Finally, when the destination
of the data is more than a single station, but less than all network stations, the communication is
referred to as multicast.
1.1.7 Switching Modes
1.1.7.1 Circuit Switching
Circuit switching establishes a permanent physical path between the source and the destination. This switching mode is used in classic telephone networks. Only after setup is a synchronous exchange of data allowed.
FIGURE 1.7 Examples of (a, b) a point-to-point communication and (c) a network.
This end-to-end path (circuit) is permanently dedicated until the connection ends.
The time to establish the connection is long, but, once established, the only delay is that due to the propagation speed of signals. This kind of switching is ideal for delay-sensitive communications, such as voice. If
the connection cannot be established because of lack of resources, it is said that the call was blocked,
but once established, congestion does not occur. All the bandwidth available is assigned to a certain
connection that, for long time periods, may not be used and, in other periods, may not be enough (e.g.,
if that connection is sending variable data rates). For this reason, it is of high cost. In telephone networks, switching is physically performed by operators using PABX. This consists of a switch whose
functionality is typically achieved using space and/or time switching. Space switching consists of
establishing a physical shunt between one input and one output. Because digital networks normally
incorporate multiplexed data into different time slots* (each telephone connection is transported in a
different time slot), there is a need to switch a certain time slot from one physical input into another
time slot of another physical output. This is performed by the time and space switching functionality
of a digital PABX. An example of a circuit switching (telephony) network is depicted in Figure 1.8.
1.1.7.2 Packet Switching
With the introduction of data services, the notion of packet switching has arrived. Packet switching
considers the segmentation of a message into parts, where each part is referred to as a packet (with fixed² or variable³ length). As can be seen from Figure 1.9, a digital message is composed of many bits, while a packet consists of a small number of these bits.
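A minimal sketch of this segmentation step is shown below; the fixed payload size and the sequence-number field are illustrative choices, not the format of any particular protocol.

```python
def segment_message(message_bits: str, payload_bits: int = 8):
    """Split a bit string into fixed-length packets, each tagged with a sequence number."""
    packets = []
    for seq, start in enumerate(range(0, len(message_bits), payload_bits), start=1):
        packets.append({"seq": seq, "payload": message_bits[start:start + payload_bits]})
    return packets

message = "11010010" * 4 + "010010110"
for packet in segment_message(message):
    print(packet)   # each packet carries a small number of the message bits plus a header field
```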
FIGURE 1.8 Example of a circuit switching (telephone) network.
FIGURE 1.9 The segmentation of a message into packets.
* This is normally referred to as time division multiplexing.
² For example, asynchronous transfer mode (ATM).
³ For example, multiprotocol label switching or Internet protocol (IP).
Packets are forwarded and switched independently through the nodes of a network, between
the source and the destination. Each packet transports enough information to allow its routing (end
destination address included in a header).
While the nodes of a circuit switching network establish a permanent shunt between one input
and one output, because packet switching considers a number of bits grouped into a packet, the
nodes of a packet switching network only switch data for the duration of a packet transmission. The
following packet that uses the same input or output of a node may belong to a different end-to-end
connection. This is depicted in Figure 1.10. Consequently, packet switching networks make much
better usage of the network resources (nodes) than circuit switching. Note that a node of a packet
switching network is typically a router.
Each node of the network is able to store packets, in case it is not possible to send them immediately because of temporary congestion. In this case, the time for message transmission is not guaranteed, but
this value is kept within reasonable limits, especially if quality of service (QoS) is offered. Packet
switching is of lower costs than circuit switching, and is ideal for data transmission, because it
allows a better management of the resources available (a statistical multiplexing is performed).
Moreover, with packet switching, we need not assign all of the available resources (i.e., bandwidth)
to a certain user who, for long periods, does not make use of them, the network resources being
shared among several users, as a function of the resources available and of the users' need. The
network resources are made available as a function of each user's need and as a function of the
instantaneous network traffic.
There are different packet switching protocols, such as ATM, IP, frame relay, and X.25. The IP
version 4 (IPv4) does not introduce the concept of QoS, because it does not include priority rules
to avoid delays or jitter (e.g., for voice). Moreover, it does not avoid loss of data for certain types of
services (e.g., for pure data communication), and it does not allow the assignment of higher bandwidth to certain services, relating to other (e.g., multimedia vs. voice). On the other hand, ATM and
IP version 6 (IPv6) have mechanisms to improve the QoS.
1.1.8 Connection Modes
Depending on the end-to-end service provided, the connection modes through networks can be of
two types: connectionless and connection oriented. These modes are used in any of the layers of a
network architecture, such as in the Open System Interconnection reference model, or in the transmission control protocol/IP (TCP/IP) stack.
1.1.8.1 Connection-Oriented Service
In order to provide a connection-oriented service, there is a need to previously establish a connection before data is exchanged, and to terminate it after data exchange. The connection is established
FIGURE 1.10 Switching of packets in different instants.
between entities, incorporating the negotiation of the QoS and cost parameters of the service being
provided. The communication is bidirectional, and the data is delivered with reliability. Moreover,
in order to prevent a faster transmitter from overloading a slower receiver, flow control is employed (to prevent overflow situations). An example of a connection-oriented service is the telephone network,
where a connection is previously established before voice exchange. In the telephone network, taking, as a reference, two words transmitted one after the other, we do not experience an inversion of
the correct sequence of these words (e.g., receiving the second word before the first one). The TCP
of the TCP/IP stack is an example of a connection-oriented protocol.
A connection-oriented service is always confirmed,* as the transmitter has information about
whether or not the data reached the receiver free of errors, correcting the situation in case of errors.
This can be performed using positive conrmation, such as the positive acknowledgment with
retransmission (PAR) procedure, or using negative conrmation, such as the negative acknowledgment (NAK).
In the PAR case, when the transmitter sends a block of data, it initiates a timer and expects the correct reception of an acknowledgment (ACK) message from the receiver within a
certain time frame. In case the ACK message is not received in time, the transmitter assumes that
the message was received corrupted and performs the retransmission of the block of data. In case
the ACK message is received, the transmitter proceeds with the remaining transmission of data. The
advantage of this procedure is that the ACK message sent by the receiver to the transmitter allows
two conrmations: (1) the data was properly received (error control) and (2) the receiver is ready to
receive more data (flow control).
In the case of the NAK, the receiver only sends a message in case the data is received with errors;
otherwise, the receiver does not send any feedback to the transmitter. The advantage is the lower
amount of data exchanged. The disadvantage is that, whereas in the PAR case flow control is performed together with error control, in the NAK situation only error control is performed.
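The following sketch illustrates the PAR idea as a stop-and-wait loop over a toy lossy channel; the loss probability, retry limit, and frame names are invented for illustration and do not model any specific protocol.

```python
import random

TIMEOUT = object()  # stands in for the transmitter's timer expiring without an ACK

def unreliable_channel(frame, loss_probability=0.3):
    """Toy channel: the ACK fails to arrive (timeout) with the given probability."""
    return TIMEOUT if random.random() < loss_probability else "ACK"

def send_with_par(frames, max_retries=10):
    """Stop-and-wait PAR: retransmit each block of data until its ACK arrives in time."""
    for frame in frames:
        for attempt in range(1, max_retries + 1):
            if unreliable_channel(frame) == "ACK":
                print(f"{frame} acknowledged after {attempt} attempt(s)")
                break
        else:
            raise RuntimeError(f"{frame} was never acknowledged")

send_with_par(["block-1", "block-2", "block-3"])
```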
The reader should refer to Chapter 2 for a detailed description of the service primitives used in
connection-oriented services.
1.1.8.2 Connectionless Service
The connectionless mode does not perform the previous establishment of the connection, before
data is exchanged. Therefore, data is directly sent, without prior connection establishment.
As the connection-oriented mode requires a handshaking between the transmitter and the receiver,² this introduces delays in signals. Consequently, for services that are delay sensitive,
the connectionless mode is normally employed. The connectionless mode is also utilized in
scenarios where the experienced error probability is reduced (such as in the transmission of bits
in an optical fiber).
Depending on whether the service is conrmed or not, data reliability may or may not be assured.
Even if data reliability is not assured, such functionality can be provided by an upper layer
of a multilayer network architecture. In such a scenario, there is no need to execute the same functionalities twice.
The connectionless mode can provide two different types of services:
· Confirmed service
· Nonconfirmed service
In the case of the nonconfirmed service, the transmitter does not have any feedback about whether
or not the data reached the receiver free of errors. Contrarily, in the case of the confirmed service,
although a connection establishment is not required before the data is exchanged (as in the case of
* On the other hand, the connectionless service can be confirmed or nonconfirmed.
² For example, implementing data retransmission, in order to assure data reliability.
the connection-oriented service), the transmitter has feedback from the receiver about whether or
not the data reached the receiver free of errors. The reader should refer to the description of the confirmation methods used in confirmed services presented for the connection-oriented service, namely the PAR and NAK.
As an example, Internet telephony (IP telephony) is normally supported by the nonconfirmed service, specifically, by the user datagram protocol (UDP), which is connectionless. However, in IP telephony, the reordering of packets is performed by the application layer.* Another example of a nonconfirmed connectionless mode is the IPv4 protocol, which does not provide reliability to the delivered datagrams and which does not require the previous establishment of the connection before data
is sent. In case such reliability is required, the TCP is utilized as an upper layer (instead of the UDP).
The serial line IP is an example of a data link layer protocol that is nonconfirmed and connectionless.
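As a minimal illustration of the difference, the sketch below sends a single datagram with Python's standard socket module; the destination address and payload are hypothetical. No connection is established and no delivery confirmation is obtained, which is precisely the nonconfirmed, connectionless behavior described above.

```python
import socket

# Hypothetical destination, used only for illustration.
DESTINATION = ("192.0.2.10", 5004)

# SOCK_DGRAM gives a UDP socket: no connection establishment, no delivery confirmation.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp_socket:
    udp_socket.sendto(b"voice-frame-0001", DESTINATION)

# A TCP sender would instead call connect() first (connection establishment) and then
# send(), obtaining reliable, ordered delivery at the cost of additional delay.
```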
The reader should refer to Chapter 2 for a detailed description of the service primitives used in
confirmed and nonconfirmed connectionless services.
1.1.9 Network Coverage Areas
Packet switching networks may also be classified as a function of the coverage area. Three important areas of coverage exist: LANs, MANs, and WANs.
A LAN consists of a network that covers a reduced area such as a home, office, or small group of
buildings (e.g., an airport), using high-speed data rates. A MAN consists of a backbone (transport
network) used to interconnect different LANs within a coverage area of a city, a campus, or similar.
This backbone is typically implemented using high-speed data rates. Finally, a WAN consists of a
transport network (backbone) used to interconnect different LANs and MANs, whose area of coverage typically goes beyond 100 km. While the transmission medium used in a LAN is normally the twisted pair, optical fiber, or wireless, the optical fiber is among the most used transmission mediums in a MAN and WAN.
1.1.10 Network Topologies
A network topology is the arrangement of the devices within a network. The topology concept is
applicable to a LAN, a MAN, or a WAN. In the case of a LAN, such a topology refers to the way
hosts and servers are linked together, while in the MAN and WAN cases, this refers to the way nodes
(routers) are linked together. For the sake of simplicity, this description refers to hosts and servers
(in the case of LAN) and nodes (in the case of MAN and WAN) just as hosts.²
A bus topology is the topology where all hosts are connected to a common and shared transmission
medium. This topology is depicted in Figure 1.11. In this case, the signals are transmitted to all hosts
FIGURE 1.11 Bus topology.
* These functions are carried out by the real-time protocol (RTP).
² In fact, both workstations and routers are hosts.
and, because the hosts' network interface cards (NICs) are permanently listening to the transmitted data, they detect whether or not they are the destination of the data. In case the response is positive, the NIC passes the data to the host; otherwise, the data is discarded [Monica 1998]. This topology presents the advantage that, even if a host fails, the rest of the network keeps running without problems. The main disadvantage of this topology is the high load imposed on the whole network (including all network hosts) that results from the fact that all data is sent to all network hosts.
In a ring topology, the cabling is common to all the hosts, but the hosts are connected in series. This topology is depicted in Figure 1.12. Each host acts as a repeater: each host retransmits, in one termination, the data received in the other termination. The main disadvantage of this topology is that if a host fails, the rest of the network is placed out of order. This topology is normally utilized in
SDH networks (MAN and WAN), where double rings are normally utilized to improve redundancy.
The token ring technology used in LAN is also based on the ring topology.
A star topology includes a central node connected to all other hosts. The central node repeats or
switches the data from one host into one or more of the other hosts. Because all data flows through
this node, this represents a single point of failure. This topology is depicted in Figure 1.13.
A tree topology is a variation of the star topology. In fact, the tree topology consists of a star
topology with several hierarchies. This topology is depicted in Figure 1.14. The central node is
responsible for repeating or switching the data to the hosts within each hierarchy. In case the destination of the data received by a certain central node refers to another hierarchy, such central node
forwards the data to the corresponding hierarchy central node, which is then responsible for forwarding the data to the destination host.
Finally, in a mesh topology each host is connected to all* or part² of the other hosts in the network. This topology is depicted in Figure 1.15. The advantage of such a configuration is the existence of many alternative pathways for data transmission. Even if one or more paths are interrupted or overloaded, the remaining redundancies represent alternative paths for data transmission. The drawback of such a topology is the large amount of cabling necessary to implement it.
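A simple count makes the cabling argument concrete, as shown below; it assumes the central node of the star is not counted among the hosts, and it only counts point-to-point links.

```python
def links_required(hosts: int) -> dict:
    """Number of links needed by each physical topology for a given number of hosts."""
    return {
        "bus": 1,                               # one common shared medium
        "ring": hosts,                          # each host connects to its neighbor
        "star": hosts,                          # one link per host to the central node
        "full mesh": hosts * (hosts - 1) // 2,  # every host connected to every other host
    }

print(links_required(8))  # {'bus': 1, 'ring': 8, 'star': 8, 'full mesh': 28}
```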
FIGURE 1.12 Ring topology.
* Complete mesh topology.
² Incomplete mesh topology.
FIGURE 1.13 Star topology.
FIGURE 1.14 Tree topology.
It is worth noting that there are two different types of topologies: physical topology and logical
topology. The physical topology refers to the real cabling distribution along the network, while
the logical topology refers to the way the data is exchanged in the network. A physical star topology with a repeater (a hub)* as a central node presents a medium common and shared by all network hosts. In such a case, the logical topology is the bus topology (common and shared medium).
* A hub/repeater repeats in all other outputs the bits received in one input. In addition, it acts as a regenerator.
FIGURE 1.15 Mesh topology.
On the other hand, a physical star topology with a switch* as a central node corresponds, as well,
to a logical star topology. Moreover, a logical ring topology corresponds to a physical star topology with a central node that rigidly switches the data to the adjacent host (left or right).
1.1.11 Classification of Media and Traffic
Different media can be split into three groups [Khanvilkar et al. 2005]:
· Text: Plaintext, hypertext, ciphered text, and so on.
· Visuals: Images, cartography, videos, videoteleconference (VTC), graphs, and so on.
· Sounds: Music, speech, other sounds, and so on.
While text is inherently digital data (mostly represented using a string of 7-bit ASCII characters), visuals and sounds are typically analog signals, which need to be digitized first, in order to allow their transmission through a digital network, such as an IP-based network (e.g., the Internet or an intranet). As can be seen from Figure 1.16, multimedia is simply the mixture of different types of media, such as speech, music, images, text, graphs, and videos.
FIGURE 1.16 Basic types of media.
* A switch only switches data to the output where the destination host is located. This is performed based on the address of the destination.
FIGURE 1.17 Classification of traffic.
When media sources are being exchanged through a network, they are generically referred to as traffic. As depicted in Figure 1.17, traffic can be considered as real time (RT) or non-real time (NRT). While RT traffic is delay sensitive, NRT traffic is not. An example of RT traffic is telephony or VTC, whereas a file transfer or web browsing can be viewed as NRT traffic.
RT traffic can also be classified as continuous or discrete. Continuous RT traffic consists of a stream of elementary messages with interdependency. An example of continuous RT traffic is telephony, whereas chat is an example of discrete RT traffic.
Finally, RT continuous traffic can still be classified as delay tolerant or delay intolerant. RT continuous delay-tolerant traffic can accommodate a certain level of delay in signals, without sudden performance degradation. Such tolerance to delays results from the use of a buffer that stores in memory the difference between the received data and the played data. In case the transfer of data is suddenly delayed, the buffer accommodates such delay, and the media presented to the user does not reflect the delay introduced by the network. Video streaming is an example of a delay-tolerant medium. Contrarily, the performance of delay-intolerant traffic degrades heavily when the data transfer is subject to delays (or variation of delays). An example of RT continuous and delay-intolerant media is telephony or VTC. IP telephony or VTC allows a typical maximum delay of 200 ms, in order to achieve an acceptable performance.
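The playout buffer mentioned above can be sketched as follows; the playout delay, frame interval, and timestamps are illustrative values only (the 200 ms figure cited in the text would simply correspond to a larger playout delay).

```python
class PlayoutBuffer:
    """Minimal jitter (playout) buffer sketch: packets are played at a fixed cadence,
    offset by an initial playout delay that absorbs variations in network delay."""

    def __init__(self, playout_delay_ms=60, frame_interval_ms=20):
        self.playout_delay_ms = playout_delay_ms
        self.frame_interval_ms = frame_interval_ms
        self.base_arrival_ms = None
        self.buffered = {}

    def on_packet(self, seq, arrival_ms, payload):
        if self.base_arrival_ms is None:
            self.base_arrival_ms = arrival_ms   # playout clock starts with the first packet
        self.buffered[seq] = payload

    def due(self, seq, now_ms):
        """A packet is played at: base arrival + playout delay + seq * frame interval."""
        playout_ms = self.base_arrival_ms + self.playout_delay_ms + seq * self.frame_interval_ms
        return now_ms >= playout_ms and seq in self.buffered

buf = PlayoutBuffer()
buf.on_packet(seq=0, arrival_ms=0, payload="frame-0")
buf.on_packet(seq=1, arrival_ms=45, payload="frame-1")  # arrived 25 ms later than its cadence
print(buf.due(seq=1, now_ms=80))  # True: the 60 ms playout delay absorbed the jitter
```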
1.2 PRESENT AND FUTURE OF TELECOMMUNICATIONS
Current and emergent communication systems tend to be IP based and are meant to provide acceptable QoS in terms of speed, BER, end-to-end packet loss, jitter, and delays for different types of traffic.
Many technological achievements have been made in the last few years in the area of communications and others are planned for the future to allow the new and emergent services. However,
whereas in the past new technologies pushed new services, nowadays the reality is the opposite: end
users want services to be employed on a day-by-day basis, whatever the technology that supports them. Users want to browse the Internet, get e-mail access or use chat, and establish a VTC, regardless of the technology used (e.g., fixed or mobile communications). Thus, services must be delivered following the concept of anywhere and at any time. Figure 1.18 presents the bandwidth requirements
for different services.
FIGURE 1.18 Bandwidth requirements of the different services.
1.2.1 Convergence
The main objective of the telecommunications industry is to create conditions to make the convergence a reality [Raj et al. 2010]. The convergence of telecommunications can be viewed in different ways. It can be viewed as the convergence of services, that is, the creation of a network able to
support different types of service, such as voice, data (e-mail, web browsing, database access, file transfer, etc.), and multimedia, in an almost transparent way to the user [Raj et al. 2010].
The convergence can also be viewed as the complement between telecommunications, information systems, and multimedia, in a way to achieve a unique objective: to make the information available to the user with reliability, speed, efficiency, and at a low price. According to Gilder's law, the speed of telecommunications will increase three times every year in the next 20 years, and according to Moore's law, the speed of microprocessors will double every 18 months.
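Purely as an illustration of how these growth rates compound (not as a forecast), the factors implied over a few years can be computed directly:

```python
# Compounding of the growth laws cited in the text, for illustration only
years = 5
gilder_factor = 3 ** years               # telecommunications speed tripling every year
moore_factor = 2 ** (years * 12 / 18)    # processor speed doubling every 18 months
print(f"After {years} years: telecom speed x{gilder_factor}, processor speed x{moore_factor:.1f}")
```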
The convergence can be viewed as the integration of different networks in a single one, in a
transparent way to the user. It can also be viewed as the convergence between fixed and mobile concepts [Raj et al. 2010], as the mobile is covering indoor environments (e.g., femtocells* of long-term evolution [LTE]), allowing data and television/multimedia services traditionally provided by fixed services, whereas fixed telecommunications are providing mobility with cordless systems, whose example is the Digital European Cordless Telephone standard. There are terminals that are able to
* A femtocell is a cellular base station for use at home or in offices that creates an indoor cell, in locations where cellular coverage is deficient or inexistent, or to provide high-speed data services. It typically interconnects with the service provider via broadband xDSL or cable modem.
operate as cellular phones or as fixed network terminals. New televisions not only receive the TV
broadcast but also allow browsing over the Internet.
The convergence is viewed by many people as the convergence of all the convergences, which
will lead to a deeply different society, whose results can already be observed nowadays with the use
of the following services:
· Telework
· Telemedicine
· Web-TV
· E-Banking
· E-Business
· Remote control over houses, cars, ofces, machines, and so on
· VTC
Human lives, organizations, and companies will tend to increase their efficiency with the new communication means, with the increase of the available information, as well as with multicontact. With technological evolutions (increase of user data rates, improved spectral efficiency, better performances [lower BER], increase of network capacity, and decrease of latency [RT communications]), and with the massification of telecommunications as a result of lower prices (as a result of technological evolution and the increase of competition), it is expected that virtual reality and 3D holographics will be a reality in the near future.
1.2.2 Collaborative Age of the Network Applications
While the convergence approach was based on the ability to allow information sharing using a common network infrastructure, the new approach consists of the use of the network as an enabler to allow
sharing of knowledge. It consists of the ability to provide the right information to the right person at
the right time. For this to be possible, a high level of interactivity made available to each Internet user
is required. In parallel, business intelligence is an important platform that allows decision makers to
receive the filtered information* required for the decision to be made at the correct moment. The concept of Internet of Things enables knowledge by making available a large amount of data captured
by multiple machines and sensors, and by enabling machine-to-machine communications. Moreover,
to enable the sharing of knowledge, there is a need to complement the Internet of Things with the
processes and applications. This is required to process the data captured by sensors and machines.
We observe, nowadays, an explosion of ad hoc applications that allow any Internet user to inject
nonstructured information (e.g., Wikipedia) into the Internet world, in parallel with an increase
of mobile-cloud and peer-to-peer applications such as Torrent, eMule, and IP telephony. Social networks are currently used by millions of people, allowing the exchange of unmanaged multimedia by groups of people just to share information or by groups interested in the same subject. Note that this multimedia exchange can be text, audio, video, multiplayer games, and so on.
This can only be possible with the ability of the IP to support all types of services in parallel with
the provision of QoS by the network, that is, with the convergence as a support platform. This is the
new paradigm of the modern society: the collaborative age. The collaborative age of the Internet
can also be viewed as the transformation of man-to-man communication into man-to-machine and
machine-to-machine communication, using several media, and where the source or destination
party can be a group instead of a single entity (person or equipment).
Figure 1.19 shows the evolution of the network usage. Initially, this was viewed merely for the data
applications. Afterward, as referred to in Section 1.2.1, convergence was an important issue to allow
* For example, key performance indicators.
² This also presents a relationship with big data analysis.
FIGURE 1.19 Evolution of network applications: from data to collaborative tool.
a better usage of the network. An increase in the level of Internet users' interactivity made the Internet world a space for deep collaboration between entities, but with a higher level of danger as well.
1.2.3 Transition toward the Collaborative Age
To meet the demands of modern society, in terms of both convergence and collaborative services, several problems need to be solved by the scientific and industrial community. From Section 1.2.2,
we may conclude that the convergence can be viewed as an important requirement to support the
collaborative services.
Although we observe an enormous demand for convergence, we see that there are still problems
that need to be solved. An example is the universal mobile telecommunication system (UMTS), which
still treats voice and data in different ways, as data is IP based whereas voice is still circuit switching
based. The LTE is the cellular standard that deals with this issue and makes the all-over-IP a reality.
From the point of view of services, the total digitalization of several information sources and the use of efficient encoding and data compression algorithms are very important. The information sources can be voice, fax, images, music, videoconference, e-mail, web browsing, positioning systems, high-definition television, and pure data transmission (database access, file transfer, etc.). Different services need different transmission rates, different margins of latency and jitter, different performances, or even fixed or variable transmission rates. Several MPEG protocols for voice or video, those already existent and those that are still in the research and development phase, intend to perform an adaptation of several information sources to the transmission media, allowing a reduction of the number of encoded bits to be transmitted.
Different services present different QoS requirements, namely:
· Voice communications are delay sensitive, but have low sensitivity to loss of data, and require a low but approximately constant data rate.
· Interactive multimedia communications (e.g., web browsing) are sensitive to loss of data, requiring considerable data rate, with a variable transmission rate, and are moderately delay sensitive.
· Pure data communications (e.g., database access and file transfer) are highly sensitive to loss of data, requiring a relatively variable data rate, without sensitivity to delay.
Jitter is defined as the delay variation through the network. Depending on the application, jitter can be a problem, or jitter issues can be disregarded. For instance, data applications that only deliver their information to the user once the data is completely received (reassembling of data) pay no attention to jitter issues (e.g., file transfer). This is totally different if voice and video applications are considered; those applications degrade immediately if jitter occurs.
The transmission of data services (e.g., pure data communications and web browsing) through most of the reliable mediums (e.g., optical fiber and twisted pair) usually considers error detection algorithms jointly with automatic repeat request* (ARQ), instead of error correction (e.g., block coding or forward error correction). This happens because these services present very rigid requirements in terms of BER, whereas they are not very demanding in terms of delay sensitivity (in this case, stopping the transmission and requesting repetitions is not crucial). Note that the utilization of error correction requires more redundant bits per frame than error detection (amount of additional data beyond the pure information data). This can be seen from Figure 1.20.
Nevertheless, the transmission of data services through a nonreliable medium (e.g., wireless) is normally carried out using error correction, as the number of repetitions would be tremendous, creating much more overhead (and corresponding reduction of performance due to successive repetitions) than the overhead necessary to encode the information data with error correction techniques. A similar principle is applied to services that are delay sensitive (voice), where, to reduce latency, error correction is normally a better choice, instead of error detection.
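As a minimal illustration of the overhead difference, a single even-parity bit is enough for a receiver to detect (but not correct) an odd number of corrupted bits and then request a repetition via ARQ; correction codes need more redundant bits per frame, as Figure 1.20 suggests. The parity example below is generic and not tied to any specific protocol.

```python
def add_even_parity(data_bits: str) -> str:
    """Error detection with a single redundant bit: append an even-parity bit."""
    return data_bits + str(data_bits.count("1") % 2)

def parity_check_ok(frame: str) -> bool:
    """The receiver only learns *whether* an odd number of bits was corrupted."""
    return frame.count("1") % 2 == 0

frame = add_even_parity("1101001")
print(parity_check_ok(frame))      # True: frame accepted

corrupted = frame[:2] + ("0" if frame[2] == "1" else "1") + frame[3:]  # flip one bit
print(parity_check_ok(corrupted))  # False: the receiver would request a repetition (ARQ)
```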
These are the notions that introduce the QoS concept, implying that each service will impose
certain requirements. For the convergence to become a reality, the network should be able to take
all these requirements into account.
Taking into account all the previously described factors, one that presents a great contribution
to support the new collaborative services is the maximum transmission rate, as it is associated with
the user data rate. The factors that limit the use of higher transmission rates are several sources of
interference and noise. The effects of noise can be minimized through the use of regenerators, as
FIGURE 1.20 Types of error control: error detection and error correction, and their differences in terms of the amount of redundant bits (N > M).
* ARQ works associated with error detection. The transmitter sends groups of bits (known as frames), which are subject to an encoding in the transmitter. The decoding process performed in the receiver allows this station to gain knowledge about whether or not there was an error in the propagation of the frame. In the case of error, the receiver requests a repetition of the frame from the transmitter.
Packet
Packet
M Redundant bits
N Redundant bits
19Introduction to Data Communications and Networking
T
S
well as advanced detection algorithms (e.g., matched lters). Interferences tend to increase with the
increase in the used bandwidth (which corresponds to an increase of transmission rates), this being
the main limitation of the use of higher data rates.
The challenge facing today's telecommunications industry is how to continually improve the end-user experience, offering appealing services through a delivery mechanism that provides improved speed, service attractiveness, and service interaction. In order to deliver the required services to the users at minimum cost, the technology should allow ever better performance, higher throughputs, improved capacities, and higher spectral efficiencies.
What can be done in order to increase the throughput of a wireless communication system? One can choose a shorter symbol duration T_S. This, however, implies that a larger fraction of the frequency spectrum will be occupied, because the bandwidth required by a system is determined by the baud rate 1/T_S. Wireless channels are normally characterized by multipath propagation caused by reflections, scattering, and diffraction in the environment. The shorter symbol duration might therefore cause an increased degree of intersymbol interference (ISI) and thus performance loss. As an alternative to the shorter symbol duration, one may choose a multicarrier approach, multiplexing data into multiple narrow subbands, as adopted by orthogonal frequency division multiplexing (OFDM) [Marques da Silva et al. 2010]. The OFDM technique has been selected for LTE, as opposed to wideband code division multiple access, the air interface technique selected by the European Telecommunications Standards Institute for UMTS. Thus, the problem of ISI can be mitigated. Still, the requirement for increased bandwidth remains, which is crucial given that the frequency spectrum has become a valuable resource. This imposes the need to find schemes able to reach improved spectral efficiencies, such as higher order modulation schemes, the use of multiple antennas at the transmitter and the receiver (multiple input multiple output systems), more efficient error control, and so on [Marques da Silva et al. 2010].
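A short numerical illustration of this trade-off (the figures below are assumed for the example, not taken from the text): halving the symbol duration T_S doubles the baud rate 1/T_S, and hence the occupied bandwidth, while making the symbol shorter relative to the channel delay spread, which aggravates ISI.

# Illustrative numbers only: relationship between symbol duration, baud rate,
# and the ratio of channel delay spread to symbol duration (a rough ISI indicator).

delay_spread = 2e-6                                # assumed multipath delay spread (2 us)

for T_s in (10e-6, 5e-6, 1e-6):                    # progressively shorter symbols
    baud = 1.0 / T_s                               # symbols per second (~required bandwidth)
    isi_ratio = delay_spread / T_s                 # larger ratio -> more intersymbol interference
    print(f"T_S = {T_s*1e6:4.1f} us | baud rate = {baud/1e6:.2f} Msymbol/s | "
          f"delay spread / T_S = {isi_ratio:.2f}")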
CHAPTER SUMMARY
This chapter provided an introduction to multimedia communications and networking, including the study of the most important fundamentals of communications and future trends.
It was described that digital signals allow regeneration, multiplexing, and error control, functionalities that are not possible when analog signals are employed. Nevertheless, it was seen that digital signals tend to require a higher bandwidth than their analog counterparts.
It was also seen that the modem is employed when the transmission medium is analog, whereas the digital encoder (also referred to as the line encoder) is employed with digital transmission mediums. Moreover, the modem sends carrier-modulated signals, that is, signals modulated around a certain carrier, whereas the digital encoder sends baseband signals, that is, signals modulated around a null frequency.
It was shown that transmission mediums can be cable or wireless. In the latter case, the difference between guided and unguided wireless transmission mediums was described. Among the cable transmission mediums, the optical fiber is the most resistant to interference and supports the highest bandwidth. Moreover, single mode optical fibers support higher bandwidths than multimode optical fibers.
We have seen that synchronous communications allow higher data rates than asynchronous communication systems. Synchronous communications extract the synchronism reference from the received signal, or use an additional transmission pair or channel. Contrarily, asynchronous communication systems need to periodically use start and stop bits to allow the receiver to determine the bit transition instants.
It was shown that simplex communications send signals only in a single direction, whereas
duplex communications allow bidirectional communications. In the case of full duplex, two channels are required to allow simultaneous bidirectional communications.
It was described that a network is composed of a concatenation of point-to-point links, which
can be of different types. In this case, intermediate nodes are responsible for linking the required
sequence of point-to-point links.
We have also seen that circuit switching uses all the assigned resources during the connection, whereas packet switching allows a more efficient use of the network resources. In the case of packet switching, two modes exist: connection oriented and connectionless. The connection-oriented mode provides flow control and error control, which allows the service to be confirmed. It was described that the connectionless mode can provide a confirmed service or a nonconfirmed service.
The difference between a LAN, a MAN, and a WAN was described. The LAN is used within an office or a house. A MAN typically covers a city or a university campus, being used to interconnect different LANs. Finally, a WAN corresponds to a network that typically covers a wide territory, such as a country, being also employed to interconnect different LANs.
The logical topology corresponds to the way data is interchanged, whereas the physical topology corresponds to the way network devices are physically interconnected. Bus, star, ring, tree, and mesh are examples of topologies that can be employed.
It was described that traffic consists of the exchange of media sources through a network or through a communication system. Moreover, media can be text, visual, or sound. In addition, traffic can be RT or NRT, and discrete or continuous. In the case of RT and continuous traffic, it can be delay tolerant or delay intolerant.
REVIEW QUESTIONS
1. What are the advantages of using digital communications relative to analog communications? What are the disadvantages?
2. What are the reasons that may imply the use of a modem?
3. What is the difference between simplex, half-duplex, and full-duplex communication?
4. What is the physical topology used to implement a logical bus? In such a case, what is the
central node?
5. What is the difference between unicast, multicast, and broadcast communication?
6. What is the difference between an analog and a digital signal?
7. What is the difference between a LAN, MAN, and WAN?
8. What is the difference between a connectionless and connection-oriented service?
9. What is the difference between a point-to-point communication and a network?
10. What is the difference between a circuit switching and a packet switching network? Give
examples of networks based on these two switching types.
11. What are the most important QoS requirements?
12. What is the convergence of telecommunications?
13. What is the difference between physical topology and logical topology?
14. Which types of media do you know?
15. How can the different types of trafc be grouped?
16. What is the collaborative age of telecommunications?
LAB EXERCISES
1. Using the Emona Telecoms Trainer 101 laboratory equipment, and volume 1 of its laboratory manual, perform experiment 1: Setting up an oscilloscope.
2. Using the Emona Telecoms Trainer 101 laboratory equipment, and volume 1 of its laboratory manual, perform experiment 2: An introduction to Telecoms Trainer 101.
2 Network Protocol Architectures
LEARNING OBJECTIVES
· Describe the network protocol architecture concept.
· Describe the Open System Interconnection Reference Model (OSI-RM).
· Describe the transmission control protocol/Internet protocol (TCP/IP).
· Describe the functions of each layer of the OSI-RM and TCP/IP.
2.1 INTRODUCTION TO THE NETWORK ARCHITECTURE CONCEPT
The problem of interconnecting terminals in a network is a complex task. Trying to solve all the problems without segmenting the functions into groups leads to an equation with a very difficult solution. Therefore, the traditional solution is to group functionalities and allocate each group to a different layer. This is called the network protocol architecture, also commonly known as the network architecture. This approach only defines what is to be done by each layer, but not how such functionalities are to be implemented; that is the responsibility of the protocol of each individual layer. This approach leaves room for a layer to improve (due to, e.g., technological evolutions) without implications for the remaining layers, as long as the interface between a certain layer and its adjacent layers is kept as specified by the network protocol architecture. In this sense, the network architecture defines the number of layers, what is to be done by each layer, and the interface between different layers. Note that a network architecture not based on layers would not allow changing how the functions are carried out without changing the architecture itself and the remaining functions of the network architecture.
There are different network architectures. The International Organization for Standardization (ISO) created the widely known OSI-RM, as depicted in Figure 2.1. Layers can also be identified by numbers, starting from the lowest (physical layer, layer 1) up to the upper layer (application layer, layer 7).
FIGURE 2.1 Layers of the OSI-RM: application (layer 7), presentation (layer 6), session (layer 5), transport (layer 4), network (layer 3), data link (layer 2), and physical (layer 1).
This seven-layer architecture model defines and describes a group of concepts applicable to communication between real systems composed of hardware, physical processes, application processes, and human users [Stallings 2010].
This architecture can be split into two groups: the four lower layers (from the physical up to the transport layer), which are responsible for assuring a reliable communication of data between terminal equipment; and the three upper layers (from the session up to the application layer), with a higher level of logical abstraction, which interface with the user application. Note that the OSI-RM is only a reference model, and implemented systems use more or fewer parts of this model. Therefore, we may view the TCP/IP stack, in use in the Internet world, as the most widely used real implementation close to the OSI-RM.
To better understand this network architecture concept, let us consider Peter in Boston, who writes a letter to Christine in Bristol. The letter is written using a specific protocol, starting with Dear Christine and ending with Warm regards and a signature. The letter is inserted into an envelope, on
which the destination address (Christine's address) is written in a specific location. The envelope is transported and delivered to the post office in Boston, where a stamp is added in a specific location. This post office will send the letter through the post office network, which may use several means of transportation (van, bus, airplane, train, etc.) to allow the delivery of the letter to the post office in Bristol. In Bristol, the letter will be distributed between zones based on the destination address, and delivered to the postman, who will deliver the letter to Christine's postbox.
We may view the transportation of the letter as a process composed of several layers. An upper layer (the application layer) corresponds to the communication between Peter and Christine using a specific protocol (the letter starts with Dear Christine and ends with Warm regards and a signature). This protocol specifies what is to be written in the letter and where, and it only refers to the agents of this layer (Peter and Christine), not to any of the intermediate agents (e.g., post office, plane, and postman). This is control data, that is, overhead. Moreover, this communication follows a protocol that consists of a set of procedures to be followed between the two entities.
Although the communication between them is supported by the lower layers (envelopes, post, airplane, etc.), one may say that there is a virtual circuit between Peter and Christine. In Figure 2.2, the application layers of the source and destination are linked by a dashed line, representing a virtual circuit. As in the case of the circuit between Peter and Christine, there is no direct connection between them. The lower layer (presentation) is used as a service (which is itself supported by another lower layer) to allow the data to arrive at the destination.
In the second stage, the letter was inserted into an envelope, and the destination address was written in the proper location. The second stage also has its own protocol, which includes the added overhead (destination address) to be read by the post office network (lower layer), essential to allow the letter to be forwarded to the destination. Once the letter reaches the post office in Boston, a stamp is added to the envelope in a specific location. This is a procedure that is recognized by the worldwide post office protocol, essential to allow the letter to be forwarded from Boston, United States, to the destination address in Bristol, United Kingdom. An employee at the post office in Boston will take the letter (together with many others) to the airport using a van as a means of transportation. In Bristol, another employee will collect the letter from the airport and transport it to the post office by van.
FIGURE 2.2 Communication of two workstations using the OSI-RM: the peer layers of the two protocol stacks are linked by virtual channels, and only the physical layer provides the physical (real) channel.
The address is composed of two parts: the country/city and the street name and number. The country/city pair is the information necessary to decide about the route to use in order to forward the letter from the source to the destination, through the worldwide post office network. This can be viewed as the network layer control data (overhead). A decision has to be made about whether to use a direct flight from Boston to Bristol (in case there is one) or to send the letter through London. In the latter case, London acts as an intermediate node (router). Such a node receives the letter, reads the destination address, and decides how to forward it to Bristol. This is the function of a router, which belongs to the network layer (layer 3). It receives the data from the physical layer (bits), and the data goes up the several layers to the network layer, removing the layer 1 and layer 2 control data. It reads the destination address (country/city) and decides about the best output interface to forward the data to the destination. Again, the router adds new layer 2 and layer 1 control data and sends it through the physical layer.
Returning to our example, the street name/number pair is the information necessary to decide how to forward the letter within the destination city (Bristol). Note that this additional control data is to be processed by a lower layer. In this case, one may say that this control data is to be processed by the data link layer (DLL; switch), which is responsible for forwarding the letter within the city (the point-to-point connection between the Bristol post office and Christine's house). A switch belongs to the DLL and is responsible for forwarding data within a LAN, whereas a router is responsible for forwarding data between different LANs. The switch receives the data from the physical layer (bits), removes the layer 1 control data, and reads the layer 2 control data. In the example, it reads the destination address (street name/number) and decides about the best output interface to forward the data to the destination. We may conclude that the country/city pair can be viewed as a LAN (layer 3 address), and the interconnection between different LANs (cities) is performed at layer 3 (routing). Similarly, the street name/number can be viewed as the physical address of the terminal (layer 2 address), and the interconnection within the city (i.e., between streets and numbers) is performed at layer 2 (switching).
From the example, we conclude that each layer of a network architecture performs different functionalities, each presenting its own protocol and a different overhead.
2.2 OPEN SYSTEM INTERCONNECTION—REFERENCE MODEL
Section 2.1 described that the network protocol architecture deals with the functionalities performed by each layer, as well as with the type of interfaces between different layers. It was seen that a specific layer provides services to the upper layer and makes use of the services provided by the lower layer. The definition of how these functionalities are carried out by each layer is not specified by the network architecture; it is specified by the protocol adopted by each layer.
The OSI-RM consists of an abstract network architecture model, implemented in part by different network protocol architectures. As will be described, the TCP/IP includes many of the concepts specified by the OSI-RM.
As can be seen from Figure 2.3, the message format of each layer is referred to as the protocol data unit (PDU), preceded by a letter corresponding to the layer. The message format of the application layer is the application PDU (APDU). The message format of the presentation layer is the presentation PDU (PPDU). The message format of the session layer is the session PDU (SPDU). The message format of the transport layer is the transport PDU (TPDU).* The message format of the network layer is the network PDU (NPDU), also known as the packet.² The message format of the DLL is the LPDU, also known as the frame.³ Finally, the message format of the physical layer is the bit. As can be seen from Figure 2.3, the nth layer service data unit (n-SDU) corresponds to the (n+1)-PDU received from the upper layer, which is encapsulated into the n-PDU. Moreover, the nth layer protocol control information (n-PCI) corresponds to the overhead generated in the nth layer, which may include fields such as source and destination addresses, redundant bits for error detection or correction, and acknowledgment numbers for error control and flow control.
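The encapsulation rule just described (the n-SDU is the (n+1)-PDU, and the n-PDU is the n-PCI plus the n-SDU) can be sketched in a few lines of Python; the layer names and the fake header bytes below are purely illustrative.

# Minimal sketch of PDU encapsulation: each layer prepends its own protocol
# control information (PCI) to the SDU it receives from the layer above.

def build_pdu(pci: bytes, sdu: bytes) -> bytes:
    """n-PDU = n-PCI + n-SDU, where the n-SDU is the (n+1)-PDU."""
    return pci + sdu

apdu = b"user data"                                    # application PDU
tpdu = build_pdu(b"[T-PCI]", apdu)                     # transport PDU (segment)
npdu = build_pdu(b"[N-PCI]", tpdu)                     # network PDU (packet)
lpdu = build_pdu(b"[L-PCI]", npdu)                     # DLL PDU (frame)
print(lpdu)                                            # b'[L-PCI][N-PCI][T-PCI]user data'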
FIGURE 2.3 Protocol data units of the different layers: the APDU, PPDU, SPDU, TPDU, NPDU (packet), LPDU (frame), and bits, where each layer's PDU is formed by adding that layer's PCI (overhead) to the SDU received from the layer above.
* In the TCP/IP, the TPDU is called a segment.
² Its main purpose is to allow the packet to be forwarded throughout the entire network (i.e., between LANs).
³ A frame is composed of a group of information bits to which control bits are added in order to allow error control and flow control to be performed.
2.2.1 Seven-Layer OSI-RM
Figure 2.4 depicts several layers of the OSI-RM, describing generically the functions provided by
each layer.
A brief description of each layer is provided in the following.
2.2.1.1 Physical Layer
The physical layer is layer one of the seven-layer OSI model of computer networking, also known as
the OSI-RM. This layer is responsible for the transmission of the data received from the upper layer
(DLL), in the form of bits, between adjacent nodes (point-to-point*). As shown in Figure 2.5, the link between adjacent nodes can be the link between station 1 and node A, or between node A and node B, and so on.
FIGURE 2.4 Generic description of the OSI-RM layers: the application layer provides generic services related to the application; the presentation layer shapes the data, defining the data format/representation; the session layer keeps the dialog and synchronization between communicating entities; the transport layer provides QoS and may assure a reliable end-to-end transfer of data; the network layer assures end-to-end connection and routing in the network; the data link layer allows a reliable data flow between adjacent devices (using the same medium); and the physical layer assures the transmission of bits between adjacent devices through the physical medium.
FIGURE 2.5 Example of a packet switching network: stations 1 through 4 are interconnected through nodes A through E; the upper layers are the responsibility of the end systems, whereas the lower layers are the responsibility of the network.
* In the sense of a network, point-to-point refers to the interconnection between two adjacent routers (nodes), between a host and an adjacent router, or between two adjacent hosts.
It is responsible for the representation of the bits to be transmitted through a transmission medium.* Such representation includes the type of digital encoding or modulation scheme to use (voltages, pulse duration, amplitude, frequency or phase modulation, etc.) [Marques da Silva et al. 2010]. This layer also specifies the type of interface between the equipment and the transmission medium, including the mechanical interfaces (e.g., the RJ45 connector).
Synchronization issues are also dealt with by this layer. This includes the ability of a receiver to synchronize with a transmitter (start and end of bit instants) before bits are transferred.
This layer aims to optimize the channel capacity as defined by Shannon² [Shannon 1948], making use of encoding techniques (or modulation schemes), multiple transmit and receive antennas, regenerators, equalizers, and so on. Although the physical layer may use error control, the provision of reliability to the exchanged data is normally a functionality provided by the DLL.
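For reference, the Shannon capacity mentioned above is C = B log2(1 + SNR); the bandwidth and signal-to-noise ratio used in the short computation below are assumed values chosen only for illustration (the capacity itself is defined in Chapter 3).

import math

# Shannon channel capacity C = B * log2(1 + SNR), the upper bound that the
# physical layer techniques listed above try to approach. Values are assumed.

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    return bandwidth_hz * math.log2(1 + snr_linear)

bandwidth = 20e6                      # 20 MHz channel (assumed)
snr_db = 20                           # 20 dB signal-to-noise ratio (assumed)
snr = 10 ** (snr_db / 10)
print(f"C = {shannon_capacity(bandwidth, snr) / 1e6:.1f} Mbit/s")   # about 133 Mbit/s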
2.2.1.2 Data Link Layer
This layer is responsible for providing reliability to the data exchanged by the physical layer. This reliability is provided by the use of error control and flow control. Note that the DLL (as well as the physical layer) focuses on the point-to-point exchange of data.³
The exchange of bits performed by the physical layer is subject to noise, interference, distortion, and so on. All of these channel impairments may originate corrupted bits, which degrade the performance. The DLL makes use of error control techniques to keep the errors at an acceptable level. Depending on the medium that is being used to exchange data, error control can be performed using either error detection or error correction techniques. In the case of error detection, codes such as the cyclic redundancy check (CRC) or parity bits are used to allow errors to be detected on the receiver side, and the receiver may request the retransmission of the frame. However, if the medium is highly subject to noise and interference (e.g., the wireless medium), the choice is normally the use of error correction. In the latter case, the level of overhead per frame is higher, but it avoids successive retransmissions, which also translates into a decrease of overhead. Note that, in both cases, the DLL handles blocks of bits to which the corresponding overhead is added (redundant bits to allow error detection or error correction, as well as this layer's address). These blocks of bits, with a specific format depending on the protocol of the DLL, are the previously mentioned LPDUs, commonly known as frames.
Figure 2.6 shows the decomposition of an LPDU, composed of an NPDU (the packet received from the upper layer on the transmitting side) plus this layer's overhead. The LPDU overhead comprises the start flag, the source and destination addresses, the redundant bits for error control, and the end flag. Flags are used to allow synchronization, that is, to allow the receiver to identify the beginning and the end of a frame. A flag is composed of a sequence of bits with a low probability of occurrence in the information part of a frame.
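A minimal sketch of this frame layout follows, with a start flag, an address field, the packet received from the network layer, redundant bits for error detection, and an end flag; the flag value, field sizes, and the use of CRC-32 are assumptions made for the example, not the format of any specific DLL protocol.

import zlib

FLAG = b"\x7e"   # assumed start/end flag byte (a value unlikely to appear in the payload)

def build_frame(address: bytes, packet: bytes) -> bytes:
    """LPDU = start flag + address + packet + redundant bits (CRC-32) + end flag."""
    crc = zlib.crc32(address + packet).to_bytes(4, "big")
    return FLAG + address + packet + crc + FLAG

def frame_ok(frame: bytes) -> bool:
    """Receiver-side error detection: recompute the CRC over address + packet."""
    body, received_crc = frame[1:-5], frame[-5:-1]
    return zlib.crc32(body) == int.from_bytes(received_crc, "big")

frame = build_frame(b"\x0a\x0b", b"packet from the network layer")
print(frame_ok(frame))     # True: the frame would be accepted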
In the example of Section 2.1, the letter follows different types of transportation corresponding to each point-to-point connection (DLL). Peter walked from home to the post office, then a van took the letter to the airport, a flight was taken from Boston airport to Bristol airport, another van went to the Bristol post office, and so on. Each different type of transportation has its own protocol. Similarly, the end-to-end path is composed of a concatenation of links (DLL). Each link may have a different
FIGURE 2.6 Frame decomposition: start flag, address, packet, redundant bits, and end flag.
* This transmission medium can be a twisted pair, coaxial cable, fiber optic cable, satellite link, wireless, and so on.
² The Shannon capacity is defined in Chapter 3.
³ For example, between adjacent routers, or between a host and a router.
transmission medium* and a different DLL protocol² running over it. Note that the DLL also includes the interchange of data between hosts within the same LAN. This interconnection of hosts within a LAN is achieved using a hub (a repeater of bits), a switch, or a bridge. The function of these devices is to allow the distribution of data within a LAN. These devices are defined in Chapter 12.
It is worth noting that another important functionality of the DLL is flow control. Normally, the transmitter can transmit faster than the receiver is able to receive. To avoid loss of bits, the receiver needs to send feedback (control data) to the transmitter about whether or not it is ready to receive more frames. This is achieved through flow control. The flow control protocol can be associated with the error control protocol as follows: when a receiver checks for the existence of errors (using, e.g., CRC or parity bits) and sends a feedback message to the transmitter informing it that it is ready to receive the following frame (meaning that the previously received frame was free of errors), the two protocols (error control and flow control) work together.
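This combined operation can be sketched as a toy stop-and-wait exchange: the receiver checks each frame (here with the same even-parity idea used earlier) and only signals that it is ready for the next frame when the current one is error free. The corrupting channel below is invented for the example; this is not a real DLL protocol.

# Toy stop-and-wait model of error control plus flow control: the transmitter
# waits for the receiver's "ready" feedback (an implicit ACK) before sending
# the next frame, and retransmits when the feedback reports an error.

def parity_ok(frame):
    return sum(frame) % 2 == 0

def corrupt_first_transmission():
    """Channel that flips a bit the first time it is used, then behaves."""
    state = {"first": True}
    def channel(frame):
        out = list(frame)
        if state["first"]:
            state["first"] = False
            out[0] ^= 1
        return out
    return channel

def send_all(frames, channel):
    transmissions = 0
    for frame in frames:
        while True:
            transmissions += 1
            if parity_ok(channel(frame)):   # receiver ready: next frame may be sent
                break                       # otherwise the same frame is retransmitted
    return transmissions

frames = [[1, 0, 1, 0], [1, 1, 0, 0]]       # even-parity frames
print(send_all(frames, corrupt_first_transmission()))   # 3: one retransmission needed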
2.2.1.3 Network Layer
This layer deals mainly with the routing of packets along the network, as well as with addressing issues. The network layer is the first one (from the bottom) that takes care of end-to-end issues.
As shown in Figure 2.5, an end-to-end connection is the connection between station 1 and station 3, or between station 2 and station 4, and so on.
Let us focus again on the example of Section 2.1. The letter had to be sent from Boston to Bristol. The post office in Boston had to decide about the best way to make the letter arrive in Bristol. It could be via a direct flight (in case there is one), through London, and so on. This is the decision that has to be made by the network layer: allow the NPDU to reach the destination through the best path. In case there is no direct flight, an intermediate router in London would have to read the destination address (country/city pair) and decide about the next hop to reach Bristol. Therefore, the network layer is responsible for the end-to-end routing of the NPDU in the network.
There are two different basic modes of routing:
· Datagram³
· Virtual circuit§
In the datagram mode, each NPDU carries the destination address, and each node (router) has to decide about the best way to forward the NPDU in order to reach the destination. On the other hand, in the virtual circuit mode, each NPDU carries information only about the virtual circuit to which it belongs. Several channels flowing in the network may belong to the same virtual circuit, and the node (router) only has to know the output interface corresponding to a certain virtual circuit.
The virtual circuits are established in advance, before the data is exchanged. In this case, all the NPDUs of a certain connection follow the same predefined path. Contrarily, in the datagram mode, each node decides the following path, and different NPDUs of the same connection may follow different paths.
In the virtual circuit mode, the routing tends to be faster, as the amount of decision making that has to be performed by the routers is lower. Different packets with different destination addresses may belong to the same virtual circuit in a specific part of the path. Looking at the network depicted in Figure 2.5, let us consider that station 1 needs to send data to station 3. One possibility could be sending packets through node A, node B, and node E. The NPDU has an identifier that identifies the virtual circuit, and the node only needs to read this identifier to find the output interface to use, without having to know the final destination address of the packet. Note that the router does not make any decision. Routers
* For example, wireless, twisted pair, satellite, and optical fibers.
² For example, IEEE 802.11, IEEE 802.3, and the point-to-point protocol (PPP).
³ For example, IP.
§ For example, X.25 or MPLS.
only have to read the virtual channel identifier, which is shorter (in number of bits) than the destination address, and thus the level of overhead is reduced.
In case the datagram mode is in use, node A receives packets from station 1, reads the destination address (a field within the packet whose length is longer than the virtual channel identifier), and decides about the following node to use in order to make the packet arrive at the final destination. Note that, in the datagram case, as the network changes dynamically in time, different packets may follow different paths, and the packets may reach the destination out of order. In this case, another layer would have to perform the reordering of packets.*
In both cases, routers make use of routing tables. In the datagram mode, a routing table stores information about the output interface to which packets should be sent in order to reach a certain destination address. In the virtual circuit mode, a routing table stores information about the output interface that corresponds to a certain virtual circuit. Note that the virtual circuit mode allows data to be forwarded faster, but the construction of the routing table is more complex (and requires a higher level of overhead) than in the datagram case. The IP is based on the datagram mode.
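The difference between the two routing-table styles can be made concrete with a small sketch; the node names, destination addresses, and virtual-circuit identifiers below are invented for the illustration.

# Datagram mode: the routing table maps a (long) destination address to an
# output interface, and every packet carries that destination address.
datagram_table = {"station_3": "to_node_B", "station_4": "to_node_D"}

# Virtual circuit mode: the table maps a short virtual-circuit identifier,
# agreed during circuit setup, to an output interface.
virtual_circuit_table = {7: "to_node_B", 12: "to_node_D"}

packet = {"destination": "station_3", "vc_id": 7}

print(datagram_table[packet["destination"]])       # lookup keyed by the full address
print(virtual_circuit_table[packet["vc_id"]])      # lookup keyed by the short identifier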
2.2.1.4 Transport Layer
This layer is responsible for making sure that the end-to-end data delivered by the network layer has the required quality of service (QoS) (reliability, delay, jitter, bandwidth, etc.). In other words, it is responsible for providing the desired service to the upper layers. Depending on the classification of the service provided, there are two different types of connections that influence the provision of this layer's QoS:
· Connection oriented
· Connectionless
A connection-oriented service is a service provided by a layer that comprises three different phases: (1) connection setup, (2) data exchange, and (3) connection termination. A connection-oriented service assures that the packets that reach the receiver follow the transmission order. In addition, it makes use of error control techniques to provide reliability to the delivered data.² Although connection-oriented services bring benefits in terms of data reliability, they demand more processing from both transmitter and receiver, which translates into additional resources and time (e.g., time to establish the connection before sending data, delay due to requests for repetition of packets, and reordering of data at the receiver). On the other hand, a connectionless service is minimalist in terms of processing, but the delay is also minimized. In this mode, there is no need to perform a connection setup before transmission. Reordering of packets is not performed at the receiver.³ Let us consider the IP telephony service. As previously described in Section 1.2, an important characteristic of the voice communication service is that it is delay sensitive, whereas it is not very sensitive to loss (or corruption) of data. Therefore, the use of a connection-oriented transport layer does not seem to be a good idea, as it may introduce delays.§ Therefore, the IP telephony service is normally supported in the connectionless mode, as it minimizes the delay, whereas the errors that may occur are normally not critical for the message to be understood.¶
* Normally, for services that require the reordering of packets, this is performed by the transport layer. Nevertheless, in some cases, this can be done by another upper layer (e.g., the IP telephony service).
² In the TCP/IP stack, the connection-oriented service of the transport layer is implemented using the TCP.
³ In the TCP/IP stack, the connectionless service of the transport layer is implemented using the user datagram protocol (UDP). Moreover, the IP (layer 3) is based on the connectionless mode.
§ In fact, this delay is subject to fluctuations, which is called jitter. The level and variation of the delay depend on the amount of requests for repetition.
¶ Typically, a voice service has an acceptable quality with a bit error rate of the order of 10−3, whereas most other data services require much more reliable data.
Let us now consider a file transfer between two terminals through the network. This service is highly sensitive to loss of data (otherwise the file would be corrupted), whereas it is not very sensitive to delay. It is important to make sure that packets arrive at the destination in the correct order and free of errors. Therefore, it is clear that the service provided by the transport layer should be connection oriented.
In addition to the above-mentioned transport functions, this layer may also offer other functionalities. Let us consider the case where the network layer has a 512 kbps connection established, and where the session layer is requesting a 1024 kbps connection. In this situation, the transport layer may establish two 512 kbps network connections and, in a transparent manner, offer a 1024 kbps connection to the session layer. A similar function may be offered when the maximum NPDU length is lower than the length of the SPDU being delivered by the session layer. In this situation, the transport layer performs the segmentation of the SPDU into two (or more) NPDUs.
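The segmentation function mentioned in the last sentence can be sketched as follows; the 1500-byte maximum length is an assumed figure used only for the example.

# Segmentation of a large SDU handed down by the session layer into several
# smaller units that respect the maximum packet length of the layer below.

def segment(sdu: bytes, max_length: int = 1500) -> list[bytes]:
    return [sdu[i:i + max_length] for i in range(0, len(sdu), max_length)]

spdu = b"x" * 4000                             # an SPDU larger than the maximum packet length
print([len(part) for part in segment(spdu)])   # [1500, 1500, 1000]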
2.2.1.5 Session Layer
This layer provides the mechanisms for setting up, managing, and closing down sessions between end-user application processes. While supported by the transport layer, it consists of requests and responses between applications. Logical sessions need to be properly managed in terms of which station can send data and when. As an example, although the lower layers can be full-duplex, some applications are specified to communicate in half-duplex. Another function of the session layer is the ability to reestablish the connection in case of failure. In this context, synchronism points are inserted such that, in case of failure, the session is reestablished from the last synchronism point correctly processed. Furthermore, the session layer may ask its counterpart (the destination session layer) whether or not the data received before a certain synchronism point has been properly processed.
2.2.1.6 Presentation Layer
This layer is responsible for formatting and delivering data to the application layer for processing. Different workstations internally use different representations of data. However, the data exchanged along a network needs to follow a common representation; otherwise, the communication could not succeed. This definition is performed by the presentation layer. The interconnection of different networks that use different presentation layers requires a gateway. A gateway can be seen as a device that is able to understand two (or more) languages and is able to translate one language into another. It is also worth noting that, because encryption can be viewed as a different way of representing data, it is typically performed by the presentation layer, although it can also be performed by other layers.
2.2.1.7 Application Layer
This layer of the network architecture is responsible for interfacing with the application program of the user. Note the difference between the application layer of a network architecture and an application program. The former has some attributes in the exchange of data in the network, whereas the latter is only a specific application resident in the hosts. These attributes include the definition of fields and the rules of interpretation for these fields. It provides services specific to each kind of application (file transfer, web browsing, e-mail, etc.). As an example, Microsoft Outlook is an e-mail application program, whereas CCITT X.400 is an e-mail application layer protocol that defines the way the functionalities are carried out.
2.2.2 Service Access Point
Each layer has its own addressing format. The purpose of an address is to allow the corresponding layer to identify whether or not a certain host is the destination of a PDU. In addition, the source address allows the identification of the sender of a message circulating in the network.
The address of each layer, referred to as the service access point (SAP), is preceded by a letter corresponding to the layer and is part of the layer's control data. As previously mentioned, a specific layer (N) communicates with the upper layer (N + 1) to offer services, and communicates with the lower layer (N − 1) to use its services. This concept can be seen in Figure 2.7. A SAP can be viewed as the interface between adjacent layers. Note that a layer may communicate with adjacent layers using more than one SAP.
The SAP of layer N is the address of the interface between layer N and layer N + 1. The address between the application layer and the presentation layer is the presentation SAP (PSAP). The address between the presentation layer and the session layer is the session SAP (SSAP). The address between the session layer and the transport layer is the transport SAP (TSAP).* The address between the transport layer and the network layer is the network SAP (NSAP). The same principle applies to the SAPs of the other layers. Note that the physical address is the DLL address used in the interface between the DLL and the network layer.
Returning to the example of Section 2.1, the two post office employees, in Boston and Bristol, respectively, have a virtual circuit between them (dashed lines in Figure 2.2), but this virtual circuit is supported by a lower layer, which is the airplane. Therefore, we may view the airplane as the physical layer (lower layer), the only one that represents a real circuit [Forouzan 2007].
As can be seen from Figure 2.8, the communication between two terminals is performed by different layers. Each layer develops a specific function and has its own protocol.
Note that each upper layer uses the services made available by the lower layer to establish a virtual circuit with its counterpart layer at the destination address. Furthermore, on the transmitter side,² each layer adds specific control data (overhead), essential to allow the message (or part of it) to be forwarded to its counterpart layer at the destination address. On the transmitter side, the lower layer receives the user data and control data from the upper layer and considers all of this data as user data (to which this layer's control data is also added). The control data of a specific layer is only of interest to the corresponding layer at the receiver, and follows a specific protocol depending on the specifications of the layer. The receiver side removes the control data previously added by the corresponding layer at the transmitter and delivers the user data to the upper layer.
FIGURE 2.7 Service access point as the interface between layers.
* In the TCP/IP stack, the TSAP is known as the port number, the NSAP is known as the IP address, and the LSAP is known as the medium access control (MAC) address, hardware address, or physical address.
² Note that each station normally acts simultaneously as a transmitter and as a receiver. Nevertheless, for the sake of simplicity, in this description we assume that one station acts as a transmitter and the other as a receiver. In reality, their functions alternate in time (half-duplex), or both stations may even act simultaneously as transmitter and receiver (full-duplex).
FIGURE 2.8 User data and control data added by the several layers: on the transmitter side, each layer appends its own control data (overhead) to the data received from the layer above before transmission through the network, and the corresponding layer on the receiver side removes it.
The service provided by a layer to its upper layer is defined as a group of elementary services. Each of these elementary services is implemented using service primitives. As can be seen from Figure 2.9, the four basic service primitives considered by the OSI-RM are as follows:
· Request
· Indication
· Response
· Confirm
The request is a service primitive sent by a source upper layer ([N + 1]-layer) to its adjacent lower layer (N-layer), and it is transmitted along the network until it reaches the destination counterpart layer as an indication primitive (passed from the destination N-layer to the destination [N + 1]-layer). In case the service is confirmed,* the response service primitive is sent from the destination upper layer ([N + 1]-layer) to its adjacent lower layer (N-layer), meaning that the indication message was properly received by the destination layer, whereas on the source side this response is delivered from the lower layer (N-layer) to its adjacent upper layer ([N + 1]-layer) in the form of a confirm primitive.
* An example of a confirmed service is the TCP, whereas the UDP is not confirmed. Both of these protocols are layer 4 protocols of the TCP/IP stack.
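The primitive sequence of one confirmed data transfer can be sketched as a simple trace; the function and event names below are illustrative, not part of any standard API.

# Trace of the four OSI service primitives for a single confirmed operation:
# request at the source, indication at the destination, response back at the
# destination, and confirm delivered to the source.

def confirmed_transfer(data):
    return [
        ("source",      "Data request",    data),   # (N+1)-layer -> N-layer
        ("destination", "Data indication", data),   # N-layer -> (N+1)-layer
        ("destination", "Data response",   None),   # (N+1)-layer -> N-layer
        ("source",      "Data confirm",    None),   # N-layer -> (N+1)-layer
    ]

for side, primitive, payload in confirmed_transfer("hello"):
    print(f"{side:12s} {primitive}")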
FIGURE 2.9 Service primitives: the request issued at the source becomes an indication at the destination, and the destination's response returns to the source as a confirm.
As can be seen from Figure 2.10, a connection-oriented service comprises the exchange of the
four service primitives for each of the following elementary operations:
· Connection setup
· Exchange of each group of bits (frame or segment)
· Connection termination
As described in Chapter 1, a connectionless service can be confirmed or nonconfirmed. As can be seen from Figure 2.11, a confirmed connectionless service includes only the service primitives associated with the transmission of data, including the response and confirm primitives, which are necessary for the confirmation of the service. Contrarily, as can be seen from Figure 2.12, a nonconfirmed connectionless service makes use of only the first two primitives (request and indication).
FIGURE 2.10 Service primitives of connection-oriented services: a connect request/indication/response/confirm exchange, followed by a request/indication/response/confirm exchange for each data unit (Data_1 through Data_N), and finally a disconnect request/indication/response/confirm exchange.
FIGURE 2.11 Service primitives of confirmed connectionless services: only data transfer primitives are exchanged (Data_1 through Data_N request, indication, response, and confirm), without connection setup or termination.
FIGURE 2.12 Service primitives of nonconfirmed connectionless services: only the Data_1 through Data_N request and indication primitives are used.
The reader should refer to Chapter 1 for the description of a nonconfirmed service.
2.3 OVERVIEW OF THE TCP/IP ARCHITECTURE
The TCP/IP architecture adopted by the Internet* is the most widely used real implementation of the OSI-RM. Nevertheless, while the basis is the same, there are some differences between these two architectures. While the OSI-RM is a seven-layer architecture, the TCP/IP model is composed of only five layers. This can be seen from Figure 2.13.
Similar to the OSI-RM, layer N of the TCP/IP model uses the services made available by layer N − 1 and provides services to layer N + 1. In addition, layer N on the transmitting side has a virtual circuit with layer N on the receiving side. Naturally, this virtual circuit is established making use of the services provided by the lower layers (the only real circuit is the one established by the physical layer). In addition, Figure 2.14 shows the control data added by each different layer of the TCP/IP on the transmitter side. Note that this control data is removed, in the reverse order, on the receiver side.
* The TCP/IP architecture is, sometimes, also known as the Internet model.
FIGURE 2.13 Comparison between the OSI-RM and the TCP/IP model: the OSI application, presentation, and session layers correspond to the single TCP/IP application layer; the transport layers match; the OSI network layer corresponds to the TCP/IP Internet layer; and both models share the data link and physical layers. (Data from Marques da Silva, M., Multimedia Communications and Networking, 1st edition, CRC Press, Boca Raton, FL, March 2012.)
FIGURE 2.14 Overhead added by the different TCP/IP layers and identification of the message formats: the application layer (layer 5) produces the information data (stream); the transport layer (layer 4) adds the TCP or UDP header to form the segment; the Internet layer (layer 3) adds the IP header to form the datagram; the data link layer (layer 2), split into the logical link control and medium access control sublayers, adds the LLC header (LLC PDU) and the MAC header and trailer (frame); and the physical layer (layer 1) transmits the result.
The TCP/IP application layer includes functionalities assigned to the application, presentation, and session layers of the OSI-RM (see Figure 2.13). In addition, some bibliography refers to the two lower layers of the TCP/IP (the physical layer and the DLL) as the network access layer. In fact, the functionality of these two layers is to provide access to the network.
The following subsections provide a generic description of each of the layers of the TCP/IP architecture.* This description will start with the layer with the highest level of abstraction (the application layer), as it is the closest to the hosts that make use of the network protocol architecture to interchange data with a remote host.
* The reader should refer to Chapters 8 through 13 for a detailed description of each of the layers and their protocols.
The OSI-RM is defined such that each layer can only make use of the services made available by the adjacent lower layer. In the TCP/IP architecture, the level of flexibility is much higher, as any layer may invoke a service of any of the other layers, not only the lower layers but also the upper layers.*
2.3.1 Application Layer
The TCP/IP application layer incorporates most of the functions defined for the three upper layers of the OSI-RM (session, presentation, and application layers). In this case, the application layer deals with all of the issues related to the communication of user processes, whereas the OSI-RM splits these functionalities into three layers.
The reader should refer to the session, presentation, and application layers of the OSI-RM. In addition, the reader should refer to Chapter 8 for a detailed description of the TCP/IP application layer.
2.3.2 Transport Layer
As described for the OSI-RM, this layer is responsible for the provision of QoS. Such functionalities include the implementation and control of service requirements such as reliability of data, delay, jitter, low or high bit rate, and constant or variable bit rate. From Figure 2.14, it is seen that the application layer generates data that is segmented and delivered to the transport layer. The message format of the transport layer of the TCP/IP is called the segment, and it can be of two different types:
· UDP
· TCP
As the TCP is connection oriented, it requires the setup of the connection before the data is exchanged. Making use of error detection and CRC, it provides reliability to the packets delivered. The provision of reliability is performed through the use of error detection (CRC codes) associated with positive acknowledgment with retransmission (PAR) and the sliding window protocol.² In this sense, a receiving station acknowledges the good reception of packets. In case an error is detected, the transmitter's timer reaches the timeout³ without receiving the acknowledgment, and the packet is retransmitted. Note that this successive repetition of packets introduces a variable delay (jitter) in the signals.
In fact, the TCP performs other functions, namely, it assures that:
· Data is delivered with reliability.§
· Packets are received in the correct sequence.
· Packet losses are detected and corrected.
· The duplication of packets is avoided.
Therefore, this mode is ideal for services that require reliable data and that do not present sensitivity to delay or jitter. Figure 2.15 depicts several protocols and technologies that can be used by the different TCP/IP layers. From this figure, it can be seen that some application layer protocols supported by the TCP are the file transfer protocol, telnet, the simple mail transfer protocol (SMTP), and the hypertext transfer protocol.
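The PAR mechanism described above can be reduced to a toy loop: transmit, wait for the acknowledgment, and retransmit the same segment when the timer expires without one. The loss pattern and retry limit below are invented for the example; this is not the real TCP state machine.

# Toy positive acknowledgment with retransmission (PAR): retransmit a segment
# until it is acknowledged, up to a maximum number of attempts.

def send_with_par(segment, deliver, max_tries=5):
    for attempt in range(1, max_tries + 1):
        acked = deliver(segment)          # True if an ACK arrives before the timeout
        if acked:
            return attempt                # delivered after `attempt` transmissions
        # timeout expired without an ACK: fall through and retransmit
    raise RuntimeError("segment not acknowledged after retries")

outcomes = iter([False, True])            # simulate: first ACK lost, second received
print(send_with_par(b"segment 1", lambda s: next(outcomes)))   # prints 2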
* As an example, the open shortest path first (OSPF) of the TCP/IP architecture is a layer 3 protocol that is used to create routing tables. This protocol invokes the UDP, which is a layer 4 protocol.
² Refer to Chapter 12 for a detailed description of the sliding window protocol.
³ The transmitter starts a chronometer (i.e., a timer) whenever a packet is transmitted.
§ In fact, there are limits to the reliability of data. The TCP does not guarantee 100% error-free packets, but it keeps errors at an acceptable level for the more demanding services in terms of error sensitivity.
FIGURE 2.15 Example of protocols and technologies used by the different TCP/IP layers: application layer (FTP, HTTP, SMTP, Telnet, DNS, SNMP, NFS, TFTP); transport layer (TCP, UDP); Internet layer (IP); data link layer (logical link control sublayer: IEEE 802.2, HDLC, PPP, SLIP; medium access control sublayer: IEEE 802.3, IEEE 802.5, IEEE 802.11); physical layer (optical fiber, coaxial, twisted pair, satellite, microwave line-of-sight, radio). (Data from Marques da Silva, M., Multimedia Communications and Networking, 1st edition, CRC Press, Boca Raton, FL, March 2012.)
The UDP transport protocol is connectionless and provides a nonconfirmed service; therefore, the data is exchanged without requiring the preliminary setup of a connection. In this case, the delivery of data to the end station is based on best effort. While not providing data reliability, it presents the advantage of not introducing delay in the signals. Therefore, this is the ideal mode for services that can withstand some level of errors. Such examples are the simple network management protocol (SNMP), the domain name system, or the network file system. Note that although the UDP does not provide reliability, such reliability can be provided by the upper layers, or by means other than error detection and retransmission. The SNMP uses the UDP because the transmitted data is very redundant in time (repeated from time to time). In the case of IP telephony, the reordering of packets is performed by the application layer.
The TSAP of the TCP/IP stack is the concatenation of the IP address with the port number. The port number is a 16-bit address (between 0 and 65,535), referenced using decimal notation.
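A loopback UDP exchange with Python's standard socket module makes the TSAP concrete: each endpoint is identified by the (IP address, port number) pair. The port number 50007 is an arbitrary choice for the example.

import socket

# The TSAP is the (IP address, port number) pair: bind a receiver to one TSAP
# and send a datagram to it from another socket on the same machine.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 50007))                 # TSAP of the receiving process

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 50007))       # destination TSAP

data, source_tsap = receiver.recvfrom(1024)
print(data, source_tsap)                            # b'hello' ('127.0.0.1', <ephemeral port>)

sender.close()
receiver.close()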
The reader should refer to Chapter 9 for a detailed description of the TCP/IP transport layer.
2.3.3 Internet Layer
Similar to the network layer of the OSI-RM, the Internet layer of the TCP/IP model is the most important layer, and it deals with routing issues between different networks. Nevertheless, actions are to be taken by each node (router) of a network.
The IP has been developed and standardized in different versions. This protocol started with version 1. Versions 2 and 3 were defined but were replaced by version 4 (IPv4), which has been the most used version of the IP. Version 5 was developed as a specific protocol optimized for data streams (voice, video, etc.). Finally, the new IP, initially entitled Internet protocol next generation (IPng) during the development phase, was standardized by RFC 2460 (request for comments) as IP version 6 (IPv6). It is worth noting that some authors refer to the Internet layer as the network layer. In this book, these terms are used interchangeably.
In the example of Section 2.1, where Peter sent a letter to Christine, the interconnection of different cities was viewed as the interconnection of LANs, as we can view a city as a LAN. Note that, between two cities, a letter may follow different paths (e.g., a direct flight or a route through a third city). Furthermore, the forwarding of letters within a city (i.e., between houses) was viewed as layer 2 switching.
We have seen in the definition of the network layer of the OSI-RM that it can be based on a datagram or a virtual circuit mode, depending on whether all packets between the source and destination follow the same path (virtual circuit) or, eventually, different paths (datagram). The IP is based on the datagram method, being connectionless. Therefore, because the network changes dynamically in time, different packets may follow different paths, and the packets may reach the destination out of order. In case the reordering of packets is necessary for the service to be supported, another layer (normally the TCP, in layer 4) has to perform it.
In the IP, each node (router) has to decide about the best way to forward packets, and these packets transport information about the destination address. From Figure 2.14, we see that a datagram* is composed of the layer 4 data (segment) plus the IP header. As in the case of the OSI-RM, different control data is added at the source and removed at the destination. The exchange of data along the network depicted in Figure 2.16 is as follows:
· In the source host (leftmost host), the application layer of the TCP/IP stack receives the application data, formats it for transmission, establishes a session with the remote host (rightmost host), segments the data, and transfers it to the transport layer.
· The transport layer receives the data from the application layer, adds the necessary overhead, including the source and destination port addresses (in the case of the TCP, this includes the addition of CRC redundant bits, as well as the sequence number), and transfers it to the Internet layer.
FIGURE 2.16 Example of a network using the TCP/IP: two end hosts, each implementing the full five-layer stack, communicate through two intermediate routers (router 1 and router 2) that implement only the Internet, data link, and physical layers.
* Note that IPv4 designates the NPDU (packet) as a datagram. IPng, known as IPv6, returned to the packet designation.
· The Internet layer of the source host adds the layer 3 overhead, namely the source and
destination IP address, and passes it to the DLL for point-to-point transfer.
· The DLL receives the packet from layer 3 and adds the layer 2 overhead (source and destination MAC address, ags, error control redundant bits,* and other control data). Afterward,
the frame (layer 2 PDU) is transferred to layer 1.
· The physical layer of the source host receives the frame from the upper layer, formats the
bits for transmission² (type of modulation scheme or digital encoding, bit synchronization,
etc.), and starts the transmission of symbols to the leftmost router (being directly connected
to the host).
· The physical layer of the router receives the symbols, performs equalization (if applicable),
converts the symbols into bits, and transfers them to layer 2 of the router.
· The DLL of the router groups the bits into the frames, and checks for errors. If an error is
detected in a frame, there are two possibilities:
· If error correction is used by layer 2, the frame is corrected by layer 2 of the router.
· If error detection and automatic repeat request is used by layer 2, a frame is sent from the
router to the source host, requesting for the retransmission of the corrupted frame. This
frame has to go to layer 1 of the router for transmission. Afterward, the corresponding
bits are received by the host, and passed to layer 2. The host layer 2 understands that
this frame corresponds to a request for repetition, and starts the retransmission of the
frame, being then transferred into layer 1 for transmission. In the router, its layer 1
receives the bits and passes them again to layer 2 for another error check.
· Once the frame is assumed to be correctly received by layer 2 of the router, this layer overhead is removed and the packet is transferred to layer 3.
· The router's layer 3 reads the destination address and consults its routing table. From this
table, it extracts the output interface that should be used to send the packet. In the example
of Figure 2.16, this is the next router. Therefore, the packet is transferred again to layer 2,
where the new control data is added, and passed to layer 1 for transmission. This process
is repeated until the packet reaches the destination host.
· At the destination host (rightmost host of Figure 2.16), the data goes up the several layers
until the transport layer. This layer may have two different procedures:
· In case the TCP is in use, the protocol checks for segment errors. If an error is detected, as the PAR is used by the TCP, it does not acknowledge the reception of the segment, and layer 4 of the source host reaches the timeout and starts the retransmission of such packets. Furthermore, once the packet is correctly received by layer 4 of the destination host, it checks the correct sequence of packets (to avoid the wrong sequence of packets, duplication of packets, or the absence of packets), corrects it (if necessary), removes the layer 4 overhead, and transfers it to the application layer.
· In case the UDP is in use, it removes only the layer 4 overhead and transfers it to the
application layer.
· The application layer reassembles the several packets received from the source host (transferred from the transport layer), makes the necessary conversion of data, and transfers it to
the application process.
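To make the encapsulation and deencapsulation steps above concrete, the following minimal Python sketch nests application data inside a segment, a datagram, and a frame; the field names and address values are purely illustrative and do not correspond to any real protocol implementation.

def encapsulate(app_data: bytes) -> dict:
    # Layer 4 (transport): add port addresses and a sequence number.
    segment = {"src_port": 49152, "dst_port": 80, "seq": 1, "payload": app_data}
    # Layer 3 (Internet): add source and destination IP addresses.
    datagram = {"src_ip": "193.139.18.2", "dst_ip": "197.139.18.2", "payload": segment}
    # Layer 2 (DLL): add MAC addresses and error control redundancy.
    frame = {"src_mac": "00-1f-33-ac-c5-bb", "dst_mac": "00-1f-33-ac-c5-cc",
             "fcs": "crc", "payload": datagram}
    return frame  # layer 1 would then transmit the frame as symbols

def deencapsulate(frame: dict) -> bytes:
    # Each layer removes its own overhead and passes the payload upward.
    return frame["payload"]["payload"]["payload"]

frame = encapsulate(b"GET / HTTP/1.1")
assert deencapsulate(frame) == b"GET / HTTP/1.1"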
The NSAP of the TCP/IP stack is the IP address. The Internet Assigned Numbers Authority (IANA)
is responsible for the global coordination of the IP addressing, including the assignment of IP
*
The TCP/IP does not define the type of error control technique to be used by the DLL. It depends on the data link protocol adopted in each point-to-point connection. It can be based on error detection (e.g., parity bits or CRC) or on error correction (e.g., forward error correction).
²
On the transmitter side, each bit, or group of bits, is encoded into one or more symbols. Conversely, on the receiver side,
symbols are converted into bits. Symbols are transmitted, not bits. Symbols are generated using a modulator or digital
encoder technique.
address groups. As can be seen from Figure 2.17, an IPv4 address is composed of 32 bits, grouped into four octets,* that is, four groups of eight binary digits. Nevertheless, for the sake of simplicity, it is normally displayed as four groups of decimal numbers, using the dotted-decimal notation.
The IPv4 address space is divided into classes, from A to E (see Table 2.1). There are different possible ways to identify the class of an IPv4 address. Performing the conversion of the leftmost octet from decimal into binary, and observing the position of the leftmost zero, from Table 2.1, the address class can be identified. Class A has the leftmost zero in the most significant bit (MSB). Class B has the leftmost zero in the second position, and the MSB is 1. Class C has the leftmost zero in the third position, and the two leftmost bits are 1.
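As a small illustration of that rule (a sketch only, using the prefixes of Table 2.1; the function name is hypothetical), the class can be determined in Python by inspecting the binary form of the leftmost octet:

def ipv4_class(address: str) -> str:
    # Convert the leftmost octet to binary and locate the leftmost zero (Table 2.1).
    bits = format(int(address.split(".")[0]), "08b")
    for cls, prefix in (("A", "0"), ("B", "10"), ("C", "110"), ("D", "1110"), ("E", "11110")):
        if bits.startswith(prefix):
            return cls
    return "outside the ranges of Table 2.1"

print(ipv4_class("173.15.252.3"))  # -> "B" (leftmost octet 173 = 10101101)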
A router is a layer 3 device that is used to interconnect two or more networks. In addition, its two or more interfaces are normally of different types, that is, the DLL protocols in each of the interfaces are different. As an example, a router is normally used to interconnect a domestic Ethernet LAN with a WAN, which allows reaching the Internet service provider (ISP).² The router is connected to adjacent devices using different links³ (at the DLL level). In the example of Figure 2.18, the router has two network interface cards (NIC). Each NIC is connected to one of the networks to which the router is attached. Moreover, each NIC presents a different IP address.
Routing algorithms as well as IPv4 and IPv6 protocols are described in Chapters 10 and 11.
FIGURE 2.17 An example of an IPv4 address in both binary (10101101.00001111.11111100.00000011) and dotted-decimal (173.15.252.3) notation.
TABLE 2.1
Mapping between the Address Class and the Leftmost Octet Value

Class      Binary Range of the Leftmost Octet
Class A    0XXXXXXX
Class B    10XXXXXX
Class C    110XXXXX
Class D    1110XXXX
Class E    11110XXX

FIGURE 2.18 Example of a router with two NICs: NIC 1 connects to a Class A network (holding an IP address belonging to Class A), and NIC 2 connects to a Class B network (holding an IP address belonging to Class B).
*
The theoretical limit of the IPv4 address space is 2^32, corresponding to 4,294,967,296 addresses. Because of the rapid growth of the Internet, the available address space is depleted. IPv6 solves this problem, as its address is composed of 128 bits, which makes a much wider address space available for the Internet world.
²
Using, for example, an ADSL or cable modem.
³
For example, it connects to a LAN using the IEEE 802.3 protocol and it connects to the ISP through a WAN using, for
example, the PPP.
2.3.4 Data Link Layer
As already mentioned for the OSI-RM, this layer refers to point-to-point communication between
devices. The DLL is composed of two sublayers (see Figure 2.19):
· Logical link control (LLC), which deals with flow control and error control. This sublayer aims to mitigate the negative effects of the channel impairments experienced in the physical layer, such as attenuation, noise, interferences, and distortion.
· MAC, which determines when a station is allowed to transmit within a LAN. Note that this sublayer only exists in the case of a LAN (and some types of MAN). When stations share the transmission medium in a LAN, it is said that the access method is with collisions (e.g., Ethernet). In this case, the MAC sublayer is responsible for defining when a station is allowed to transmit in such a way that collisions among transmissions from different stations (which originate errors) are avoided. On the other hand, when stations of a LAN do not share the transmission medium, it is said that the method is without collisions (e.g., token ring).
FIGURE 2.19 Data link layer and its sublayers: the logical link control (LLC) sublayer and the medium access control (MAC) sublayer.
In the example of Section 2.1, the layer 3 switching (routing) is responsible for finding the best path in order to forward the letter between cities. This could be a direct flight, through London, and so on. On the other hand, each elementary link of the full path between the source and the destination can be viewed as a DLL. Each of the links may use a different mode of transportation: car, van, bus, plane, ship, and so on. Moreover, we have seen that a city can be viewed as a LAN, and the distribution of letters within the city can be viewed as a layer 2 switching. Note that end-to-end (layer 3) forwarding is based on the country/city part of the address, whereas the distribution within a city is based on the street/number part of the address. Similarly, the DLL is responsible for the connection between two successive routers, or between a router and a host (even if it is performed through a switch or a hub). Each connection may use a different layer 2 protocol (e.g., HDLC, PPP, and IEEE 802.3*), and a different communication medium² (e.g., satellite, optical fiber, and twisted pair).
*
IEEE 802.3 corresponds to a standardization of the widely used Ethernet technology. This technology was developed by the consortium Digital, Intel, and Xerox. For this reason, Ethernet was also known as DIX. The standard IEEE 802.3 presents some variations relative to Ethernet, being, for this reason, also known as Ethernet II or DIX II. This standard defines the physical layer (type of cable) and the MAC sublayer, and is detailed in Chapter 12.
²
In this case, the communication mediums refer to the physical layer.
In order to allow a better understanding of the differences between layer 2 (DLL) and layer 3 (Internet layer), let us analyze Figure 2.20.
The connection between router 1 and router 2 refers to layer 2 (point-to-point connection). The same applies to the connection between router 1 and the hosts in its LAN (193.139.18.0). If the host with the IP address 193.139.18.2, in the LAN connected to router 1, needs to exchange data with a host connected to router 2 (e.g., 197.139.18.2), the layer 3 protocol is used to forward the packets between the source and the destination. It deals with the routing of packets in intermediate nodes
(router 1 and router 2), based on the destination IP address (197.139.18.2). Packets use different point-to-point connections, with different layer 2 protocols. The LAN connected to router 1 may use the IEEE 802.3/IEEE 802.2 protocols,* the connection between the two routers may use the PPP, and the connection between router 2 and its hosts may be supported on the IEEE 802.5 (token ring)/IEEE 802.2 protocols.
FIGURE 2.20 Layer 2 and layer 3 switching. (Two LANs, 193.139.18.0 and 197.139.18.0, each with several hosts and a switch as the central node, are interconnected by router 1 and router 2 through WAN links, 171.139.0.0 and 191.139.18.0.)
A switch is responsible for the distribution of frames (instead of packets) within a LAN, based on the destination MAC address² (instead of the IP address). Note that the MAC address is composed of 48 bits and represented by six groups of two hexadecimal digits (e.g., 00-1f-33-ac-c5-bb). In fact, a switch learns the MAC address of each device connected to each interface. Every time a frame is received on a certain interface, if that frame's source MAC address is not yet associated with the interface, the mapping is added to the table. Such a MAC address table keeps the mapping between MAC addresses and interfaces.
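A minimal sketch of this learning behavior, assuming a four-port switch and a plain dictionary as the MAC address table (an illustration only, not how any particular switch is implemented):

class LearningSwitch:
    def __init__(self, num_ports: int = 4):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> interface on which it was learned

    def receive(self, interface: int, src_mac: str, dst_mac: str) -> list:
        self.mac_table[src_mac] = interface          # learn the sender's interface
        if dst_mac in self.mac_table:                # known destination: forward to one port
            return [self.mac_table[dst_mac]]
        # Unknown (or broadcast) destination: flood to all other interfaces.
        return [p for p in range(self.num_ports) if p != interface]

sw = LearningSwitch()
print(sw.receive(0, "00-1f-33-ac-c5-bb", "ff-ff-ff-ff-ff-ff"))  # flood -> [1, 2, 3]
print(sw.receive(2, "00-1f-33-ac-c5-cc", "00-1f-33-ac-c5-bb"))  # forward -> [0]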
In addition, before a host or a router sends frames to a certain destination, that host needs to find the MAC address that corresponds to the destination IP address included in the packet. This is required because a packet is encapsulated into a frame for local transmission, and one of the frame's overhead fields is the destination MAC address. Such mapping is listed in the address resolution protocol (ARP) table. If the IP address is not listed in the ARP table, then the ARP procedure needs to be implemented as follows [RFC 826]: when a host has a packet to send or to relay, it tries to find the destination IP address in the ARP table, in order to extract the corresponding MAC address. In case there is no table entry corresponding to such an address, it broadcasts (in the LAN) an ARP packet that contains information about the desired IP address. The station with such an IP address answers with a reply packet, from which the requesting station extracts the MAC address, inserting a new line into the ARP table
*
IEEE 802.2 is an LLC protocol, whereas IEEE 802.3 consists of a MAC protocol.
²
This is also called the physical address or the hardware address.
with the mapping. The entry to this table is kept for a certain period of time. After a certain period without traffic being sent to (or received from) this station, the entry is removed from the table, and the procedure is restarted when required. Figure 2.21 shows an example of an ARP table.

Internet Address     Physical Address     Type
192.168.0.1          00-1f-33-ac-c5-bb    Dynamic
192.168.0.255        ff-ff-ff-ff-ff-ff    Static
224.0.0.23           01-00-5e-27-43-18    Static
239.255.255.251      01-00-5e-14-08-fd    Static
239.255.255.249      01-00-5e-7f-fc-fa    Static
255.255.255.255      ff-ff-ff-ff-ff-ff    Static

FIGURE 2.21 Example of an ARP table.
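The ARP lookup, broadcast, and entry aging just described can be sketched as follows (the timeout value and the send_arp_request stand-in are assumptions chosen for illustration, not values defined by RFC 826):

import time

ARP_TIMEOUT = 120.0      # illustrative aging period, in seconds
arp_table = {}           # IP address -> (MAC address, time of last use)

def send_arp_request(ip: str) -> str:
    # Stand-in for broadcasting an ARP request on the LAN and waiting for the
    # station that owns 'ip' to answer with its MAC address.
    return "00-1f-33-ac-c5-bb"

def resolve(ip: str) -> str:
    entry = arp_table.get(ip)
    if entry is not None and time.time() - entry[1] < ARP_TIMEOUT:
        arp_table[ip] = (entry[0], time.time())      # refresh the entry on use
        return entry[0]
    mac = send_arp_request(ip)                       # not cached (or expired): ask the LAN
    arp_table[ip] = (mac, time.time())
    return mac

print(resolve("192.168.0.1"))   # first call broadcasts; later calls hit the table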
Although most of the LANs being implemented nowadays use a switch as the central node, the IEEE 802.3 protocol was standardized for use with a hub as the central node.* While the switch performs layer 2 switching based on the destination MAC address, the hub simply acts as a repeater: it repeats on all other outputs the bits received in one of its inputs. Therefore, the medium becomes common, and when a station sends data, all other stations in the same LAN receive such data. The carrier sense multiple access with collision detection² is used to define when a host is allowed to transmit within a LAN.
*
This corresponds to the worst-case scenario.
²
This protocol is defined in detail in Chapter 11.
In terms of performance, because the hub broadcasts data through the LAN, the amount of collisions tends to be high for medium to high traffic loads (typically above 20%). On the other hand, the switch mitigates this problem, as it allows having half of the stations transmitting to the other half of the stations in the LAN (considering a half-duplex network).
The DLL is composed of two sublayers: the LLC sublayer, which deals with error control and flow control, and the MAC sublayer, which determines when a station is allowed to transmit and using which format. Note that the MAC sublayer exists only in LANs. In the example of Figure 2.20, the LAN connected to router 1 could be an IEEE 802.3 LAN. In the interconnection between the two routers, because there is only one pair of stations (router 1 and router 2), there is no need to coordinate authorizations to transmit, and therefore, the MAC sublayer does not exist.
The reader should refer to Chapter 12 for a detailed description of several DLL and sublayer
protocols.
2.3.5 Physical Layer
This is the lowest layer of the TCP/IP stack and, as the DLL, it also refers to point-to-point interchange of data. As defined for the OSI-RM, this is the only layer where data is physically moved across the nodes. On the other hand, the other layers only create messages that are passed to lower (on the transmitter side) or to upper (on the receiver side) layers. The type of data interchanged by the physical layer³ is bits. This layer is subject to all impairments of the transmission medium, such as interference, noise, distortion, and attenuation. In addition, this layer deals with transmission parameters (that may mitigate the above-mentioned channel impairments) such as modulation schemes or digital encoding techniques, bandwidth, transmission rate, transmission power, equalization, and advanced receivers to mitigate channel impairments.
The physical layer is only responsible for the transmission of bits, whereas error control is typically provided by layer 2. Nevertheless, in some cases, the physical layer may also adopt some error control techniques. Such an example is the transmission of bits through a wireless medium. In this
case, because of the high probability of error, this layer may adopt the FEC,* which is a type of error correction technique.
³
The physical layer is also referred to as PHY.
The important functionalities of the physical layer include the following:
· Encoding of signals: The way bits are sent over the network. Such functionalities include
the decision about the type of modulation scheme or digital encoding technique to use, and
the voltages and powers.
· Transmission and reception of bits: The choice of the bandwidth to use, the transmission
rate, whether or not an equalizer is adopted on the receiver side, whether or not regenerators are necessary in the transmission path, decision about the use of multiple transmit and
receive antennas, and so on.
· Mechanical specifications: The definition of the type of connectors (such as RJ45, RJ11, and BNC) and cables to use (e.g., UTP twisted pair, STP twisted pair, coaxial cable, and optical fiber).
· Physical topology of the network: The definition of a physical topology to use within a network, such as star, ring, tree, or mesh. It also includes the definition of whether cabling should be half-duplex (e.g., one cable pair) or full-duplex (e.g., two cable pairs).
For a detailed description of the physical layer, the reader should refer to Chapters 3 through 7.
CHAPTER SUMMARY
This chapter provided a view about the network protocol architectures.
We provided an introduction to the network protocol architecture concept, with a view about
its aim, and how encapsulation and deencapsulation of user data are performed. It was described that the network protocol architecture approach only defines what is to be done by each layer, but not how such functionalities are to be implemented, which is the responsibility of each layer's protocol individually. Moreover, it was observed that the network protocol architecture defines the number of layers, what is to be done by each layer, and the interface between different layers. This approach leaves room for a layer to improve, due to technological evolutions or other reasons, without implications for the remaining layers.
An introduction to the OSI-RM was also given. It was observed that this reference model was
created by the ISO. The seven layers of the OSI-RM were described, including the functionalities
performed by each layer. It was observed that this architecture can be split into two groups: the
four lower layers, from the physical up to the transport layer, being responsible for the reliable communication of data between terminal equipment, and the three upper layers, from the session up to
the application layer, with a higher level of logical abstraction, interfacing with the user application.
Moreover, it was observed how service access points are employed to interconnect different layers.
An introduction to the TCP/IP architecture was given. The TCP/IP architecture, adopted by the Internet,
is the most used real implementation of the OSI-RM. Nevertheless, while the basis is the same, there are
some differences between these two architectures. While the OSI-RM is a seven-layer architecture, the
TCP/IP model is only composed of five layers: the application layer, the transport layer, the Internet layer,
the DLL, and the physical layer. It was described that the application layer of the TCP/IP model includes
functionalities assigned to the application, presentation, and session layers of the OSI-RM. Moreover, the
application layer performs the segmentation of a message into multiple streams.
The application layer transfers a stream to the transport layer, which adds its own overhead, namely the TCP or UDP header. The stream received from the application layer, together with the transport layer header, is encapsulated into a segment. The segment is the message format of the transport layer. A similar encapsulation procedure is performed by the lower layers on the transmitter side.
*
The reader should refer to Chapter 12 for a description of the FEC.
The segment is transferred to the Internet layer, added with the IP header, and encapsulated into a datagram. The IP header, among other fields, is composed of the source and destination IP addresses, each composed of 32 bits for IPv4. The datagram is the message format of the IP.
The DLL is composed of two sublayers: the upper sublayer, entitled LLC, and the lower sublayer, entitled MAC. The message format of the LLC sublayer is the LLC PDU, whereas the message format of the MAC sublayer is the frame. The frame header includes, among other fields, the source and destination MAC addresses, that is, the physical or hardware addresses of the sender and receiver. The MAC address is a 48-bit address field. Note that the DLL deals with point-to-point communications.
Finally, the physical layer also refers to point-to-point interchange of data. Similar to the DLL,
this layer refers to a point-to-point link among adjacent network nodes. This is the only layer where
data is physically moved across the nodes. The bits are the type of data interchanged by the physical
layer. The physical layer deals with all impairments of the transmission medium, such as interference, noise, distortion, or attenuation. Moreover, this layer also deals with transmission parameters,
such as modulation schemes, or digital encoding techniques, bandwidth, transmission rate, transmission power, equalization, or advanced receivers to mitigate the channel impairments.
REVIEW QUESTIONS
1. To which layer of the TCP/IP model do TCP and UDP belong?
2. Which OSI-RM layer is responsible for forwarding packets along the several nodes (routers) of the network?
3. Let us consider that a router needs to send packets to a host in a LAN to which it is connected, and that the corresponding MAC address is unknown. Which protocol is used and what is the sequence of packets expected to be exchanged?
4. Which protocol of the TCP/IP architecture ensures that a connection is previously established before data is exchanged, and ensures that the correct sequence of packets is maintained on the receiver side?
5. How can a switch make better use of the LAN bandwidth?
6. What are routing tables used for?
7. What is the difference between a router and a switch?
8. For what purpose is the ARP used?
9. What is the difference between the UDP and TCP? Enumerate services that use either protocol.
10. Which OSI-RM layer is responsible for the end-to-end forwarding of data?
11. What are the advantages of using a network architecture model based on several layers?
What is the most implemented network architecture model based on layers?
LAB EXERCISES
1. Download and install the free network analyzer* Wireshark. Open the application in a
PC and select the interface connected to the Internet. You will see the IP datagrams that
are being exchanged by the NIC of the PC, with the source IP address, destination IP
address, and protocol type, along with other information. Select one of these packets. In a
window that appears at the bottom, visualize the content of the frame, the MAC sublayer,
the IP datagram, and the layer 4 message (e.g., TCP segment). Verify that a segment is
encapsulated into a datagram, and that a datagram is encapsulated into a frame. Verify the
addresses of the different layers, namely the MAC address, the IP address, and the port.
*
A network analyzer is also commonly referred to as a packet sniffer.
3 Channel Impairments
LEARNING OBJECTIVES
· Describe the different channel impairments experienced in telecommunication
systems.
· Define the Shannon capacity.
· Describe the concept of attenuation.
· Describe the different types of noise.
· Describe the effects of distortion and the use of equalization to mitigate it.
· Identify the different sources of interference.
In a communication system, signals are subject to a myriad of impairments that accumulate over
the channel path between the transmitter and the receiver (see Figure 3.1). These signals are used to
allow the exchange of messages between these two parties. For the transmitted signal to be properly
extracted at the receiving side, the received signal must have a signal-to-noise plus interference ratio
(SNIR) higher than a certain threshold. Otherwise, the message cannot be properly understood by
the receiving party.
Several impairments degrade the SNIR. The SNIR degradation occurs in two different ways:
(1) by decreasing the signal level S and (2) by increasing the noise (N) and interference (I) levels.
Attenuation is the factor that originates a decrease in the signal level, whereas an increase in the
noise and interference levels is caused by different factors, namely:
· Different noise sources
· Distortion
· Other interferences
3.1 SHANNON CAPACITY
While in analog communications the degradation of a signal is approximately linear with the decrease in the SNIR level, in the case of digital signals the bits degrade heavily below a certain threshold, originating an abrupt increase in the bit error rate (BER). This can be seen from Figure 3.2, whose curve is valid for the binary phase shift keying (BPSK) and the quadrature phase shift keying (QPSK) modulation schemes.
The acceptable BER threshold depends on the service under consideration (the threshold level for voice is different from that for file transfer). For voice, the tolerated bit error probability is approximately 10^−3, meaning that the BPSK or QPSK modulation requires a minimum of approximately 7 dB of bit signal-to-noise ratio (Eb/N0)* (see Figure 3.2).
*
As detailed in Chapter 6, the BER for BPSK is the same as that for QPSK. However, this is an exception, as for M-QAM modulation schemes, increasing the modulation order M leads to a degradation of the BER. This occurs because the Euclidean distance between constellation points decreases and, consequently, the modulation becomes more sensitive to noise and interferences.
Depending on the source of impairments and whether the signal is analog or digital, there are different measures that can be used to mitigate them. Because digital communication is currently the most used type, the description in this chapter focuses on this type of transmission.
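The curve in Figure 3.2 follows the standard BPSK/QPSK bit error probability in AWGN, Pb = (1/2) erfc(sqrt(Eb/N0)); a quick check of the 7 dB value, assuming this standard expression (which the book only develops in Chapter 6), could be written as:

from math import erfc, sqrt

def bpsk_bit_error_probability(ebn0_db: float) -> float:
    # Pb = 0.5 * erfc(sqrt(Eb/N0)), with Eb/N0 converted from dB to linear units.
    return 0.5 * erfc(sqrt(10 ** (ebn0_db / 10)))

for ebn0_db in (4, 7, 10):
    print(f"Eb/N0 = {ebn0_db:2d} dB -> Pb = {bpsk_bit_error_probability(ebn0_db):.1e}")
# Around 7 dB, the bit error probability drops to roughly 1e-3, as read from Figure 3.2.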
FIGURE 3.1 Generic chain of a communication system: the transmitter sends a signal through the transmission medium, and the receiver obtains an attenuated version of that signal plus noise and interferences.
FIGURE 3.2 Bit error probability for BPSK and QPSK as a function of Eb/N0 (dB).
The capacity limit of any telecommunications system is taken to be the resulting throughput obtained through the full usage of the allowed spectrum. For an Additive White Gaussian Noise channel, Claude Shannon derived, in 1948, the following capacity formula [Shannon 1948]:

C = W \log_2 \left( 1 + \frac{S}{N} \right) \ \text{bps}    (3.1)

This equation provides information about the maximum theoretical rate at which the transfer of information bits* can be achieved, with an acceptable quality, in a certain transmission medium that has a channel bandwidth W (in Hz), power of noise N, and a transmit signal power S (both in watts).
*
Note that the Shannon capacity refers to the maximum rate of information bits, that is, overhead and control bits are not included.
Dividing both sides of the above equation by the channel bandwidth W, we obtain the spectral efficiency [Marques da Silva et al. 2012]:
\frac{C}{W} = \log_2 \left( 1 + \frac{S}{N} \right) \ \text{bit/s/Hz}    (3.2)
This is expressed in bits per second per hertz and gives us an indication of how many bits per second
can be transported in each hertz of the channel bandwidth.
Examining the voice-grade twisted pair, which has a typical channel bandwidth W of 3.1 kHz, and assuming an SNR* of 37 dB,² from Equation 3.1 we conclude that the maximum speed of information bits is approximately 38,103 bps. Therefore, the solution to accommodate higher transmission rates must correspond either to an increase of the available medium channel bandwidth, to an increase of the signal power, or to a decrease of the power of noise and interferences.
*
Equation 3.1, while generic, establishes a relationship between the capacity and the SNR (as it corresponds to S/N). However, the generic noise designation corresponds, in the sense of this equation, to noise plus interference (including distortions, interferences, etc.). In other words, the S/N of Equation 3.1 should be understood as S/(N + I).
²
Because we are dealing with logarithmic units, this is calculated as SNR (dB) = 10 log10(S/N).
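A quick numerical check of this example, using Equation 3.1 directly (a sketch; the function name is arbitrary):

from math import log2

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    # Equation 3.1: C = W * log2(1 + S/N), with the SNR converted from dB.
    return bandwidth_hz * log2(1 + 10 ** (snr_db / 10))

# Voice-grade twisted pair: W = 3.1 kHz and SNR = 37 dB.
print(f"{shannon_capacity_bps(3100, 37):.0f} bps")   # about 38,103 bps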
If multiple transmit and receive antennas are employed,³ the capacity may be raised. If there is a sufficient number of receive antennas, it is possible to resolve all messages, as long as the channel correlation between the antennas is not too high. The pioneering work of Foschini and Gans [1998] established the mathematical roots, from the information theory field, showing that, with multipath propagation, multiple antennas at both the transmitter and the receiver can establish essentially multiple parallel channels that operate simultaneously on the same frequency band and at the same time. In a more practical case of a time variable and randomly fading wireless channel, the capacity is written as follows:
C = W \log_2 \left( 1 + |H|^2 \cdot \frac{S}{N} \right)    (3.3)

where |H|^2 is the normalized channel-power transfer function, and H is an M × N power complex Gaussian amplitude matrix of the channel, where M stands for the number of transmit antennas and N stands for the number of receive antennas. Multiple antenna systems are described in Chapter 7. It is worth noting that, in Equations 3.1 and 3.3, the letter N refers generically to all sources of noise and interference (not only the noise). Therefore, it is important to define all of these impairments that influence this parameter. Depending on the system and transmission medium, there are different factors, which are defined in the following sections.
³
Multiple antennas are adopted by the air interface of the IEEE 802.11n and IEEE 802.11ac standards, the WiMAX standard, long-term evolution (LTE) cellular systems, and so on.
3.2 NYQUIST SAMPLING THEOREM
The Nyquist sampling theorem states that the minimum sampling rate used in the digitization process that assures a distortionless digital signal is given by

f_{\min} = 2 \times B_{\max}    (3.4)

where B_max corresponds to the maximum frequency component present in the signal. Using a sampling rate equal to or higher than f_min in the digitization process, it is assured that the analog signal reconstructed from its digital samples is not subject to a specific type of distortion, entitled aliasing. Let us suppose that there is a voice signal, with frequency components from 300 Hz up to 3.4 kHz. According to the Nyquist sampling theorem, these signals must be digitized with a sampling rate equal to or higher than f_min = 2 × 3.4 kHz = 6.8 kHz.
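A one-line check of the voice example, using Equation 3.4 (sketch only):

def nyquist_min_sampling_rate_hz(b_max_hz: float) -> float:
    # Equation 3.4: the sampling rate must be at least twice the maximum
    # frequency component of the signal to avoid aliasing.
    return 2 * b_max_hz

print(nyquist_min_sampling_rate_hz(3.4e3))   # 6800.0 Hz, i.e., 6.8 kHz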
3.3 ATTENUATION
The propagation of a signal in any transmission medium is subject to attenuation. This effect corresponds
to a decrease in the signal strength. The level of attenuation depends on the medium, but, as a rule of
thumb, it increases with the distance at a variable scale. Figure 3.3 shows an example of a transmitted
signal and the corresponding attenuated signal, as a result of the propagation losses through a medium.
As an example, let us consider the propagation of electromagnetic waves in a free space. In this
case, the attenuation is due to the path loss, being known as free space path loss (FSPL). Therefore,
the propagation can be viewed as a sphere, whose surface area increases with the increase of the
distance from the transmit antenna. While the power can be considered as constant, because it
spreads out over the surface area of the sphere, and because its radius corresponds to the propagation distance, its power spatial density decreases with the distance.
From Chapter 5, it is seen that the FSPL is quantified by

\text{FSPL} = \left( \frac{4 \pi d f}{c} \right)^2   [Proakis 1995]

where
d stands for the distance from the transmitter, f represents the frequency, and c denotes the speed of
light. Therefore, in the special case of electromagnetic waves, and assuming free space propagation, the attenuation increases with the square of the distance. In the case of a coaxial cable, the
dependence of the distance is different, as the attenuation tends to increase with the increase of the
logarithm of the distance.
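The quadratic dependence on distance can be verified numerically with the FSPL expression above; the 1 km and 2.4 GHz values below are arbitrary examples, not taken from the text:

from math import pi, log10

SPEED_OF_LIGHT = 3e8   # m/s

def fspl_db(distance_m: float, frequency_hz: float) -> float:
    # FSPL = (4*pi*d*f/c)^2, expressed here in dB as 20*log10(4*pi*d*f/c).
    return 20 * log10(4 * pi * distance_m * frequency_hz / SPEED_OF_LIGHT)

print(f"{fspl_db(1e3, 2.4e9):.1f} dB")   # about 100.1 dB at 1 km
print(f"{fspl_db(2e3, 2.4e9):.1f} dB")   # about 106.1 dB; doubling d adds about 6 dB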
Moreover, as a rule of thumb, the attenuation increases with the frequency of the signal being
transmitted. This is generically applicable to most of the transmission media.
Note that the attenuation depends on the type of propagation between the transmitter and the receiver.
Electromagnetic propagation is normally composed of several paths, namely a direct line of sight, and several reflected, diffracted, and scattered waves (e.g., in buildings and trees) [Burrows 1949]. As a result, in real propagation scenarios (other than free space), the attenuation depends on distance at a higher rate than its square. Normally, an exponent between 3 and 5 is experienced in scenarios subject to shadowing and multipath.* Consequently, we conclude that the attenuation experienced in real scenarios is higher than the FSPL. This results from the three basic propagation effects [Theodore 1996]: (1) reflection, (2) diffraction, and (3) scattering. These propagation effects are characterized in Chapter 5.
Having a signal subject to attenuation, there is a need to increase its level such that detection is possible at the receiver. For detection to be possible, the received signal needs to be above the receiver's sensitivity threshold. The signal level is increased by an amplifier at the receiver side. Note that, similarly, the signal at the receiver's input must be above the amplifier's sensitivity threshold, or otherwise the signal cannot be amplified. Therefore, for long distances, there is a need to use amplifiers in the transmission path, before the distance attenuation is too high and before the signal falls below the amplifier's and receiver's sensitivity thresholds.
Because this device amplifies its input signal, which is composed of signal plus noise and interferences, amplifying this signal does not add any value in terms of SNR gain.²
FIGURE 3.3 Transmitted signal and attenuated version caused by the propagation channel.
*
The shadowing effect is detailed in Chapter 5. The multipath effect is detailed later in Section 3.6.1.
²
It amplifies the signal, the noise, and the interferences present at the input, leaving the SNR unchanged.
Moreover, an amplifier also introduces additional noise, leading to a noise factor* higher than 1. This means that the SNR suffers a degradation after the amplification process.
An important advantage of digital signals relies on the ability to implement regeneration, which is a more effective process than amplification, as it allows a gain in terms of SNR. A regenerator² includes a detector followed by an amplifier. Therefore, inserting regenerators along the transmission chain allows recovering the original signal (performed by its detector) before the signal is amplified. This enables partially removing the negative effects of noise and interferences, and thus tends to lead to an improvement in the SNR value. In addition, as in the case of the amplifier, a regenerator has to be placed in locations along the transmission chain such that the distance attenuation does not degrade the signal level below the regenerator's sensitivity threshold. As an example, note that synchronous digital hierarchy (SDH) networks make use of regenerators, typically every 60 km of optical fiber.
3.4 NOISE SOURCES
One of the most important impairments of a telecommunication medium is noise. Noise is always present, with higher or lower intensity. A receiver detects the desired (attenuated) signal superimposed with noise and interferences. Therefore, noise can be defined as unwanted impairments superimposed on a desired signal, which tend to obscure its information content.
As previously described, a signal needs to be received with an SNR higher than a certain threshold to allow a good service quality.
Figure 3.4 shows an attenuated signal (on the left), due to the distance attenuation, equal to the
one considered in Figure 3.3. Because this signal does not present any kind of noise or interferences,
its SNR is infinite. This signal is then received together with the noise present at the receiver's
antenna location (signal on the right). The resulting SNR is now degraded. Although the noise has
been added to the signal, because its power is not too high, we observe that the shape of the envelope
is similar to the original signal without noise.
A receiver uses a certain instant within the pulse duration to perform sampling. Based on the
sampled signal at the sampling instant, a decision is made about whether the received signal is
assumed as a symbol +1 or −1.³ In this case, the hard decision should be as follows: if the sampled
signal is above 0, it is assumed as +1; otherwise (with the sampled signal below 0), it is assumed that
the received symbol is a −1 (as the estimated transmitted symbol).
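A small numerical sketch of this hard-decision rule (arbitrary noise powers, only meant to mirror the situations of Figures 3.4 and 3.5):

import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, +1.0], size=10_000)          # transmitted symbols

for label, noise_std in (("low power noise", 0.3), ("high power noise", 1.0)):
    received = symbols + noise_std * rng.standard_normal(symbols.size)
    decided = np.where(received >= 0, +1.0, -1.0)        # hard decision at threshold 0
    print(label, "-> symbol error rate:", np.mean(decided != symbols))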
FIGURE 3.4 Addition of low power noise to an attenuated signal (possible received signal).
*
The noise factor is defined by Equation 3.10.
²
A regenerator is also known as a repeater.
³
In this example, we assume the use of amplitude shift keying (ASK) as defined in Chapter 6. A +1 level may correspond to a logic state 1, whereas a −1 level may correspond to a logic state 0. In fact, the +1 and −1 levels can be any value, depending on the transmitting power.
Figure 3.5 presents the same signals, but the plot on the right includes a signal subject to a stronger noise power. Note that, although the transmitted signal between instants 1 and 2 was +1, depending on the exact sampling instant, the sampled received signal can be decided as a −1 (because the sampled value can be below zero, which is assumed as the decision threshold). In such a situation, a symbol error occurs. This is because, for the same signal level as the one in Figure 3.4, the noise power is much more intense, resulting in a lower SNR value. In the case of digital signals, the resulting bit error probability becomes higher, resulting in a degraded signal.
FIGURE 3.5 Addition of high power noise to an attenuated signal (possible received signal).
Depending on its sources, noise can be of different types. The most important types of noise can
be grouped as follows:
· External noise:
· Atmospheric
· Man-made
· Extraterrestrial noise
· Internal noise:
· Thermal
· Electronic
The total noise power, resulting from all different noise sources, is seen at the receiver's detector.
The different types of noise are defined in the following.
3.4.1 Atmospheric Noise
Atmospheric noise consists of an electromagnetic disturbance caused by natural atmospheric phenomena, such as lightning discharges in thunderstorms. It consists of cloud-to-ground and cloud-to-cloud flashes. While more intense in tropical regions, each flash consists of a high-power and low-duration current discharge, resulting in a high-power electromagnetic impairment. These flashes occur approximately 100 times a second, on a worldwide scale, and the sum of all these flashes results in the random atmospheric noise.
In an area surrounding thunderstorms, the noise presents an impulsive profile (i.e., very low
duration but high intensity). Because the pulse is very narrow in the time domain, its bandwidth is
very wide. This means that the noise is experienced by many nearby communication systems that
make use of different parts of the electromagnetic spectrum.
The combination of all distant thunderstorms (low-duration pulses) results in white noise* that is
felt, at distant locations, with continuity over time but with a lower power level. Its power varies with
the season and proximity of thunderstorm centers. In addition, because this phenomenon is more
frequent in tropical regions, the atmospheric noise tends to decrease with the increase of the latitude.
As described by the FSPL equation (Section 3.3), electromagnetic attenuation increases with frequency. Consequently, the higher frequency components of the noise are subject to higher attenuation
levels. This is the reason why atmospheric noise is felt at a long distance with higher power at lower
frequencies, and with lower power at higher frequencies. Consequently, the atmospheric noise dominates
at the VLF and LF bands (frequency bands are defined in Table 3.1). This can be seen from Figure 3.7.
*
White noise presents a constant power spectral density (PSD).
TABLE 3.1
Frequency Bands

Band   Designation                Typical Use
VLF    Very low frequency         Navigation, maritime, and submarine communications
LF     Low frequency              Radio broadcast and submarine communications
MF     Medium frequency           Radio broadcast and maritime communications
HF     High frequency             Radio broadcast and maritime communications
VHF    Very high frequency        Radio and TV broadcast and maritime communications
UHF    Ultrahigh frequency        TV broadcast, maritime, and cellular communications
SHF    Superhigh frequency        Satellite and microwave communications
EHF    Extremely high frequency   Satellite communications
Moreover, as described in Chapter 5 for the groundwave propagation, at low frequencies,
electromagnetic waves with horizontal polarization experience higher attenuation levels than vertically polarized waves. Consequently, vertically polarized atmospheric noise tends to be more
intense than horizontally polarized noise.
3.4.2 Man-Made Noise
Man-made noise is electromagnetic, being caused by human activity, namely by the use of electrical
equipment, such as car ignitions, domestic equipment, or vehicles. The intensity of this kind of noise
varies substantially with the region. Urban man-made noise tends to be more intense than rural
noise. This noise is characterized by the emission of low-duration and high-power pulses, when
the corresponding source is activated (e.g., when the car ignition is activated). Figure 3.6 shows the
typical man-made noise along the frequency spectrum, for different environments (business, residential, rural, and quiet rural). While very intense in the HF band, its noise figure (Fam) decreases at higher frequencies.
3.4.3 Extraterrestrial Noise
Extraterrestrial noise is a type of electromagnetic noise that comes from certain limited zones of
the cosmos and galaxies.
Extraterrestrial noise is also known as galactic noise and solar noise. An antenna directed toward
certain regions of the sky, such as the sun or other celestial objects, may experience powerful wideband noise. Note that this type of noise depends on the relative orientation of the antenna's radiation pattern. This orientation varies along the day because of the earth's rotation, and therefore,
attention needs to be paid to a sudden increase of noise experienced by some types of stations,
such as by a satellite earth station. The pattern of extraterrestrial (galactic) noise can be seen from
Figure 3.7.
FIGURE 3.6 Noise factor, Fam (dB), as a function of frequency, f (MHz), for (A) business, (B) residential, (C) rural, (D) quiet rural, and (E) galactic environments. (Data from Lawrence, D.C., CCIR Report 322 Noise Variation Parameters, Technical Document 2813, June 1995.)
FIGURE 3.7 Comparison of noise figure, Fa (dB), and noise temperature, ta (K), as a function of frequency (Hz), for different types of electromagnetic noise: (A) percentile 0.5 of atmospheric noise, (B) percentile 0.5 of atmospheric noise, (C) median man-made noise for business, (D) median man-made noise for galactic, and (E) median man-made noise for rural. (Data from Lawrence, D.C., CCIR Report 322 Noise Variation Parameters, Technical Document 2813, June 1995.)
Figure 3.7 shows different electromagnetic noise contributions in different environments. From this figure, it is noticeable that atmospheric noise dominates at the VLF and LF bands,
whereas man-made noise is more intense in the HF band.
Because noise is a random process, its measure is normally performed using statistical tools.
Therefore, noise is normally expressed in percentile. As an example, percentile 10 is a value characterized by having 90% of the samples with a value above this percentile 10 value, and having 10%
of the samples with a value below the percentile 10 value. Noise may also be expressed in median.
Median corresponds to the percentile 50 (50% of the samples are above the median value, and the
other 50% of the samples below).
3.4.4 Thermal Noise
Thermal noise is experienced inside electrical conductors (wires, electrolytes, resistors, etc.), being caused by the thermal agitation of charges at the amplifier's input resistance. In the case of radio communications, thermal noise presents a wide variation of amplitude, depending on the temperature viewed by the receive antenna.
The frequency profile of thermal noise presents a PSD that is approximately constant along the frequency spectrum, that is, thermal noise is approximately white.
The noise power P_n captured by the amplifier's input resistance is given by [Carlson 1986]

P_n = k_B T_n B    (3.5)

where:
k_B is the Boltzmann constant, with k_B = 1.3806503 × 10^{-23} J K^{-1} (expressed in Joules per Kelvin)
T_n is the resistor's absolute temperature (expressed in Kelvin)
B is the receiver's bandwidth (expressed in Hertz)

Note that, in statistical terms, the noise power P_n corresponds to the noise variance σ².
The PSD N_0 corresponds to the noise power divided by the receiver's bandwidth B, being given by

N_0 = P_n / B = k_B T_n    (3.6)

The root mean square (RMS) voltage of thermal noise, generated in a given amplifier's input resistance R (expressed in ohm), is given by

v_n = \sqrt{4 k_B T_n R B}    (3.7)
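As a numerical illustration of Equations 3.5 through 3.7 (a sketch only; the 300 K, 1 MHz, and 50 ohm values are arbitrary examples):

from math import sqrt, log10

K_B = 1.3806503e-23   # Boltzmann constant (J/K)

def thermal_noise_power_w(t_kelvin: float, bandwidth_hz: float) -> float:
    return K_B * t_kelvin * bandwidth_hz                      # Equation 3.5

def thermal_noise_rms_voltage(t_kelvin: float, r_ohm: float, bandwidth_hz: float) -> float:
    return sqrt(4 * K_B * t_kelvin * r_ohm * bandwidth_hz)    # Equation 3.7

p_n = thermal_noise_power_w(300, 1e6)
print(f"P_n = {p_n:.2e} W  ({10 * log10(p_n / 1e-3):.1f} dBm)")
print(f"v_n = {thermal_noise_rms_voltage(300, 50, 1e6) * 1e6:.2f} microvolt (R = 50 ohm)")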
In the case of wireless communications, the value of T_n captured by the amplifier's input resistance depends on the orientation of the antenna's radiation pattern. The thermal noise of a satellite earth station pointing toward the sky is typically very low,* as the temperature of the sky is also low (200 K > T > 80 K). On the other hand, the thermal noise of a satellite transponder is typically high, as it is pointing toward the earth, whose temperature is also high (T > 300 K).
*
Except if it is pointing toward an extraterrestrial object, such as the sun.
Another way to express the noise level is using the noise factor coefficient f_a, which is defined by

f_a = \frac{P_n}{k_B T_0 B}    (3.8)

The noise factor is defined as the ratio between the received noise power and the noise power delivered by a charge with the reference noise temperature T_0 of 300 K. Expressing this value in logarithmic units leads us to the noise figure,² defined by

F_a = 10 \log_{10}(f_a)    (3.9)
²
Note that Figures 3.6 and 3.7 express noise using the noise figure notation.
3.4.5 Electronic Noise
Electronic noise is a type of internal noise generated in active elements (e.g., transistors) in the interior of active equipment, namely in amplifiers or in active filters. As a consequence, the SNR at the output of active equipment is lower than that at its input. Similar to thermal noise, the electronic noise level is typically quantified by the noise factor as follows:
f_a = \frac{\text{SNR}_\text{IN}}{\text{SNR}_\text{OUT}}    (3.10)

Alternatively, this can also be expressed in logarithmic units using Equation 3.9. Using this equivalence, from f_a, and knowing the received SNR at the input of a satellite transponder (SNR_IN), the computation of the SNR at its output (SNR_OUT) is straightforward.
In the case of a cascade of N electronic devices (e.g., amplifiers and filters), the resulting noise factor f_OUT is

f_\text{OUT} = f_1 + \frac{f_2 - 1}{g_1} + \frac{f_3 - 1}{g_1 g_2} + \cdots + \frac{f_N - 1}{g_1 g_2 \cdots g_{N-1}}    (3.11)

where:
g_i stands for the gain of the ith electronic device
f_i, i = 1, ..., N, stands for the noise factor of the ith electronic device

Still in the case of a cascade of N electronic devices, the resulting SNR_OUT is

\text{SNR}_\text{OUT} = \frac{S_\text{OUT}}{N_\text{OUT}} = \frac{g_\text{OUT} S_\text{IN}}{g_\text{OUT} f_\text{OUT} N_\text{IN}} = \frac{S_\text{IN}}{f_\text{OUT} N_\text{IN}}    (3.12)

where:
g_\text{OUT} = \prod_{i=1}^{N} g_i is the overall gain of the cascade
f_OUT is as defined by Equation 3.11

Using Equation 3.12, we reach an equivalence similar to Equation 3.10, but for a cascade of electronic devices,

f_\text{OUT} = \frac{\text{SNR}_\text{IN}}{\text{SNR}_\text{OUT}}    (3.13)

This allows us to compute the resulting SNR_OUT from SNR_IN and from f_OUT, for a cascade of electronic devices.
Note that other types of noise may also be viewed as noise generated in electronic devices, where the device gain should correspond to, for example, the path loss attenuation, and the noise figure of the device is, for example, the noise figure of the thermal noise. Therefore, with such an approach, one could use Equation 3.11 to compute f_OUT and then use Equation 3.12 to compute SNR_OUT from the initial SNR_IN, that is, without computing all intermediate SNRs.
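The cascade formulas can be applied numerically as follows (a sketch; the two-stage gains and noise figures are arbitrary example values, not taken from the text):

from math import log10

def cascade_noise_factor(stages) -> float:
    # Equation 3.11: 'stages' is a list of (noise factor, gain) pairs in linear units.
    f_out, g_total = 0.0, 1.0
    for i, (f_i, g_i) in enumerate(stages):
        f_out += f_i if i == 0 else (f_i - 1.0) / g_total
        g_total *= g_i
    return f_out

db = lambda x: 10 ** (x / 10)                      # dB -> linear units
stages = [(db(2.0), db(15.0)),                     # stage 1: NF = 2 dB, gain = 15 dB
          (db(8.0), db(-3.0))]                     # stage 2: NF = 8 dB, gain = -3 dB
f_out = cascade_noise_factor(stages)
print(f"cascade noise figure = {10 * log10(f_out):.2f} dB")
print(f"SNR_OUT = {20.0 - 10 * log10(f_out):.2f} dB for SNR_IN = 20 dB   (Equation 3.13)")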
3.5 INFLUENCE OF THE TRANSMISSION CHANNEL
An ideal transmission channel would be the one in which the received signal is equal to the transmitted one. Nevertheless, it is known that real channels introduce an attenuation and a phase shift
to signals. When the attenuation is constant over the signal's frequency components (i.e., over the
entire signal's bandwidth), it is said that the channel does not introduce amplitude distortion to signals. Similarly, when the phase shift is linear over the signal's frequency components, it is said that the channel does not introduce phase distortion to signals. In these cases, only the amplification process can be used before detection.
Nevertheless, very often the channel introduces different attenuations and nonlinear phase shifts at different frequency components of signals. This is more visible when the signal bandwidth is relatively high.* In this case, the equalization process may also be required at the receiver side before amplification and detection.
*
Namely higher than the coherence bandwidth of the channel.
The channel's frequency response H(f), also known as the channel's transfer function, translates mathematically the way the channel processes the different frequency components of the signal that crosses it, both in terms of the signal's attenuation and phase shift. Note that H(f) corresponds to the Fourier transform of the channel's impulse response h(t).
The signal at the output of the channel is given by

V_R(f) = H(f) \cdot V_E(f)    (3.14)

where:
V_E(f) stands for the Fourier transform of the transmitted time domain signal v_E(t)
V_R(f) stands for the Fourier transform of the received time domain signal v_R(t) (see Figure 3.8)

Appendix I presents a short description of the Fourier transform theory.
3.5.1 Delay and Phase Shift
Let us consider a carrier-modulated signal in the carrier frequency f_c, being propagated through the transmission medium between a transmitter and a receiver. The phase θ of the signal varies by 2π radians for every propagation distance corresponding to a wavelength λ (see Figure 3.9). Moreover, the propagation delay τ increases by τ = 1/f_c for every propagation distance corresponding to a wavelength (f_c is the carrier frequency and τ is the carrier period). Similarly, the phase shift at the instant t = τ is given by θ = 2π f_c τ.
As the propagation distance increases, the phase shift and the delay vary accordingly. In fact, the received signal is composed of a superposition of components, including direct, reflected, diffracted, and scattered components.
FIGURE 3.8 Generic communication system with the signals depicted in the time domain: the transmitted signal v_E(t) crosses the channel with impulse response h(t), resulting in the received signal v_R(t).
FIGURE 3.9 Sinusoidal wave and its characteristics (wavelength λ and phase, in multiples of π).
Because each of these elementary waves experiences a different distance,* each component arrives at the receive antenna with a different phase shift and delay before they are superimposed.
As can be seen from Figure 3.10, the ideal² phase shift response of a channel corresponds to a slope, with a constant gradient across different frequencies.³ Note that the time delay is related to the phase shift by τ(f) = −(1/(2π)) · dθ(f)/df, where both τ(f) and θ(f) are a function of the frequency f [Marques da Silva et al. 2010]. As a result, the superposition of waves from different paths
results in either constructive or destructive interference, amplifying§ or attenuating the signal power
seen at the receiver (relating to the line-of-sight wave propagation). The variation of the signal level
received from one or more multipaths originates a variation of the resulting superimposed signal
level known as fading.
Assuming a variation of distance between a transmit and a receive antenna, it is observed that the envelope of the received signal level presents a cyclic period for every λ/2 of distance variation. In fact, this fluctuation of the signal level across distance can be viewed in the frequency spectrum as distortion, which is explained in the following sections.
3.5.2 Distortion
Transmitted signals are not composed of a single frequency. On the contrary, signals are composed of a myriad of frequency components, presenting a certain bandwidth. As an example, the audible spectrum spans from around 20 Hz up to around 20 kHz.
As previously described, transmission media tend to introduce different attenuations at different frequencies. As a rule of thumb, the attenuation level tends to increase with an increase in the
frequency. Moreover, channels tend to introduce different delays¶ and nonlinear phase shifts at different frequency components. Therefore, the signal after propagation through a medium is subject to
FIGURE 3.10 Ideal channel's frequency response (distortionless): a flat attenuation response and a phase shift response with a constant slope, as a function of frequency.
*
Note that a reflected wave experiences a path longer than the direct wave.
²
Which does not distort signals.
³
A phase shift profile defined by a curve introduces distortion.
§
Relating to the line-of-sight version of the signal.
¶
As a result of different delays at different frequency components.
different attenuations and nonlinear phase shifts at different frequency components, and hence, the
received signal is different from the transmitted one. This effect is known as distortion* (Figure 3.11).
3.5.3 Equalization
It was described in the previous subsection that channels tend to introduce higher attenuation at higher frequency components of the signal, whose effect is known as distortion. In this scenario, the equalization process aims to mitigate the negative effects of distortion by introducing higher amplification gains at higher frequency components of the signal, and lower amplification gains at lower frequency components of the signal (see Figure 3.12). Ideally, this results in a signal equally attenuated after the combined system composed of the propagation channel plus the equalization process.
The same principle is applicable to the phase shift introduced by the channel, that is, the equalizer aims to transform the nonlinear phase shift introduced by the channel into a linear phase shift
introduced by the combined system composed of the propagation channel and the equalizer.
The equalizer is normally a part of the receiver, being especially important for signals with
higher bandwidths. Note that, from Equation 3.22, higher digital transmission rates correspond to
higher bandwidths. A frequency response² of a channel is approximately constant for a very narrowband signal (low transmission rate). In the case of a very narrowband signal, its frequency span
tends to zero. In this case, the curve of the channel's frequency response tends to its tangent at the
reference point. On the other hand, a wideband signal (high transmission rate) suffers heavily from
distortion, as the signal spans over a wide bandwidth. Therefore, increasing the transmission rate
demands higher processing from the receiver to perform the required equalization, and this processing is never optimum, resulting in some residual level of distortion.
*
Note that distortion may also be introduced by devices, such as amplifiers and filters.
²
A system's frequency response is a graphic that shows the attenuations and phase shifts introduced by the system at different frequency components.
FIGURE 3.11 Example of a channel's frequency response that introduces distortion (channel attenuation response and channel phase shift response, as a function of frequency).
FIGURE 3.12 Example of a channel's frequency response (attenuation coefficient, in dB/km, as a function of frequency), with the corresponding ideal equalizer and the resulting combined system (channel + equalizer).
Figure 3.13 shows an example of the attenuation introduced by a channel, as a function of the frequency. Ideally, the equalizer's frequency response should be such that the attenuation of the combined system that results from cascading the channel and the equalizer is a straight line, that is, a constant attenuation over the bandwidth of interest (absence of distortion). Note that this figure only depicts the attenuation as a function of the frequency, but the same principle is applicable to phase shift, where the ideal phase shift response is a straight line, instead of a curve (see Figure 3.10).
FIGURE 3.13 Example of a frequency response of a channel and its corresponding ideal equalizer (attenuation, in dB, as a function of frequency; the ideal combined system, channel + equalizer, is flat).
From the description of a signal from Section 3.5, it is known that the transfer function of a zero forcing (ZF) equalizer is given by
Eq_\text{ZF}(f) = \frac{1}{H(f)}    (3.15)

where H(f) stands for the channel transfer function. Similarly, the transfer function of a minimum mean square error (MMSE) equalizer is given by

Eq_\text{MMSE}(f) = \frac{H^{*}(f)}{|H(f)|^2 + 1/\text{SNR}}    (3.16)

where Eq(f) stands for the equalizer's transfer function, that is, the Fourier transform of the equalizer's impulse response.
Figure 3.14 depicts the concatenation of the channel and the equalization (part of the receiver). Note that X(f) represents the Fourier transform of the transmitted signal x(t).
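The two equalizers can be compared numerically in the frequency domain; the sketch below assumes an arbitrary two-tap channel, a circular-convolution channel model, and an SNR of 20 dB (all illustrative choices, not parameters from the text):

import numpy as np

rng = np.random.default_rng(1)
n = 64
x = rng.choice([-1.0, +1.0], size=n)                       # transmitted symbols
H = np.fft.fft(np.array([1.0, 0.5]), n)                    # channel transfer function H(f)
snr = 10 ** (20 / 10)                                      # assumed SNR (linear units)

noise = rng.standard_normal(n) / np.sqrt(2 * snr)
Y = H * np.fft.fft(x) + np.fft.fft(noise)                  # received signal, as in Equation 3.14

eq_zf = 1.0 / H                                            # Equation 3.15
eq_mmse = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)        # Equation 3.16

for name, eq in (("ZF", eq_zf), ("MMSE", eq_mmse)):
    x_hat = np.where(np.real(np.fft.ifft(eq * Y)) >= 0, 1.0, -1.0)
    print(name, "symbol errors:", int(np.sum(x_hat != x)))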
An equalizer requires being tuned to the channel. This is normally performed using pilots or
training sequences. A pilot or training sequence consists of using a predefined signal (sequence
of known symbols), which is periodically transmitted. Since the receiver has knowledge about
the transmitted signal, it computes the difference between the transmitted signal and the received
one. This difference is a function of the distortion introduced by the channel. From this received
pilot sequence, the receiver may extract the channel coefficients (in terms of attenuations and
phase shifts at different frequencies), which are then utilized to implement the equalization
process.
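To make Equations 3.15 and 3.16 concrete, the sketch below (a minimal illustration, not part of the original text) estimates the channel frequency response from a known pilot sequence and then applies both a zero forcing and an MMSE equalizer; the pilot length, channel taps, and SNR are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example values: a short multipath channel and a QPSK-like pilot sequence.
N = 256
pilot = (2 * rng.integers(0, 2, N) - 1) + 1j * (2 * rng.integers(0, 2, N) - 1)
h = np.array([1.0, 0.45 + 0.2j, 0.25j])      # illustrative channel impulse response
snr = 10 ** (20.0 / 10)                      # 20 dB SNR (assumption)

# Received pilot: circular convolution with the channel plus complex AWGN
# (a cyclic-prefix-like simplification).
H = np.fft.fft(h, N)
rx = np.fft.ifft(np.fft.fft(pilot) * H)
rx += (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * snr)

# Channel estimation from the received pilot (the receiver knows the transmitted pilot).
H_est = np.fft.fft(rx) / np.fft.fft(pilot)

Eq_zf = 1.0 / H_est                                        # Equation 3.15
Eq_mmse = np.conj(H_est) / (np.abs(H_est) ** 2 + 1 / snr)  # Equation 3.16

for name, Eq in (("ZF", Eq_zf), ("MMSE", Eq_mmse)):
    out = np.fft.ifft(np.fft.fft(rx) * Eq)
    print(name, "residual error:", np.mean(np.abs(out - pilot) ** 2))
```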
As previously described, a wireless channel is typically subject to fluctuations (fading).
This signal variation is the result of variations of the distance between transmit and receive
antennas, variation of the environment surrounding the receiver, variations of the refraction
index, and so on.
Note that the channel may present flat fading or frequency selective fading. In the case
of flat fading, different frequency components of the signal experience constant attenuation
and linear phase shifts introduced by the propagation channel. In the case of frequency selective fading, different frequency components of the signal experience different attenuations
and nonlinear phase shifts introduced by the propagation channel (distortion).* Moreover,
in either type of fading, the attenuation and phase shifts may suffer variations over time. In
this case, the equalizer should be able to follow the channel parameters, requiring the periodic transmission of pilots in order to allow the implementation of the equalization process,
performed adaptively. This process is called adaptive equalization. Note that some advanced
receivers may also perform channel estimation directly from the received modulated signal
[Proakis 1995].
It is finally worth noting that the equalization process is never optimum. This results in some
residual level of distortion, which degrades the performance. In case the channel suffers rapid variations over time, the equalizer tends to have more difficulties to find the channel coefficients, and
therefore, the equalization process tends to be poorer.
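As a rough sketch of adaptive equalization, the following LMS-style loop updates a short FIR equalizer from a known training sequence while a channel tap slowly varies; the channel model, step size, and filter length are illustrative assumptions rather than values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-tap, slowly time-varying channel driven by BPSK training symbols.
n_sym = 2000
tx = 2.0 * rng.integers(0, 2, n_sym) - 1.0

rx = np.empty(n_sym)
for n in range(n_sym):
    h1 = 0.5 * np.cos(2 * np.pi * n / 800)        # slowly varying second tap (fading-like)
    rx[n] = tx[n] + h1 * (tx[n - 1] if n > 0 else 0.0) + 0.05 * rng.standard_normal()

# LMS adaptive equalizer: FIR taps updated from the error against the known training symbols.
n_taps = 5
w = np.zeros(n_taps)
mu = 0.02                                          # step size (assumed)
err = np.empty(n_sym)
for n in range(n_taps, n_sym):
    x = rx[n - n_taps + 1:n + 1][::-1]             # most recent samples first
    y = w @ x                                      # equalizer output
    e = tx[n] - y                                  # error against the known (pilot) symbol
    w += mu * e * x                                # LMS coefficient update
    err[n] = e ** 2

print("Mean squared error, first 200 symbols:", err[n_taps:200].mean())
print("Mean squared error, last 200 symbols: ", err[-200:].mean())
```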
*
Note that frequency selective fading corresponds to a type of distortion.
3.6 INTERFERENCE SOURCES
The received SNIR needs to be higher than a certain threshold to achieve an acceptable BER performance. In Section 3.4, different types of noise were described. This section focuses on the description of different types of interferences. The main sources of interferences can be grouped into four
main categories:
· Intersymbol interference (ISI)
· Multiple access interference (MAI)
· Co-channel interference (CCI)
· Adjacent channel interference (ACI)
3.6.1 Intersymbol Interference
Intersymbol interference occurs in digital transmissions of symbols when the channel is characterized by the existence of several paths (see Figure 3.15), where the delay of relevant signal replicas* that arrive at the receiver's antenna corresponds to a delay higher than the symbol period.
Note that the RMS delay spread corresponds to the root mean square of the delays of the signal
replicas that arrive at the receiver's antenna (weighted by their power).
In other words, ISI exists when the signal is propagated through a channel whose RMS delay
spread of the channel is higher than the symbol period. In this case, this effect can be viewed in
the frequency domain as having two sinusoids with frequency separation greater than the channel coherence bandwidth, being affected differently by the channel (in terms of attenuation and
delay/phase shift) [Benedetto et al. 1987]. This corresponds to distortion, but applied to digital
signals.
The channel coherence bandwidth is the bandwidth above which the signal presents frequency
selective fading, that is, different attenuations, different delays, and nonlinear phase shifts at different frequencies (the signal is severely distorted by the channel). In the case of frequency selective fading, and observing this effect in the time domain, it can be concluded that different digital
symbols suffer from interference from each other, whose effect is usually known as intersymbol
interference. This can be viewed as a type of distortion applicable to digital transmissions. This
effect can be seen from Figure 3.16. Note that ISI tends to increase with the increase of the signal's
bandwidth (increase of data rates, according to the Nyquist theorem).
On the other hand, if the signal's bandwidth is within the channel coherence bandwidth, the
channel is said to be frequency nonselective and the type of fading is characterized as flat fading.
FIGURE 3.15 Propagation of a signal in a multipath environment: (a) diagram with multipaths; (b) average
received power by each path.
*
Except those replicas whose average power is 30 dB below the normalized average power.
FIGURE 3.16 Plot of received signals through different paths, and the resulting received signal.
In this case, all frequencies fade in unison (i.e., different signal frequency components present the
same attenuations and linear phase shifts). In this case, the channel does not originate ISI.
Because, in real radio propagation scenarios, there are always some levels of multipaths, the decision to evaluate the type of channel depends on the average power of the received multipaths. When
the channel profile has a normalized average power below 30 dB for nondirect path replicas, although
these multipaths may present a delay higher than the RMS delay spread, it is normally assumed that
we are in the presence of a single-path channel (frequency nonselective fading or flat fading). On the
other hand, when the channel profile has an average power of nondirect multipaths higher than this
threshold, it is normally assumed that the channel presents selectivity in frequency. In this case, ISI
is experienced in the digital transmission of symbols when the symbol rate is sufficiently high. Note
that no serious ISI is likely to be experienced if the symbol duration is longer than several times the
delay spread. On the contrary, higher symbol rates correspond to higher levels of ISI.
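The following sketch classifies a channel as flat or frequency selective by comparing the signal bandwidth with the coherence bandwidth; since the text gives no explicit formula, the common rule of thumb B_c ≈ 1/(5·τ_RMS) and the example values are assumptions.

```python
# Rough flat vs. frequency-selective classification. The coherence bandwidth
# approximation Bc ~ 1/(5 * rms_delay_spread) is a common rule of thumb and an
# assumption of this sketch, not a formula taken from the text.

def fading_type(symbol_rate_hz: float, rms_delay_spread_s: float, rolloff: float = 0.0) -> str:
    signal_bw = (1 + rolloff) * symbol_rate_hz        # bandpass signal bandwidth
    coherence_bw = 1.0 / (5.0 * rms_delay_spread_s)   # rule-of-thumb coherence bandwidth
    if signal_bw < coherence_bw:
        return "flat fading (frequency nonselective)"
    return "frequency selective fading (ISI expected)"

# Illustrative examples for a channel with a 1 us RMS delay spread.
print(fading_type(symbol_rate_hz=50e3, rms_delay_spread_s=1e-6))   # narrowband signal
print(fading_type(symbol_rate_hz=5e6, rms_delay_spread_s=1e-6))    # wideband signal
```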
Figure 3.16 depicts the received signal subject to interference (represented by Σ = x(t) + x(t − τ1) + x(t − τ2)),
being composed of the cumulative sum of the signal received through the direct path (represented by x(t))
and the signals received through two multipaths (represented by x(t − τ1) and x(t − τ2), respectively).
Note that noise is not shown in this figure. Each multipath consists of a ray reflected in a certain
surface, and the corresponding signal level depends on the reflection index Γ, with 0 < Γ < 1. In the
example of Figure 3.16, it was assumed that the reflection coefficient for path 2 was Γ = 0.5, and the
reflection coefficient assumed for path 3 was Γ = 0.3. Moreover, τ1 corresponds to the delay between the
first multipath and the direct path, whereas τ2 corresponds to the delay between the second multipath
and the direct path.
A decision is to be taken by the receiver at sampling instants. As can be seen from Figure 3.16,
in some cases, the resulting signal at sampling instants has a level above the one received through
the direct path, which results in a constructive interference of the multipaths. Nevertheless, in other
cases, the resulting signal has a level below the one received through the direct path, corresponding
to a destructive interference caused by ISI. In these cases, a low noise power may be enough to make
this sample result in an erroneous symbol estimation.
There are measures that can be implemented to mitigate the effects of ISI, namely the use
of equalization, channel coding with interleaving, antenna diversity, and frequency diversity
[Proakis 1995].
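The short sketch below reproduces the spirit of Figure 3.16: a direct path plus two reflected replicas with reflection coefficients 0.5 and 0.3 are summed, and the result is inspected at the symbol sampling instants; the symbol sequence, pulse shape, and delays are assumptions of this illustration.

```python
import numpy as np

# Direct path plus two reflected replicas (reflection coefficients 0.5 and 0.3,
# as in the Figure 3.16 example); delays, pulse shape, and symbols are assumed.
sps = 8                                                     # samples per symbol
symbols = np.array([-1, 1, -1, -1, -1, -1, -1, 1, 1, 1])
x = np.repeat(symbols, sps).astype(float)                   # rectangular (NRZ) pulses

tau1, tau2 = 3, 5                                           # multipath delays in samples
rx = x.copy()
rx[tau1:] += 0.5 * x[:-tau1]                                # path 2 replica
rx[tau2:] += 0.3 * x[:-tau2]                                # path 3 replica

# Sample in the middle of each symbol period: values above/below the direct-path
# level show constructive/destructive interference caused by ISI.
for k, i in enumerate(np.arange(len(symbols)) * sps + sps // 2):
    print(f"symbol {symbols[k]:+d} -> received sample {rx[i]:+.2f}")
```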
In code division multiple access (CDMA) networks, the presence of multipaths is used by the
receiver (RAKE receiver*) in order to exploit multipath diversity.
Finally, it is worth noting that the variation of distance between a transmitter and a receiver also
originates the Doppler effect that results in a variation of the received carrier frequency relating to
the transmitted one. The Doppler frequency is given by f_D = dθ/dt (variation of the wave's phase).
3.6.1.1 Nyquist ISI Criterion
As previously described, ISI can be originated by the frequency selective channel. Moreover, ISI
can also be caused by the nonoptimum sampling instant of the detector.
Typically, the transfer function of the channel and the transmitted pulse shape are specified, and
the problem is to determine the transfer functions of transmit and receive filters so as to reconstruct
the original data symbol. The receiver extracts and then decodes the corresponding sequence of
channel coefficients from the output v(t) (see Figure 3.17). The extraction process involves sampling the output v(t) at time t_i = iT_S, where T_S stands for the symbol period. The decoding requires
that the weighted pulse contribution h_k p(iT_S − kT_S) for k = i be free from ISI because of the overlapping tails of all other weighted pulse contributions represented by k ≠ i. This, in turn, requires
that it controls the overall time domain pulse shape p(t) as follows:

p(iT_S − kT_S) = 1, for i = k
p(iT_S − kT_S) = 0, for i ≠ k    (3.17)

where the pulse p(t) is normalized such that p(0) = 1. If p(t) satisfies the condition of Equation 3.17,
the receiver output becomes v(t_i) = μ h_i (for all i), which (with t_i = iT_S) implies zero ISI. That is,
the condition for zero ISI is satisfied if [Proakis 1995]

Σ_{n=−∞}^{+∞} P(f − n·R_S) = T_S    (3.18)
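As a quick numerical sanity check of Equations 3.17 and 3.18 (under the assumption of the simplest compliant spectrum, an ideal brick-wall P(f), with a normalized symbol period), the folded spectrum below sums to T_S and the corresponding sinc pulse is zero at every nonzero multiple of T_S.

```python
import numpy as np

Ts = 1.0                      # normalized symbol period (assumption)
Rs = 1.0 / Ts                 # symbol rate
f = np.linspace(-3 * Rs, 3 * Rs, 6001)

def brickwall(freq, Ts):
    """Ideal minimum-bandwidth P(f): Ts on the half-open band [-1/(2Ts), 1/(2Ts))."""
    fn = 1.0 / (2 * Ts)
    return np.where((freq >= -fn) & (freq < fn), Ts, 0.0)

# Equation 3.18: the folded spectrum sum_n P(f - n*Rs) must equal Ts for every f.
folded = sum(brickwall(f - n * Rs, Ts) for n in range(-5, 6))
print("max |folded - Ts|:", np.max(np.abs(folded - Ts)))

# Equation 3.17 in the time domain: the matching pulse p(t) = sinc(t/Ts) equals 1
# at t = 0 and 0 at every other sampling instant t = k*Ts.
k = np.arange(-4, 5)
print("p(k*Ts):", np.round(np.sinc(k * Ts / Ts), 6))
```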
FIGURE 3.17 Location of the transmitter and receiver pulse shaping filters in the communication chain.
*
The RAKE receiver is described in Chapter 7.
Therefore, the Nyquist criterion for distortionless baseband transmission in the absence of noise can
be stated as follows: the frequency function P(f) eliminates ISI, namely that caused by the nonoptimum sampling instant, for samples taken at intervals T_S, provided that it satisfies Equation 3.18.
Note that P(f) refers to the overall system, incorporating the transmit filter, the channel, the receive
filter, and so on, such that

μ P(f) = G_T(f) H(f) G_R(f)    (3.19)

Here, G_T(f) (the Fourier transform of g_T(t − nT_S)) stands for the transmitting pulse shaping filter
and G_R(f) for the receiving pulse shaping filter (X being the Fourier transform of x), where H(f)
stands for the channel transfer function. The above system and signal description
refers to Figure 3.17, where the carrier modulator and demodulator, as well as the symbol modulator
and demodulator, were not depicted for the sake of simplicity.
Besides the nonoptimum sampling instant, the ISI generated because of the frequency selective
fading (i.e., intense multipath channel) is usually not completely removed by pulse shaping but by
different techniques such as equalization.
To ensure that ISI is not present at the receiver because of the nonoptimum sampling instant, the
Fourier transform of the signal at the equalizer's output, V(f), must be described by a function that
satisfies the Nyquist ISI criterion. In other words, if a communication channel satisfies the Nyquist
ISI criterion, the received signal is free of ISI originated by the nonoptimum sampling instant.
A possibility of following the Nyquist ISI criterion consists of assuring that the signal V(f) presents a pulse shape that follows the raised cosine function, defined as follows [Proakis 1995]:
P(f) = T_S                                             for 0 ≤ |f| ≤ f_N(1 − α)
P(f) = (T_S/2) {1 − sin[π(|f| − f_N)/(2 α f_N)]}       for f_N(1 − α) ≤ |f| ≤ f_N(1 + α)
P(f) = 0                                               for |f| ≥ f_N(1 + α)    (3.20)
where α stands for the rolloff factor, taking values between 0 and 1; it indicates the excess bandwidth over the ideal solution, f_N = 1/(2T_S), and is usually expressed as a percentage of the Nyquist
frequency. Specifically, the bandwidth of a baseband transmission of absolute values is defined by

B_T = f_N (1 + α)    (3.21)

However, in the case of bandpass transmissions (carrier modulated), the previously negative baseband part of the spectrum becomes positive, being also transmitted. Therefore,
the transmitted bandwidth f ∈ [−W, +W] is doubled, being defined by B′_T = 2·B_T, and
becomes B′_T = 2 f_N (1 + α). For the special case of single sideband (SSB) transmissions, only the
positive (or negative) part of the baseband spectrum is transmitted, which results in B′_T = B_T = f_N (1 + α).

We can use the Nyquist theorem to deduce the relationship between the minimum bandwidth B_min
of a transmission medium and the symbol rate R_S,* which is transmitted in a baseband signal,
through the following equivalence [Proakis 1995]. The minimum bandwidth of the transmission
medium is obtained by taking α = 0:

B_min = R_S/2    (3.22)

*
The symbol rate is also referred to as the transmission rate, being expressed in symbols per second (symb/s) or in baud.
Note that this equivalence corresponds to the maximum symbol rate that can be accommodated in
the above-mentioned bandwidth. In the case of a bandpass signal (carrier modulated), this bandwidth is doubled and becomes

B_min = R_S    (3.23)
As an example, let us consider a satellite link with a 2-MHz bandwidth. From Equation 3.23, we
conclude that the maximum symbol rate (bandpass signal) that can be transmitted within this bandwidth is 2Msymbols/s. In case one needs to transmit 2 Mbps, the symbol constellation should
be such that each symbol transports 1 bit.* Alternatively, if one intends to transmit 4Mbps in the
referred satellite link bandwidth, a symbol constellation that accommodates 2 bits per symbol is the
solution (e.g., QPSK).
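A worked version of this satellite-link example, assuming α = 0 so that Equation 3.23 gives the maximum symbol rate directly; the helper below is only an illustrative calculation.

```python
import math

def bits_per_symbol(bandwidth_hz: float, bit_rate_bps: float) -> int:
    """Minimum bits per symbol for a bandpass channel of the given bandwidth (alpha = 0)."""
    max_symbol_rate = bandwidth_hz            # Equation 3.23: B_min = R_S (bandpass, alpha = 0)
    return math.ceil(bit_rate_bps / max_symbol_rate)

# Satellite link example from the text: 2 MHz bandwidth.
print(bits_per_symbol(2e6, 2e6))   # -> 1 bit/symbol (e.g., BPSK)
print(bits_per_symbol(2e6, 4e6))   # -> 2 bits/symbol (e.g., QPSK)
```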
As can be seen from Figure 3.18, for the rolloff factor α = 0 and bandpass SSB transmission, we
obtain the minimum bandwidth capable of transmitting signals with zero ISI, defined by

B_min = f_N = 1/(2T_S) = R_S/2 = R_b/(2 log2 M)    (3.24)
where RS stands for the symbol rate and Rb stands for the bit rate. Moreover, M stands for the symbol constellation order and log2 M for the number of bits transported in each symbol.
Furthermore, the spectrum of the pulse shaping filter depicted in Figure 3.18 corresponds
to the Fourier transform of its impulse response depicted in Figure 3.19. As can be seen from
Table A.1, the Fourier transform of the sinc function corresponds to the rectangular pulse, that
is, the Fourier transform of sinc(2Wt) is (1/(2W)) Π(f/(2W)) (valid for α = 0). Because, for α ≠ 0, the pulse in the time
domain is a variation of the sinc function, its Fourier transform may be viewed as a variation of the
rectangular pulse.
FIGURE 3.18 Raised cosine pulse in the frequency domain for α= 0 and 1.
*
Note that modulation schemes are dealt with in Chapter 6.
FIGURE 3.19 Raised cosine pulse in the time domain, for α= 0, 0.5, and 1.
It is also worth defining the spectral efficiency. Assuming a baseband signal of an M-ary constellation, the spectral efficiency becomes

ε = R_b/B_T = 2 log2(M)/(1 + α)    (3.25)

Naturally, for a bandpass signal (carrier modulated), the spectral efficiency becomes ε = log2(M)/(1 + α).
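A small illustration of Equation 3.25 and its bandpass counterpart; the constellation orders and rolloff factors below are arbitrary example values.

```python
import math

def spectral_efficiency(M: int, alpha: float, bandpass: bool = True) -> float:
    """Spectral efficiency (bit/s/Hz) of an M-ary constellation with rolloff alpha."""
    if bandpass:
        return math.log2(M) / (1 + alpha)          # bandpass (carrier modulated)
    return 2 * math.log2(M) / (1 + alpha)          # baseband (Equation 3.25)

print(spectral_efficiency(M=4, alpha=0.25))        # QPSK, 25% rolloff -> 1.6 bit/s/Hz
print(spectral_efficiency(M=16, alpha=0.0))        # 16-QAM, ideal rolloff -> 4.0 bit/s/Hz
```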
In the case of a baseband signal and binary transmission (each symbol transports a single bit),
the minimum channel bandwidth is Rb/2. Naturally, the transfer function that leads to the minimum bandwidth with α = 0 is not physically realizable. Consequently, the transmission bandwidth
is always higher than the minimum bandwidth B_min.
The function p(t) consists of the product of two factors: the factor sinc(t/T_S), characterizing the
ideal Nyquist channel, and a second factor that decreases as 1/t² for large t. The first factor ensures
zero crossings of p(t) at the desired sampling instants of time t = iT_S, with i an integer (positive and
negative). The second factor reduces the tails of the pulse considerably below that obtained from the
ideal Nyquist channel, so that the transmission of binary waves using such pulses is relatively insensitive to sampling time errors. In fact, for α = 1, this leads to the most gradual rolloff in that the amplitudes of the oscillatory tails of p(t) are smallest. Thus, the amount of ISI resulting from timing error
decreases as the rolloff factor α increases from zero to unity. The special case with α = 1 is known as
the full-cosine rolloff characteristic. This response exhibits two interesting properties:
· At t = ±T_S/2 = ±1/(4W), we have p(t) = 0.5; that is, the pulse width measured at half amplitude is exactly equal to the bit duration T_S.
· There are zero crossings at t = ±3T_S/2, ±5T_S/2, … in addition to the usual zero crossings at the sampling times t = ±T_S, ±2T_S, …
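Both properties can be verified numerically; the sketch below uses the closed-form raised cosine pulse for α = 1 (an expression assumed here, equivalent to the inverse transform of Equation 3.20) with a normalized symbol period.

```python
import numpy as np

def p_full_cosine(t, Ts):
    """Raised cosine pulse with rolloff alpha = 1 (full-cosine rolloff)."""
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * t / Ts) ** 2
    out = np.empty_like(denom)
    reg = ~np.isclose(denom, 0.0)
    out[reg] = np.sinc(2.0 * t[reg] / Ts) / denom[reg]
    out[~reg] = 0.5                      # limit value at t = +/- Ts/2
    return out

Ts = 1.0
# Property 1: pulse value 0.5 at t = +/- Ts/2 (half-amplitude width equals Ts).
print(p_full_cosine(np.array([-Ts / 2, Ts / 2]), Ts))
# Property 2: zero crossings at t = +/- 3Ts/2, +/- 5Ts/2, in addition to +/- Ts, +/- 2Ts.
print(np.round(p_full_cosine(np.array([1.5, 2.5, 1.0, 2.0]) * Ts, Ts), 6))
```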
67Channel Impairments
α=
ff
()
=≤
P fGfG f
()=() ()
TR
()
()
()
()=() ()=()
jf
G
RT
()=()
Pf
()
()=()
Pf
2
These two properties are extremely useful in extracting a timing signal from the received signal for the purpose of synchronization. However, the price paid for this desirable property is
the use of a channel bandwidth double that required for the ideal Nyquist channel corresponding to α = 0.
Because of the smooth characteristics of the raised cosine spectrum, it is possible to design
practical filters for the transmitter and the receiver that approximate the overall desired frequency
response. In the special case of an ideal channel, that is, with H(f) = 1 for |f| ≤ W, assuming that
the receive filter is matched to the transmit filter leads to G_R(f) = G_T*(f) and

P(f) = G_T(f) G_R(f) = |G_T(f)|²    (3.26)

where G_T(f) and G_R(f) are the frequency responses (transfer functions) of the transmit and receive
filters, respectively, and P(f) is the frequency response of the raised cosine pulse. Ideally,

G_T(f) = √(P(f)) e^(−j2πf t₀)    (3.27)

where t₀ is some nominal delay that is required to ensure physical implementation of the filter. Thus, the overall raised cosine spectral characteristic is split evenly between the
transmit and the receive filters. Note also that an additional delay is necessary to ensure the physical
realization of the receive filter. Moreover, note that the PSD is proportional to |G_T(f)|², that
is, PSD_T ∝ |G_T(f)|² = P(f).
, that
The pulse shaping lter in the transmitter has the main function to allow the symbols formatting
(in order to avoid ISI, as previously described) and to limit the spectrum inside the desired band,
whereas the receive lter intends not only to contribute to format the symbols jointly with the transmitting pulse shaping lter but also to eliminate the noise outside the signal's bandwidth, allowing
only the reception of the noise inside the signal's bandwidth.
3.6.2 Multiple Access Interference
Multiple access interference occurs in networks that make use of multiple access techniques.
This type of interference is experienced when there is no perfect orthogonality between signals
from different users, viewed at the receiver's antenna of a certain user. In time division multiple access (TDMA) networks, this orthogonality is normally assured through guard periods,
which avoids the overlapping of signals transmitted in different time slots (from different users).
In frequency division multiple access (FDMA) networks, this orthogonality is assured through
the use of guard bands (see Figure 3.20), and through the use of filters that reject undesired inband interferences.
In CDMA networks, this kind of interference is normally present in real scenarios, and represents
the main limitation of CDMA networks. MAI exists in CDMA networks because of the following:
· The use of spreading sequences that are not orthogonal (nonzero cross-correlation between
different spreading sequences).
· Even using orthogonal spreading sequences, the orthogonality between spreading
sequences is not assured when the network is not synchronized.* For this reason, it is often
preferable to use quasi-orthogonal spreading sequences,² especially when in the presence
of an asynchronous network.
*
The uplink of a cellular network is normally asynchronous, that is, the transmission of symbols from different mobiles
does not start at the same instant.
²
Quasi-orthogonal spreading sequences present some level of cross-correlation. However, they present better
autocorrelation properties in asynchronous networks than orthogonal spreading sequences (e.g., Gold sequences).
FIGURE 3.20 Separation of channels in (a) frequency division multiplexing and (b) time division
multiplexing.
· Even in the downlink of a cellular network where synchronism normally exists between
different transmissions, and even with the use of orthogonal spreading sequences, the
multipath channel profile originates a relative level of asynchronism in the network.
This originates nonzero cross-correlation values between superimposed signals received
from different multipaths, especially when the channel presents frequency selectivity.
Consequently, MAI is also present in this scenario.
In CDMA networks, MAI is directly related to the received power from different users. A certain user, with an excessive power, originates a level of MAI higher than others. Therefore, in
CDMA networks, it is essential to use an effective power control to mitigate the fading effect, as
well as to mitigate the near-far problem.* Also, in these networks, MAI can be reduced by using
multiuser detection (MUD), power control, as well as sectored/adaptive antennas [Marques da
Silva et al. 2010].
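The sketch below illustrates the loss of orthogonality under asynchronism: two Walsh-Hadamard spreading codes are exactly orthogonal when aligned, but a chip offset (modeled cyclically here for simplicity) produces a nonzero cross-correlation, that is, MAI; the code length, code indices, and offset are assumptions.

```python
import numpy as np

# Build an 8x8 Hadamard matrix (Sylvester construction); its rows are mutually
# orthogonal Walsh spreading codes with +/-1 chips.
H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)
c1, c2 = H8[5], H8[6]          # two orthogonal codes (arbitrary choice)

# Synchronous case: the normalized cross-correlation is exactly zero (no MAI).
print("aligned codes:", c1 @ c2 / len(c1))

# Asynchronous case: the second user arrives with a 1-chip offset (cyclic shift
# used as a simplification); the codes are no longer orthogonal over the
# integration window, so multiple access interference appears.
print("1-chip offset:", c1 @ np.roll(c2, 1) / len(c1))
```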
3.6.3 Co-Channel Interference
Co-channel interference occurs when two different communications using the same channel interfere with each other. In a cellular environment, this occurs when a communication is interfered by
another communication being transmitted in the same carrier frequency but typically coming from
an adjacent cell. In cellular networks using TDMA/FDMA, this type of interference can be mitigated by avoiding the use of the same frequency bands in adjacent cells, introducing the concept
of a frequency reuse factor higher than 1.² In CDMA networks, this kind of interference is always
present because the whole spectrum is typically reused in all cells, making the reuse factor equal to 1. The
reuse factor refers to the reutilization of the same frequency bands in adjacent cells.
In Figure 3.21a, different letters in different cells mean that different sets of frequency bands are
utilized in different cells. In this case, because each group of seven cells use different frequency
*
Near-far problem: because of path loss, a received signal originated from a transmission in the neighborhood is much
more powerful than a received signal originated from a transmission made at a long distance.
²
See Chapter 15.
FIGURE 3.21 Two cellular environments with different reuse factors. (a) Reuse factor 7 and (b) reuse
factor 1.
bands, and the same frequency bands are only reutilized in another adjacent group of seven
cells, the reuse factor is 7.
The reuse factor 1 is normally adopted in CDMA networks. The reuse factor 1 means that all
frequency bands are utilized in all cells (see Figure 3.21b). This factor is adopted because, although
CCI occurs, leading to a decrease of performance, the gain in capacity is higher than the decrease of
performance. Moreover, CCI can be mitigated through the use of MUD and adequate power control.
In CDMA networks, this kind of interference is also known as MAI, being, however, generated by
users located in adjacent cells.
3.6.4 Adjacent Channel Interference
Adjacent channel interference consists of an inadequate bandwidth overlapping of adjacent signals. This is due to inadequate frequency control, transmission with spurious, broadband noise,
intermodulation distortion (IMD), transmission with a bandwidth greater than the one to which the
operator is authorized, and so on. Guard bands are measures that are normally used to minimize the
effects of inadequate frequency control (see Figure 3.22).
Because any transmitter's oscillator presents a certain level of broadband noise, direct interference may be generated from this source. The power generated by an oscillator presents typically a
Gaussian shape around the desired transmitting frequency. Therefore, the energy out of the desired
signal's spectrum is considered as broadband noise. As can be seen from Figure 3.23, broadband
noise consists of an unwanted signal transmitted in a frequency adjacent to the desired transmitted
signal. This noise has a power very much below the transmitted signal (typically −155dBc/Hz @
3MHz offset from the carrier). Nevertheless, receiving a signal at a short distance and very close in
frequency to a transmitted signal may result in direct interference. This type of direct interference
FIGURE 3.22 Frequency control using guard bands between different channels.
FIGURE 3.23 Broadband noise and spurious.
can be mitigated by using pre- and postselector filters, and by increasing the distance between adjacent transmit and receive antennas.
Spurious is another type of direct interference normally generated in transmitters. As can be seen
from Figure 3.23, spurious consists of an undesired transmission in a frequency band different from
the one reserved for sending the signal. For a receiver located at a short distance, such a transmission with spurious may result in a high-power interfering signal that may block one or more channels. The measure that can be used to mitigate this direct interference consists of keeping transmit
and receive antennas sufficiently spaced apart, to assure the required isolation. Moreover, spurious
transmission filtering is normally made mandatory by frequency management regulators. Pre- and postselector filters may also mitigate the negative effects of spurious.
IMD is another type of interference consisting of undesired signals generated within nonlinear
elements (such as in a transmit amplifier, receiver, low noise amplifier, and multicoupler). Two or
more signals present at such a nonlinear element are processed, and additional signals are generated, at the sum and difference of multiple frequencies, being known as intermodulation products
(IMPs). As can be seen from Figure 3.24, IMPs can be of third order, fifth order, seventh order
FIGURE 3.24 Intermodulation products.
and so on. Assuming that two isolated carriers f1 and f2 are present at a nonlinear element, the
generated third-order IMPs become 2f1 − f2 and 2f2 − f1. Finally, it is worth noting that IMD
is considered as indirect interference, and its effect can be worse than direct interference. This can
be mitigated with the use of postselector filters (at the transmitter), as well as preselector filters (at
the receiver).
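A trivial computation of the intermodulation product frequencies sketched in Figure 3.24; the two carrier frequencies below are arbitrary example values.

```python
# Third- and fifth-order intermodulation products for two carriers f1 and f2
# (illustrative frequencies in MHz).
f1, f2 = 150.0, 151.5

third_order = (2 * f1 - f2, 2 * f2 - f1)
fifth_order = (3 * f1 - 2 * f2, 3 * f2 - 2 * f1)

print("3rd-order IMPs (MHz):", third_order)   # 148.5 and 153.0, close to the carriers
print("5th-order IMPs (MHz):", fifth_order)   # 147.0 and 154.5
```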
CHAPTER SUMMARY
This chapter provided a description of channel impairments, experienced in both cable and wireless transmission media. It was viewed that channel impairments accumulate over the channel path,
between the transmitter and the receiver, degrading the SNR. Note that for a transmitted signal to
be properly extracted at the receiving side, it should present an SNR higher than a certain threshold. In the case of transmission of digital signals, it was described that the degradation of the SNR
translates into a degradation of the BER.
This chapter defined the Shannon capacity, which corresponds to the maximum capacity that a
channel can support. Then, the attenuation channel impairment was defined. It was observed that
the attenuation results in a degradation of the SNR, by reducing the power of the signal.
The different electromagnetic noise sources were exposed in this chapter, degrading the SNR by
increasing the power of noise. This includes atmospheric noise, being generated by the atmosphere at
a long distance, caused by thunderstorms in tropical regions. The atmospheric noise is more intense
in the VLF and LF bands, and stronger in the vertically polarized electromagnetic waves, than in
horizontally polarized waves. Then, man-made noise was also defined, being generated by human
activity, namely by the use of electrical equipment, such as car ignitions, domestic equipment, or
vehicles. The intensity of man-made noise varies substantially with the region. It was described
that this noise tends to be more intense in urban than in rural environments. Extraterrestrial noise
was defined, such as galactic noise. Extraterrestrial noise can be intense when a receive antenna is
pointed toward a planet or a star, such as the sun. Moreover, thermal noise was also defined, being
more intense when an antenna is pointed toward a location with higher temperature. For example,
a receive antenna of a satellite transponder pointed toward the earth experiences higher thermal
noise than that of a satellite ground station. Finally, electronic noise was also exposed, being generated in active elements, such as transistors.
This chapter described the influence of the transmission channel on the signal quality. It was
described that the transmission channel may originate multiple effects, including delay and phase
shift, as well as distortion of signals. Then, equalization was described, comprising a process that
aims to mitigate the negative effects of the channel distortion.
The different sources of interference were also described, which include the following: (1) the
intersymbol interference, whose effects increase with the increase of the symbol rate; (2) the multiple access interference, which consists of an interference experienced when the channel is shared
between different users; (3) the co-channel interference, which results from the reutilization of
the same frequency bands, but typically from transmissions coming from adjacent locations; and
finally, (4) the adjacent channel interference, which is an interference type experienced when the
frequency bands of adjacent channels partially overlap.
REVIEW QUESTIONS
1. Which kinds of channel impairments do you know?
2. What are the effects of channel impairments in either analog or digital signals?
3. Which kinds of noise do you know?
4. What does intersymbol interference stand for?
5. What is the difference between adjacent channel interference and co-channel interference?
6. What is multiple access interference?
72Cable and Wireless Networks
7. Assuming baseband transmission, what is the minimum bandwidth necessary to accommodate a digital signal with a symbol rate of RS?
8. Assuming bandpass transmission (carrier modulated), what is the minimum bandwidth
necessary to accommodate a digital signal with a symbol rate of RS?
9. What does distortion stand for?
10. Which types of distortion do you know?
11. What does spurious stand for? How can we mitigate the negative effects of spurious?
12. What does thermal noise stand for?
13. How can we quantify the FSPL?
14. What is the meaning of the Shannon capacity?
15. Assuming a twisted pair with a 1 MHz bandwidth, and an SNR of 5dB, according to
the Shannon capacity limit, what is the maximum speed of information bits that can be
transmitted?
16. How are intermodulation products generated? What can be their negative effects? How can
we mitigate them?
17. What is the difference between atmospheric noise and man-made noise? Characterize
these two types of noise.
18. What does noise factor stand for?
19. How can we compute the noise factor of a system composed of a cascade of N electronic
devices?
20. What is the ideal frequency response of a channel, in terms of phase shift and attenuation?
21. How can we mitigate the effects of the nonideal frequency response of a system, in terms
of phase shift and attenuation?
22. According to the Nyquist theorem, what is the minimum sampling rate that can be
employed to digitize a signal with a spectrum in the range 8–60 kHz?
LAB EXERCISES
1. Using the Emona Telecoms Trainer 101 laboratory equipment, and volume 1 of its laboratory
manual, perform experiment 4: amplitude modulation (AM).
2. Using the Emona Telecoms Trainer 101 laboratory equipment, and volume 2 of its laboratory
manual, perform experiment 1: AM (method 2) and product detection.
3. Using the Emona Telecoms Trainer 101 laboratory equipment, and volume 2 of its laboratory
manual, perform experiment 2: noise in AM communications.
4. Using the Emona Telecoms Trainer 101 laboratory equipment, and volume 2 of its laboratory
manual, perform experiment 9: signal-to-noise ratio and eye diagrams.
4 Cable Transmission Mediums
LEARNING OBJECTIVES
· Identify and describe the different cable transmission mediums.
· Describe the different types of twisted pairs.
· Describe the different types of coaxial cables.
· Describe the different types of optical fibers.
· Identify and describe the interference parameters in metallic conductors.
The transmission medium that still dominates houses and offices is the twisted pair. In the past, local
area networks (LANs) were made of coaxial cables, which were also used as a transmission medium
for medium- and long-range analog communications. Although their use in a LAN was replaced
by the twisted pair, the development of cable television led to the coaxial cable being reused. With
the improvement of isolators and copper quality, as well as with the development of shielding, the
twisted pair became widely employed for providing high-speed data communications, in addition to
the initial use for analog telephony. Currently, most of the companies use IP telephony with the same
physical infrastructure as the one used for data, which represents a convergence between voice and
data. We observe an increase in the demand for optical fibers, in LAN, MAN, and WAN segments,
because of their immunity to electromagnetic interferences and extremely high bandwidth.
As a rule of thumb, the attenuation of cable transmission mediums increases with the increase
of the distance, but at a different rate for different transmission mediums (i.e., it is different for
twisted pair, coaxial cable, and optical ber). Moreover, the attenuation and phase shift also tend
to increase with the increase of the frequency, whose effect is more visible at longer distances.
This results in distortion, and in the case of digital communications, it is viewed as intersymbol
interference.* Consequently, it can be stated that the available bandwidth decreases with the increase
of the link distance. Decreasing the link distance, the attenuation and phase shift at limit frequencies
also reduce, resulting in a higher throughput supported by the cable. Table 4.1 shows the typical
bandwidths for different cable transmission mediums.
As described in Section 3.2, when the transmitter and the receiver are sufficiently far apart,
ampliers (used for analog signals) or regenerators (used for digital signals) need to be employed at
regular intervals, in order to improve the signal-to-noise ratio, as well as to allow keeping the signal
strength above the receiver's sensitivity threshold. The maximum distance where regenerators need
to be placed depends on the characteristics of the cable transmission medium and on the bandwidth
under consideration. Higher bandwidths require regenerators at shorter distances.
The following sections describe each of these important transmission mediums, in terms of use,
bandwidth, attenuation, distortion, resistance to interference, and so on.
4.1 TWISTED PAIRS
Low-bandwidth twisted pairs, normally referred to as voice-grade twisted pairs, have been widely
used for decades, at home and in offices, for analog telephony. Twisted pairs were also widely used
to link houses and offices with local telephone exchanges. In order to reduce distortion, inductors
(load coils) can be added to voice-grade twisted pairs, at certain distance intervals. This results in a
*
As previously described, a way to mitigate this effect is by employing an equalizer at the receiver.
TABLE 4.1
Bandwidths of Different Cable Transmission Mediums

             Voice-Grade Twisted Pair (Grade 1)    Twisted Pair Category 6    Coaxial    Optical Fiber
Bandwidth    3.4 kHz                               250 MHz                    500 MHz    150 THz
flatter frequency response of the twisted pair over the analog voiceband (300 Hz to 3.4 kHz), translating into a lower attenuation level. Note that a twisted pair with loading cannot be used to flatten the
frequency response of twisted pair cables used for data, as the bandwidth of data communications
(several MHz or even GHz) is typically much higher than that of analog voice.
Because of the pre-existence of voice-grade twisted pairs in houses and offices, their use for
data communications became a viable and inexpensive solution. Nevertheless, as they are very susceptible to noise, interferences, and distortion, these cables could not allow the data rates in use by
most of LANs. The improvement of twisted pair technology (such as shielding, twisting length, and
cable materials) increased the resistance to these impairments, leading to an increased bandwidth.
Consequently, these improved twisted pair characteristics, added to its reduced cost, resulted in a
massive use of this physical infrastructure, instead of the previously used coaxial cable.
Nevertheless, comparing the twisted pair with coaxial and optical fiber, the distances and bandwidths reached with the initial twisted pair were less than those obtained with coaxial and optical
fiber.
4.1.1 Characteristics
As can be seen from Figure 4.1, a twisted pair is considered as a transmission line, being composed
of two isolated and twisted conductors in a spiral pattern. The proper selection of the twisting length
of these conductors leads to a reduction of low-frequency interferences and crosstalk.
Crosstalk consists of an electromagnetic coupling of one conductor into another (wire pairs or
metal pins in a connector). The electromagnetic field received by an adjacent conductor generates
an interfering current, being superimposed on the signal's current. This originates a degradation of
the signal-to-noise plus interference ratio.
The material employed in conductors is normally copper, whereas polyethylene is normally used
in isolators. In order to improve the crosstalk properties, twisted pairs are normally twisted and
bundled in two pairs (four wires): one pair for transmission and another pair for reception (full
duplex). In order to further improve the crosstalk properties, and to optimize the cables, two groups
of two pairs (i.e., eight wires) are twisted and wrapped together, using a protective sheath. This
results in cables composed of four pairs.
The quality of the twisted pair depends on several factors, such as the material and width of the
isolator, the copper wire purity and width (typically between 0.4 mm and 0.9mm), the twist length,
the type of shielding (when used), and the number of pairs twisted together. All of these parameters define the impedance of the twisted pair, which results in a certain attenuation coefficient
(expressed in dB/km) and phase shift coefficient, both as a function of the signal's frequency. The
maximum bandwidth and distance supported by a certain type of cable depend on these parameters.
FIGURE 4.1 Twisted pair as composed of two isolated copper wires properly twisted.
Naturally, because of increased resistance to interference, multipair cabling presents a bandwidth
higher than single pair.
4.1.2 Types of Protection
Twisted pairs are normally grouped as unshielded twisted pairs (UTPs), foiled twisted pairs
(FTPs), shielded twisted pairs (STPs), or as screened STPs (S/STPs) [ANSI/TIA/EIA-568].
As the name refers, UTP cabling is not surrounded by any shielding, whereas STP presents a
shielding with a metallic braid or sheathing, applied to each individual pair of wires, that protects
wires from noise and interferences.
The attenuation coefficient of a 0.5 mm copper wire UTP cable can be seen from Figure 4.2 (as a function
of the frequency).
The resulting attenuation, expressed in decibel, is given by
A_dB = l · α(f)    (4.1)

where:
l stands for the cable length (in km)
α(f) stands for the attenuation coefficient (in dB/km) at frequency f
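A one-line application of Equation 4.1; the attenuation coefficient below is an assumed value of the same order of magnitude as the curve in Figure 4.2, and the 2 km length is arbitrary.

```python
def cable_attenuation_db(length_km: float, alpha_db_per_km: float) -> float:
    """Total cable attenuation A_dB = l * alpha(f), Equation 4.1."""
    return length_km * alpha_db_per_km

# Example: an assumed attenuation coefficient of 10 dB/km over a 2 km twisted pair.
print(cable_attenuation_db(2.0, 10.0), "dB")   # -> 20.0 dB
```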
UTP cabling is employed in Ethernet and telephone networks, being normally installed during building construction. Because it does not present any shielding, UTP cabling is very subject to noise and
external interferences, presenting typically an impedance of 100 Ω. UTP cabling is typically less
expensive than STP, and also less expensive than coaxial and fiber optic cables. Furthermore, STP
and FTP are more difficult to handle than UTP. Consequently, a cost-benefit analysis needs to be
done before a decision is made about the type of cabling to employ.
STP cabling includes a metal grounded shielding surrounding each pair of wires, presenting a
typical impedance of 150 Ω. STP supports up to 10 Gbps, being considered in 10GBASE-T technology employed to implement the IEEE 802.3 LAN.
When the shielding is applied to multiple pairs, instead of a single pair of wires, it is referred to
as screening. This is the case of FTP, being also referred to as screened UTP (S/UTP). It consists
of UTP cabling whose shielding surrounds the cable (screened), not presenting shielding in each
pair of copper wires. Consequently, while this cabling presents good resistance to interferences
originated from outside of the cable, the crosstalk properties (interference between different pairs
of cabling) are typically poorer than STP.
Finally, screened FTP (S/FTP) cabling, also referred to as S/STP cabling, presents a shielding
surrounding both individual pairs and the entire group of copper pairs, and therefore, it is both
externally and internally protected from adjacent pairs (crosstalk). The above description is summarized in Table 4.2.
TABLE 4.2
Protection Type for Different Twisted Pair Cablings

                 Shielding    Screening
UTP              No           No
FTP              No           Yes
STP              Yes          No
S/STP (S/FTP)    Yes          Yes
4.1.3 Categories
Another way to characterize twisted pair cablings is to group them into different categories, from
1 to 7. As can be seen from Table 4.3, the increased cabling category results in a higher bandwidth
and data rate. The better performance is achieved at the cost of better and thicker copper wires,
isolation, improved shielding, or improved twisting. Consequently, higher bandwidths tend to correspond to higher costs. With the exception of the voice-grade twisted pair, cables of the other
categories comprise four pairs of conductors.
Although the standards [ANSI/TIA/EIA-568-A; ISO/IEC 11801] recognize only categories 3 up
to 6, categories (or grade) 1, 2, and 7 are also listed in Table 4.3, as these designations are normally
assigned to cabling configurations. This table lists the cabling and connectors, as well as the corresponding bandwidths and maximum data rates. Note that the copper connectors listed in Table 4.3
are defined in Tables 4.4 and 4.5, respectively, for T568A and T568B terminations of 8P8C modular connectors (commonly referred to as RJ45).
As previously described, the maximum bandwidth supported by a cable transmission medium
depends on the link distance. Shorter distances allow accommodating higher bandwidths, and vice versa.

TABLE 4.3
Characteristics of Different Twisted Pair Categories (@100 m)

This results from the fact that different frequencies present different attenuation coefficients
and different delays, whose effect is more visible at longer distances. In fact, the attenuation of a
twisted pair increases approximately exponentially with the increase of the frequency. The effect
that results from this impairment is known as distortion (attenuation and/or phase distortion) and, in
the case of digital communications, results in intersymbol interference. Improved twisted pair quality results in longer distances for the same bandwidth, or higher bandwidth for the same distances,
as compared to lower quality twisted pair. As a rule of thumb, for digital signals, there is a need to
use regenerators at a distance interval of 2–3 km of twisted pair cable.
The bandwidths listed in Table 4.3, for different categories, are those specified for 100 m of dis-
tance (90m of cable plus 10m of patch cord). These values may be exceeded for shorter distances.
Category 1 UTP consists of low-quality twisted pairs specified for analog voice only (without
external isolation). The twisting of multiple pairs, previously individually twisted, into the same
cable originates the category 2 UTP. Some authors also consider categories 2 and 3 as a voice-grade
twisted pair. Nevertheless, the exceeding bandwidth, besides that of the analog voice, is used for
data communications. Categories 3 and 4 UTP cabling is similar to category 2, but with improved
copper and isolation, as well as using a twisting length that improves the resistance against noise
and interferences. This results in the ability to support a bandwidth of 16MHz for category 3 and
20MHz for category 4 twisted pair cabling. Category 3 cables are considered in the LAN standard
IEEE 802.3 (at 10Mbps). Moreover, categories 3 and 4 are considered in the LAN standard IEEE
802.3u at 100Mbps, using several parallel pairs.
While consisting of either UTP or FTP, category 5 is currently installed during construction in
most offices. It supports a bandwidth of 100 MHz, being considered by the LAN standard IEEE
802.3u (at 100Mbps) and by IEEE 802.3ab (at 1Gbps), in the latter case using four negotiated
parallel pairs for transmitting or receiving. Category 5e (enhanced) refers to category 5 cabling with
an improved shielding performance in terms of near end crosstalk (NEXT), attenuation to crosstalk
ratio (ACR), equal level far end crosstalk (ELFEXT), and so on (see Section 4.4). These improved
characteristics make the full-duplex operation possible in each pair, a requirement that is important
for the implementation of IEEE 802.3 at 1Gbps (i.e., IEEE 802.3z).
Category 6 supports 250MHz of bandwidth and a data rate of 1Gbps, being based on UTP
(adopted by the LAN standard IEEE 802.3ab), STP, or FTP cabling (IEEE 802.3z). There is a
variation, referred to as category 6a, which allows twice the bandwidth of category 6, that is,
500MHz.
Finally, category 7 is defined to support 600 MHz of bandwidth and data rates as high as 40 Gbps
(contrary to the other categories, the listed value refers to 40 m). This is achieved using S/STP,
which makes use of double shielding, resulting in a high level of immunity to noise and interferences. There is a variation, referred to as category 7a, defined to support frequencies up to 1 GHz.
Let us focus on Figure 4.2. For avoiding distortion, the bandwidth of the signal should be carefully selected such that the frequency response is approximately flat, or such that the receiver's
equalizer is able to counteract the frequency selectivity of the channel. Note that this figure plots in
the ordinates the attenuation coefficient. This means that the attenuation value is a function of the
link distance. Therefore, one can conclude that a certain twisted pair cable can support higher bandwidths at shorter distances, and lower bandwidths at longer distances. The maximum bandwidths
supported by certain link distances depend on these attenuation coefficient curves. Table 13.3 shows
the maximum link distances that can be supported by different twisted pair categories, for different
signal bandwidths. In fact, depending on the consumed bandwidth, that is, depending on the transmission rate, the traffic generated by hosts can be split into class types.
It is worth noting that the link distance may, under specic circumstances, be limited by other
than the attenuation factor. In fact, factors such as the ACR or the ELFEXT may also be the link
bottleneck. As described at the end of this chapter, these factors should be positive for the link to
be viable. A specic link, with a certain length and signal bandwidth, may not be constrained by
the attenuation factor, but by the corresponding NEXT (or ELFEXT), whose value should not be
negative. In this case, the link length may have to be decreased in order to make this factor positive.
4.1.4 Connectors and Cables
Twisted pair cables may use different types of connectors. The IEEE 802.3z at 1 Gbps
(1000BASE-CX) uses the DB9 or HSSDC connectors. Most of the other physical layers of the
IEEE 802.3 network use the 8P8C modular connector, also referred to as the ISO 8877 connector
(sometimes also generically called RJ45 connectors).
Tables 4.4 and 4.5 list the two most common termination layouts utilized in 8P8C modular
connectors, namely T568A and T568B.
Using one or another connector layout is not relevant, as long as the whole installation is coherent.
Nevertheless, it is worth noting that the standard ANSI/TIA/EIA-T568-A is the most common. In
fact, the ISO 8877 connector adopts ANSI/TIA/EIA-T568-A, leaving the standard ANSI/TIA/EIA-T568-B as an option.
FIGURE 4.2 Attenuation coefficient of a twisted pair with 0.5 mm copper wires as a function of frequency.
In terms of cabling, although more than four pairs could be employed in horizontal cabling, the
latter configuration is rare. The four pairs can be employed to serve two terminals in full-duplex
operation. In contrast, in vertical cabling, depending on the number of terminals to serve, a cable
with more than four pairs can be employed.
In the most common four-pair conguration, the color codes employed in four pair cables are
listed in Table 4.6.
IEEE 802.3 cabling, also referred to as Ethernet cabling, may have two different basic
configurations:
· Straight-through cable is typically used to interconnect:
· A switch/hub to a computer (PC or server) or network printer
· A router to a modem
· A router to a switch/hub
· Crossover cable is typically used to interconnect:
· A router to a computer
· Two computers
· Two switches/hubs
· Two routers
The visual identication of straight-through and crossover cables is simple. The wire arrangement
of both cable terminals of straight-through cables (and patch cables used to interconnect, e.g., a
patch panel to a switch) is the same, whereas the wire arrangement of the two sides of crossover
cables is different. Pins 1 and 2* are used for transmit, whereas pins 3 and 6† are used for receive.
The transmit pins in one terminal of a crossover cable becomes the receive pins in the other terminal, and vice versa. Consequently, pins 1 and 2 in terminal A of a crossover cable become pins 3
and 6 of the terminal B, and pins 3 and 6 of terminal A become pins 1 and 2 of terminal B. While a
straight-through cable (either T568A or T568B) has the same wire arrangements at both terminals,
a crossover cable can be viewed as a cable with T568A wire arrangement on one terminal and with
T568B wire arrangement on the other terminal. The straight-through termination of T568B can be
memorized with the mnemonic OGBB (orange, green, blue, brown), where the first wire of each
pair has a full color (example: wire 1 is full orange), and the second wire of each pair has white with
lists (example: wire 2 is orange with lists). In the case of T568A, the mnemonic becomes GOBB,
with the same remaining rules.
When configuring Cisco equipment from a workstation using a console cable for the interconnection between these two pieces of equipment, the wire arrangement is different from the two
described above. In this case, the wire arrangement is reversed, that is, wire 1 on one termination
becomes wire 8 on the other, wire 2 on one termination becomes wire 7 on the other, wire 3 on one
termination becomes wire 6 on the other, and so on. Very often, a console cable connects to the
TABLE 4.6
Color Code of Four-Pair Twisted Pair Cables
Pair    Color     Conductor 1    Conductor 2
1       Blue      Blue           White with blue lists
2       Orange    Orange         White with orange lists
3       Green     Green          White with green lists
4       Brown     Brown          White with brown lists
*
Pair 3 of T568A termination or pair 2 of T568B termination.
†
Pair 2 of T568A termination or pair 3 of T568B termination.