
Unit-2: Data Link Layer

Services and Data Link Devices (Switch, Bridge); Framing, Flow Control and Error Control; Elementary
Data Link Protocols; Sliding Window Protocols; HDLC, SLIP and PPP; Media Access Control Layer (Carrier
Sense Multiple Access / Collision Detection)

Multiple Access
The upper sublayer of the data link layer, which is responsible for flow and error control, is called the logical link
control (LLC) sublayer; the lower sublayer, which is mostly responsible for multiple-access resolution, is called the
media access control (MAC) sublayer.

According to CSMA/CD, a node should not send a packet unless the network is clear of traffic. If two nodes
send packets at the same time, a collision occurs and the packets are lost. Then both nodes send a jam signal,
wait for a random amount of time, and retransmit their packets. Any part of the network where packets from two
or more nodes can interfere with each other is considered a collision domain. A network with a larger number of
nodes on the same segment has a larger collision domain and typically has more traffic. As the amount of traffic
in the network increases, the likelihood of collisions increases.

CSMA/CD Algorithm:
1. If the medium is idle, transmit; otherwise, go to step 2.
2. If the medium is busy, continue to listen until the channel is idle, then transmit immediately.
3. If a collision is detected during transmission, transmit a brief jamming signal to assure that all stations
know that there has been a collision and then cease transmission.
4. After transmitting the jamming signal, wait a random amount of time, then attempt to transmit again
(repeat from step 1).
Traditional Ethernet uses CSMA/CD.
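The sketch below simulates these four steps in Python. The channel model, the collision probabilities and the class and function names are invented purely for illustration; the binary exponential backoff shown is how classic Ethernet chooses the "random amount of time" in step 4, while the algorithm above only requires some random wait.

import random
import time

SLOT_TIME = 51.2e-6        # 10-Mbps Ethernet slot time, in seconds (assumption for the demo)


class SimulatedChannel:
    """Toy shared medium: randomly busy, randomly produces collisions."""

    def busy(self) -> bool:
        return random.random() < 0.3            # 30% chance the medium is sensed busy

    def transmit(self, frame) -> bool:
        """Send the frame; return True if a collision was detected while sending."""
        return random.random() < 0.2            # 20% chance of a collision


def csma_cd_send(channel, frame, max_attempts=16):
    """Transmit one frame following the four steps listed above."""
    for attempt in range(1, max_attempts + 1):
        # Steps 1-2: carrier sense -- listen until the medium is idle, then transmit.
        while channel.busy():
            pass
        collided = channel.transmit(frame)
        if not collided:
            return True                          # no collision: the frame got through
        # Step 3: a real adapter would now send a brief jam signal and stop transmitting.
        # Step 4: wait a random time and try again (classic Ethernet uses binary
        # exponential backoff: 0 .. 2^k - 1 slot times, with k capped at 10).
        k = min(attempt, 10)
        time.sleep(random.randint(0, 2 ** k - 1) * SLOT_TIME)
    return False                                 # excessive collisions: give up


if __name__ == "__main__":
    ok = csma_cd_send(SimulatedChannel(), b"hello")
    print("delivered" if ok else "dropped after 16 attempts")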

Bridge:
A networking component used either to extend or to segment networks. Bridges work at the OSI data-link layer.
They can be used both to join dissimilar media such as unshielded twisted-pair (UTP) cabling and fiber-optic
cabling, and to join different network architectures such as Token Ring and Ethernet. Bridges regenerate signals
but do not perform any protocol conversion, so the same networking protocol (such as TCP/IP) must be running
on both network segments connected to the bridge. Bridges can also support Simple Network Management
Protocol (SNMP), and they can have other diagnostic features.

How it works?

Bridges operate by sensing the source MAC addresses of the transmitting nodes on the network and
automatically building an internal forwarding table (MAC address table). This table is used to determine which connected segment to
route packets to, and it provides the filtering capability that bridges are known for. If the bridge knows which
segment a packet is intended for, it forwards the packet directly to that segment. If the bridge doesn’t recognize
the packet’s destination address, it forwards the packet to all connected segments except the one it originated on.
And if the destination address is in the same segment as the source address, the bridge drops the packet. Bridges
also forward broadcast packets to all segments except the originating one.
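A minimal Python sketch of this learn-and-filter behaviour follows. The class, method names and frame representation are assumptions for illustration; only the forwarding decision is modelled.

from typing import Dict, List


class LearningBridge:
    """Learn source addresses, then filter or forward frames as described above."""

    def __init__(self, num_ports: int) -> None:
        self.num_ports = num_ports
        self.table: Dict[str, int] = {}          # MAC address -> port it was last seen on

    def handle_frame(self, src_mac: str, dst_mac: str, in_port: int) -> List[int]:
        """Return the list of ports the frame should be sent out of."""
        self.table[src_mac] = in_port            # learn which segment the source lives on
        flood = [p for p in range(self.num_ports) if p != in_port]
        if dst_mac == "ff:ff:ff:ff:ff:ff":
            return flood                         # broadcast: all segments except the origin
        out_port = self.table.get(dst_mac)
        if out_port is None:
            return flood                         # unknown destination: flood
        if out_port == in_port:
            return []                            # same segment as the source: filter (drop)
        return [out_port]                        # known destination: forward directly


bridge = LearningBridge(num_ports=4)
print(bridge.handle_frame("aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", in_port=0))  # [1, 2, 3]
print(bridge.handle_frame("aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01", in_port=1))  # [0]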

Hub:
The basic networking component used in traditional 10-Mbps Ethernet networks to connect network
stations to form a local area network (LAN). Hubs can be used for
• Connecting about a dozen computers to form a workgroup or departmental LAN
• Connecting other hubs in a cascaded star topology to form a larger LAN of up to roughly a hundred
computers
How It Works
Hubs are the foundation of traditional 10BaseT Ethernet networks. The hub receives signals from each station and repeats the signals to all other stations connected to the hub. In active hubs (which all of today's hubs are), the signal received from one port is regenerated (amplified) and retransmitted to the other ports on the hub. Hubs thus perform the function of a repeater and are sometimes called multiport repeaters. From a logical cabling point of view, stations wired into a hub form a star topology.
Hubs generally have RJ-45 ports for unshielded twisted-pair (UTP) cabling, and they range in size from 4 to 24 or more ports for connecting stations to the hub, plus one or more uplink ports for connecting the hub to other hubs in a cascaded star topology. Hubs generally have various light-emitting diode (LED) indicator lights to indicate the status of each port, link status, collisions, and so on.

Switch:
Switch is essentially a multi-port bridge. Switches allow the segmentation of the LAN into separate collision
domains. Each port of the switch represents a separate collision domain and provides the full media bandwidth
to the node or nodes connected on that port. With fewer nodes in each collision domain, there is an increase in
the average bandwidth available to each node, and collisions are reduced.

Why Switches:
In a LAN where all nodes are connected directly to the switch, the throughput of the network increases
dramatically. The three primary reasons for this increase are:
• Dedicated bandwidth to each port
• Collision-free environment
• Full-duplex operation

Hub VS Switch:

Hub                                              Switch
Works on the physical layer                      Works on the data link layer
Half-duplex                                      Full-duplex
Extends the collision domain                     Splits the collision domain (each switch port is its own collision domain)
Multiport repeater                               Multiport bridge
Overall bandwidth is shared                      Each port receives its own bandwidth
Cheap                                            Expensive
Not used in today's market due to degraded performance    Mostly used today

There are three forwarding methods a switch can use:

• Cut-through -- the switch starts forwarding a frame before the whole frame has been received, normally as soon as the destination address has been processed. This technique reduces latency through the switch, but because the frame check sequence cannot be verified before forwarding, reliability decreases.
• Store-and-forward -- the switch, unlike cut-through, buffers the entire frame and typically performs a checksum (FCS) check on each frame before forwarding it on.
• Fragment-free -- a compromise between the two: the switch reads the first 64 bytes of the frame (the minimum legal Ethernet frame size, which covers the collision window) before forwarding, so collision fragments (runts) are not propagated. As with cut-through, forwarding begins before the whole frame has arrived, so the inbound and outbound ports must run at the same speed; such a port cannot forward directly to a higher-speed network (for example, from a 10 Mbit/s to a 100 Mbit/s Ethernet network).

Framing:
The data link layer needs to pack bits into frames, so that each frame is distinguishable from another. The Data
Link layer prepares a packet for transport across the local media by encapsulating it with a header and a trailer to
create a frame.

The Data Link layer frame includes:


• Data - The packet from the Network layer
• Header - Contains control information, such as addressing, and is located at the beginning of the PDU
• Trailer - Contains control information added to the end of the PDU

Our postal system practices a type of framing. The simple act of inserting a letter into an envelope separates one
piece of information from another; the envelope serves as the delimiter. In addition, each envelope defines the
sender and receiver addresses since the postal system is a many-to-many carrier
facility. Framing in the data link layer separates a message from one source to a destination, or from other
messages to other destinations, by adding a sender address and a destination address. The destination address
defines where the packet is to go; the sender address helps the recipient acknowledge the receipt.

Fixed-Size Framing
Frames can be of fixed or variable size. In fixed-size framing, there is no need for defining the boundaries of the
frames; the size itself can be used as a delimiter. An example of this type of framing is the ATM wide-area
network, which uses frames of fixed size called cells.

Variable-Size Framing
Variable-size framing is prevalent in local-area networks. In variable-size framing, we need a way to define the
end of the frame and the beginning of the next. Historically, two approaches were used for this purpose: a
character-oriented approach and a bit-oriented approach.

Character-Oriented Protocols
In a character-oriented protocol, data to be carried are 8-bit characters from a coding system such as ASCII. The header, which normally carries the source and destination addresses and other control
information, and the trailer, which carries error detection or error correction redundant bits, are also multiples of
8 bits. To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the end of a
frame. The flag, composed of protocol-dependent special characters, signals the start or end of a frame.
Any pattern used for the flag could also be part of the information. To fix this problem, a byte-stuffing strategy
was added to character-oriented framing. In byte stuffing (or character stuffing), a special byte is added to the
data section of the frame when there is a character with the same pattern as the flag. The data section is stuffed
with an extra byte. This byte is usually called the escape character (ESC), which has a predefined bit pattern.
Whenever the receiver encounters the ESC character, it removes it from the data section and treats the next
character as data, not a delimiting flag.
Character-oriented protocols present a problem in data communications. The universal coding systems in use
today, such as Unicode, have 16-bit and 32-bit characters that conflict with 8-bit characters. We can say that in
general, the tendency is moving toward the bit-oriented protocols that we discuss next.
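The following Python sketch illustrates the byte stuffing and unstuffing just described. The flag and ESC values (0x7E and 0x7D) are only example byte patterns here; a real character-oriented protocol defines its own special characters.

FLAG = 0x7E    # example flag byte; the actual value is defined by the protocol in use
ESC = 0x7D     # example escape (ESC) byte


def byte_stuff(data: bytes) -> bytes:
    """Sender: insert ESC before any data byte that matches FLAG or ESC."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)                      # stuff an extra escape byte
        out.append(b)
    return bytes(out)


def byte_unstuff(stuffed: bytes) -> bytes:
    """Receiver: drop each ESC and treat the byte that follows as ordinary data."""
    out = bytearray()
    i = 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            i += 1                               # the next byte is data, not a delimiter
        out.append(stuffed[i])
        i += 1
    return bytes(out)


payload = bytes([0x41, FLAG, 0x42, ESC, 0x43])   # data that happens to contain flag/ESC bytes
assert byte_unstuff(byte_stuff(payload)) == payload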

Bit-Oriented Protocols
In a bit-oriented protocol, the data section of a frame is a sequence of bits to be interpreted by the upper layer as
text, graphic, audio, video, and so on. However, in addition to headers (and possible trailers), we still need a
delimiter to separate one frame from the other. Most protocols use a special 8-bit pattern flag 01111110 as the
delimiter to define the beginning and the end of the frame.

Fig:A frame in a bit-oriented protocol

This flag can create the same type of problem we saw in the byte-oriented protocols. That is, if the flag pattern
appears in the data, we need to somehow inform the receiver that this is not the end of the frame. We do this by
stuffing a single bit (instead of a byte) to prevent the pattern from looking like a flag. The strategy is called bit
stuffing. In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is added. This extra stuffed
bit is eventually removed from the data by the receiver. Note that the extra bit is added after one 0 followed by
five 1s regardless of the value of the next bit. This guarantees that the flag field sequence does not inadvertently
appear in the frame.

Fig: Bit stuffing and unstuffing

This means that if the flag-like pattern 01111110 appears in the data, it will be changed to 011111010 (stuffed) and will not be mistaken for a flag by the receiver. The real flag 01111110 is not stuffed by the sender and is recognized by the receiver.
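A small Python sketch of bit stuffing and unstuffing, working on bit strings for readability. It stuffs a 0 after every run of five consecutive 1s, which reproduces the 01111110 to 011111010 example above; the function names are illustrative.

def bit_stuff(bits: str) -> str:
    """Sender: insert a 0 after every run of five consecutive 1s in the data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")                      # the stuffed bit
            run = 0
    return "".join(out)


def bit_unstuff(bits: str) -> str:
    """Receiver: remove the bit that follows five consecutive 1s (it was stuffed)."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1                               # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)


data = "01111110"                                # a flag-like pattern inside the data
assert bit_stuff(data) == "011111010"            # matches the example in the text
assert bit_unstuff(bit_stuff(data)) == data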

Architecture:
The architecture of a LAN can be considered as a set of layered protocols.
In OSI terms, the higher layer protocols are totally independent of the LAN architecture. Hence, only lower
order layers are considered for the design of LAN architecture.
The data link layer of a LAN is split into two sublayers:
- Medium Access Control (MAC)
- Logical Link Control (LLC)
The IEEE 802 committee formulated the standards for LANs.

IEEE Standards:

LLC Frame Format:

Destination Service Access Point (DSAP) -- The IEEE 802.2 header begins with a 1-byte field, which identifies the
receiving upper-layer process.

Source Service Access Point (SSAP) -- Following the DSAP address is the 1-byte address, which identifies the
sending upper-layer process.

Control -- The Control field employs three different formats, depending on the type of LLC frame used:

• Information (I) frame -- Carries upper-layer information and some control information.
• Supervisory (S) frame -- Provides control information. An S frame can request and suspend
transmission, report on status, and acknowledge receipt of I frames. S frames do not have an
Information field.
• Unnumbered (U) frame -- Used for control purposes and is not sequenced. A U frame can be used to
initialize secondaries. Depending on the function of the U frame, its Control field is 1 or 2 bytes. Some
U frames have an Information field.
Data -- Variable-length field bounded by the MAC format implemented. Usually contains IEEE 802.2
Subnetwork Access Protocol (SNAP) header information, as well as application-specific data.

MAC Frame Format:

IEEE 802.3 Ethernet Frame Format:

Flow Control:
Flow control is a technique for assuring that a transmitting entity does not overwhelm a receiving entity with
data. Flow control is a set of procedures that tells the sender how much data it can transmit before it must wait
for an acknowledgment from the receiver.

Stop-and-wait Flow Control:
A source entity transmits a frame. After reception, the destination entity indicates its willingness to accept
another frame by sending back an acknowledgment for the frame just received. The source must wait until it
receives the acknowledgment before sending the next frame. The destination can thus stop the flow of
data simply by withholding acknowledgments. With the use of multiple frames for a single message, the stop-and-wait
procedure may be inadequate. The essence of the problem is that only one frame at a time can be in transit.
For very high data rates, or for very long distances between sender and receiver, stop-and-wait
flow control provides inefficient line utilization.

Sliding Window Flow Control:

The major drawback of stop-and-wait flow control is that only one frame can be transmitted at a time; this leads to
inefficiency if the propagation delay is much longer than the transmission delay. Sliding window flow control allows the
transmission of multiple frames. It assigns each frame a k-bit sequence number, so the range of sequence numbers is
[0 .. 2^k - 1].
Let us examine how this might work for two stations, A and B, connected via a full-duplex link. Station B
allocates buffer space for n frames. Thus, B can accept n frames, and A is allowed to send n frames without
waiting for any acknowledgments. To keep track of which frames have been acknowledged, each is labeled with
a sequence number. B acknowledges a frame by sending an acknowledgment that
includes the sequence number of the next frame expected. This acknowledgment also implicitly announces that
B is prepared to receive the next n frames, beginning with the number specified. This scheme can also be used to
acknowledge multiple frames.

How Flow control is achieved?
• Receiver can control the size of the sending window.
• By limiting the size of the sending window data flow from sender to receiver can be limited .

The example assumes a 3-bit sequence number field and a maximum window size of seven frames. Initially, A
and B have windows indicating that A may transmit seven frames, beginning with frame 0 (F0). After
transmitting three frames (F0, F1, F2) without acknowledgment, A has shrunk its window to four frames. The
window indicates that A may transmit four frames, beginning with frame number 3. B then transmits an RR
(receive-ready) 3, which means: "I have received all frames up through frame number 2 and am ready to receive
frame number 3; in fact, I am prepared to receive seven frames, beginning with frame number 3." With this
acknowledgment, A is back up to permission to transmit seven frames, still beginning with frame 3. A proceeds
to transmit frames 3, 4, 5, and 6. B returns an RR 4, which allows A to send up to and including frame F2.

Example of Sliding window Protocol
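The sender-side window arithmetic from this example can be captured in a few lines of Python. The class and method names are invented for illustration; only the bookkeeping is modelled, not the actual transmission or timers.

class SlidingWindowSender:
    """Sender-side bookkeeping only: how many frames may still be sent."""

    def __init__(self, k: int = 3, window: int = 7):
        self.modulus = 2 ** k                    # sequence numbers run 0 .. 2^k - 1
        self.window = window
        self.base = 0                            # oldest unacknowledged frame
        self.next_seq = 0                        # sequence number of the next new frame

    def frames_allowed(self) -> int:
        return self.window - (self.next_seq - self.base) % self.modulus

    def send(self) -> int:
        assert self.frames_allowed() > 0, "window closed -- must wait for an RR"
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % self.modulus
        return seq

    def receive_rr(self, n: int) -> None:
        # RR n acknowledges everything up to frame n-1 and slides the window to n.
        self.base = n % self.modulus


a = SlidingWindowSender()
for _ in range(3):                               # A sends F0, F1, F2
    a.send()
print(a.frames_allowed())                        # 4 : the window has shrunk to four frames
a.receive_rr(3)                                  # B sends RR 3
print(a.frames_allowed())                        # 7 : window restored, starting at frame 3
for _ in range(4):                               # A sends F3, F4, F5, F6
    a.send()
a.receive_rr(4)                                  # B sends RR 4
print(a.frames_allowed())                        # 4 : A may still send F7, F0, F1, F2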

Error Control:
When data is transmitted over a cable or channel, there is always a chance that some of the bits will be changed
(corrupted) due to noise, signal distortion or attenuation. If errors do occur, then some of the bits will either
change from 0 to 1 or from 1 to 0.

Error Control allows the receiver to inform the sender of any frames lost or damaged in transmission and
coordinates the retransmission of those frames by the sender. Error control is divided into two main categories:

Error Detection -- It allows a receiver to check whether received data has been corrupted during transmission. It
can, for example, request a retransmission.

Error Correction -- This type of error control allows a receiver to reconstruct the original information when it has
been corrupted during transmission.

In the data link layer, the term error control refers primarily to methods of error detection and retransmission.
Error control in the data link layer is often implemented simply: Any time an error is detected in an exchange,
specified frames are retransmitted. This process is called automatic repeat request (ARQ).

Error control in the data link layer is based on automatic repeat request, which is the retransmission of data.

There are two ways to correct detected errors:

• Forward error correction (FEC) -- accomplished by adding redundancy to the transmitted information
using a predetermined algorithm. Each redundant bit is invariably a complex function of many original
information bits. The original information may or may not appear in the encoded output; codes that
include the unmodified input in the output are systematic, while those that do not are nonsystematic.
• Automatic repeat request (ARQ) -- the receiver detects transmission errors in a message and
automatically requests a retransmission from the transmitter. Usually, when the transmitter receives the
ARQ, it retransmits the message until it is either correctly received or the error persists beyond a
predetermined number of retransmissions. A few types of ARQ protocols are Stop-and-Wait ARQ,
Go-Back-N ARQ and Selective Repeat ARQ.

Hamming distance:
One of the central concepts in coding for error control is the idea of the Hamming distance. The Hamming
distance between two words (of the same size) is the number of differences between the corresponding bits. We
show the Hamming distance between two words x and y as d(x, y). The Hamming distance can easily be found if
we apply the XOR operation on the two words and count the number of 1s in the result. Note that the Hamming
distance is a value greater than or equal to zero; it is zero only when the two words are identical.

Let us find the Hamming distance between two pairs of words.


1. The Hamming distance d(000, 011) is 2 because 000 XOR 011 is 011 (two 1s).
2. The Hamming distance d(10101, 11110) is 3 because 10101 XOR 11110 is 01011 (three 1s).
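In code, the Hamming distance is exactly the XOR-and-count-the-1s operation described above; a short Python version reproduces both examples.

def hamming_distance(x: int, y: int) -> int:
    """XOR the two words, then count the 1s in the result."""
    return bin(x ^ y).count("1")


print(hamming_distance(0b000, 0b011))            # 2
print(hamming_distance(0b10101, 0b11110))        # 3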

Error Detection:
There are three ways to detect errors.
1. Parity check
2. CRC
3. Checksum

1. Parity Check:
The simplest error-detection scheme is to append a parity bit to the end of a block of data . A typical example is
ASCII transmission, in which a parity bit is attached to each 7-bit ASCII character. The value of this bit is
selected so that the character has an even number of 1s (even parity) or an odd number of 1s (odd parity).

One bit Even Parity


So, for example, if the transmitter is transmitting an ASCII G (1110001) and using odd parity, it will append a 1 and transmit
11100011. The receiver examines the received character and, if the total number of 1s is odd, assumes that no
error has occurred. If one bit (or any odd number of bits) is erroneously inverted during transmission (for
example, 11000011), then the receiver will detect an error. Note, however, that if two (or any even number) of
bits are inverted due to error, an undetected error occurs. Typically, even parity is used for synchronous
transmission and odd parity for asynchronous transmission. The use of the parity bit is not foolproof, as noise
impulses are often long enough to destroy more than one bit, particularly at high data rates.
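A short Python sketch of the parity mechanism, reproducing the ASCII G example above (the function names are illustrative):

def add_parity(bits: str, odd: bool = True) -> str:
    """Append one parity bit so the result has an odd (or even) number of 1s."""
    ones = bits.count("1")
    parity = "1" if (ones % 2 == 0) == odd else "0"
    return bits + parity


def parity_ok(bits: str, odd: bool = True) -> bool:
    ones = bits.count("1")
    return ones % 2 == (1 if odd else 0)


sent = add_parity("1110001", odd=True)           # the ASCII G example above
print(sent)                                      # 11100011
corrupted = "11000011"                           # one bit inverted in transit
print(parity_ok(sent), parity_ok(corrupted))     # True False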

2. Cyclic Redundancy Check:

CRC codes operate as follows. Consider the d-bit piece of data, D, that the sending node wants to send to the
receiving node. The sender and receiver must first agree on an (r+1)-bit pattern, known as a generator, which we
will denote as G. We will require that the high- and low-order bits of G be 1 (e.g., 10111 is acceptable, but
0101 and 10110 are not). The key idea behind CRC codes is this: for a given piece of data, D, the
sender will choose r additional bits, R, and append them to D such that the resulting (d+r)-bit pattern (interpreted
as a binary number) is exactly divisible by G using modulo-2 arithmetic. The process of error checking with
CRCs is thus simple: the receiver divides the d+r received bits by G. If the remainder is non-zero, the receiver
knows that an error has occurred; otherwise the data is accepted as being correct.

All CRC calculations are done in modulo 2 arithmetic without carries in addition or borrows in subtraction. This
means that addition and subtraction are identical, and both are equivalent to the bitwise exclusive-or (XOR) of
the operands. Thus, for example,

1011 XOR 0101 = 1110


1001 XOR 1101 = 0100

Also, we similarly have

1011 - 0101 = 1110
1001 - 1101 = 0100

Multiplication and division are the same as in base-2 arithmetic, except that any required addition or
subtraction is done without carries or borrows. As in regular binary arithmetic, multiplication by 2^k left-shifts a
bit pattern by k places. Thus, given D and R, the quantity D*2^r XOR R yields the (d+r)-bit pattern described
above.

International standards have been defined for 8-, 12-, 16- and 32-bit generators, G. An 8-bit CRC is used to
protect the 5-byte header in ATM cells.

The following example illustrates this calculation for the case of D = 101110, d = 6 and G = 1001, r = 3. The nine bits
transmitted in this case are 101110 011. You should check these calculations for yourself and also check that
indeed D*2^r = 101011 * G XOR R.

A second way of viewing the CRC process is to express all values as polynomials in a dummy variable X, with
binary coefficients. The coefficients correspond to the bits in the binary number. Thus for D = 110011 we have
D(X) = X^5 + X^4 + X + 1, and for a generator P = 11001 we have P(X) = X^4 + X^3 + 1. Arithmetic operations are again modulo 2.
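The modulo-2 long division described above is easy to reproduce in Python. The sketch below (function names are illustrative) computes R for the D = 101110, G = 1001 example and then shows the receiver's check returning a zero remainder.

def mod2_div(bits: str, generator: str) -> str:
    """Modulo-2 (XOR) long division; returns the r-bit remainder as a bit string."""
    g = [int(b) for b in generator]
    rem = [int(b) for b in bits]
    for i in range(len(bits) - len(generator) + 1):
        if rem[i] == 1:                          # divide only where the leading bit is 1
            for j, gbit in enumerate(g):
                rem[i + j] ^= gbit
    r = len(generator) - 1
    return "".join(str(b) for b in rem[-r:])


def crc_remainder(data_bits: str, generator: str) -> str:
    """R = remainder of D * 2^r divided by G: append r zeros, then divide."""
    r = len(generator) - 1
    return mod2_div(data_bits + "0" * r, generator)


D, G = "101110", "1001"                          # the example from the text (d = 6, r = 3)
R = crc_remainder(D, G)
print(R)                                         # 011 -> the sender transmits 101110 011
print(mod2_div(D + R, G))                        # 000 -> the receiver sees a zero remainder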

3. Checksum:
The checksum is used in the Internet by several protocols although not at the data link layer. However,
we briefly discuss it here to complete our discussion on error checking.

Example1:
Suppose our data is a list of five 4-bit numbers that we want to send to a destination. In addition to sending these
numbers, we send the sum of the numbers. For example, if the set of numbers is (7, 11, 12, 0, 6), we send
(7, 11, 12, 0, 6, 36), where 36 is the sum of the original numbers. The receiver adds the five numbers and compares the
result with the sum. If the two are the same, the receiver assumes no error, accepts the five numbers, and
discards the sum. Otherwise, there is an error somewhere and the data are not accepted.

Example2:
We can make the job of the receiver easier if we send the negative (complement) of the sum, called the
checksum. In this case, we send (7, 11, 12, 0, 6, -36). The receiver can add all the numbers received (including the
checksum). If the result is 0, it assumes no error; otherwise, there is an error.

Internet Checksum
Traditionally, the Internet has been using a 16-bit checksum. The sender calculates the checksum by following
these steps.
Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are added using one's complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.

The receiver uses the following steps for error detection.

Receiver site:
1. The message (including checksum) is divided into 16-bit words.
2. All words are added using one's complement addition.
3. The sum is complemented and becomes the new checksum.
4. If the value of the checksum is 0, the message is accepted; otherwise, it is rejected.
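A Python sketch of the sender and receiver steps above. It processes the message as 16-bit big-endian words and pads an odd final byte with zero; the function names are illustrative.

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of all 16-bit words, complemented at the end."""
    if len(data) % 2:
        data += b"\x00"                          # pad an odd final byte with zero
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16) # wrap the carry around (one's complement)
    return (~total) & 0xFFFF


def checksum_ok(data: bytes, checksum: int) -> bool:
    """Receiver: add all words plus the checksum; the complement must be 0."""
    return internet_checksum(data + checksum.to_bytes(2, "big")) == 0


msg = b"NETWORKS"
c = internet_checksum(msg)
print(hex(c), checksum_ok(msg, c))               # the check prints True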

Error Correction:
Any time an error is detected in an exchange, specified frames are retransmitted. This process is called automatic
repeat request (ARQ). Error control in the data link layer is based on automatic repeat request, which is the
retransmission of data.

Three versions of ARQ have been standardized:


Stop-and-wait ARQ
Go-back-N ARQ
Selective-reject ARQ

Stop-and-wait ARQ:
Stop-and-wait ARQ is based on the stop-and-wait flow-control technique outlined previously and is depicted in the figure alongside.

• The source station transmits a single frame and then must await an acknowledgement (ACK). No other data frames can be sent until the destination station's reply arrives at the source station.

• The sending device keeps a copy of the last frame until it receives an acknowledgement for that frame. Keeping a copy allows the sender to retransmit lost or damaged frames until they are received correctly.

• For identification purposes, both data frames and acknowledgement frames (ACK) are numbered 0 and 1. A data 0 frame is acknowledged by an ACK 1 frame, indicating that the receiver has received data frame 0 and is now expecting data frame 1.

• The sender starts a timer when it sends a frame. If an acknowledgement is not received within an allotted time period, the sender assumes that the frame was lost or damaged and resends it.

• The receiver sends only a positive ACK for a frame received safe and sound. It is silent about frames that are damaged or lost. The acknowledgement number always defines the number of the next expected frame. If frame 0 is received, ACK 1 is sent; if frame 1 is received, ACK 0 is sent.

Fig: Stop-and-Wait ARQ

Bidirectional Transmission:
The stop-and-wait mechanism we have discussed is unidirectional. However, we can have bidirectional
transmission if the two parties have two separate channels for full-duplex transmission or share the same
channel for half-duplex transmission.
Piggybacking is a method of combining a data frame with an acknowledgement. For example, suppose stations A and B
both have data to send. Instead of sending separate data and ACK frames, station A sends a data frame that
includes an ACK; station B behaves in a similar manner.
Piggybacking can save bandwidth because the overhead of a data frame and an ACK frame (addresses, CRC, etc.) can
be combined into just one frame.

Go-Back-N ARQ:
• The form of error control based on sliding-window flow control that is most commonly used is called go-back-N ARQ.

• In go-back-N ARQ, a station may send a series of frames sequentially numbered modulo some maximum value.

• The number of unacknowledged frames outstanding is determined by the window size, using the sliding-window flow control technique.

• While no errors occur, the destination will acknowledge (RR = receive ready) incoming frames as usual.

• If the destination station detects an error in a frame, it sends a negative acknowledgment (REJ = reject) for that frame. The destination station will discard that frame and all future incoming frames until the frame in error is correctly received. Thus, the source station, when it receives an REJ, must retransmit the frame in error plus all succeeding frames that were transmitted in the interim.

Fig: Go-Back-N ARQ

Consider that station A is sending frames to station B. After each transmission, A sets an acknowledgment timer for the frame just transmitted. The go-back-N technique takes into account the following contingencies:

1. Damaged frame. There are three sub-cases:


a) A transmits frame i. B detects an error and has previously successfully received frame (i - 1). B sends
REJ i, indicating that frame i is rejected. When A receives the REJ, it must retransmit frame i and all
subsequent frames that it has transmitted since the original transmission of frame i.

b) Frame i is lost in transit. A subsequently sends frame (i + 1). B receives frame (i + 1) out of order and
sends an REJ i. A must retransmit frame i and all subsequent frames.

c) Frame i is lost in transit, and A does not soon send additional frames. B receives nothing and returns
neither an RR nor an REJ. When A's timer expires, it transmits an RR frame that includes a bit known as
the P bit, which is set to 1. B interprets the RR frame with a P bit of 1 as a command that must be
acknowledged by sending an RR indicating the next frame that it expects. When A receives the RR, it
retransmits frame i.

2. Damaged RR. There are two sub-cases:


a) B receives frame i and sends RR (i + 1), which is lost in transit. Because acknowledgments are
cumulative (e.g., RR 6 means that all frames through 5 are acknowledged), it may be that A will receive
a subsequent RR for a subsequent frame and that it will arrive before the timer associated with frame i
expires.

b) If A's timer expires, it transmits an RR command as in Case 1c. It sets another timer, called the P-bit
timer. If B fails to respond to the RR command, or if its response is damaged, then A's P-bit timer will
expire. At this point, A will try again by issuing a new RR command and restarting the P-bit timer. This
procedure is tried for a number of iterations. If A fails to obtain an acknowledgment after some
maximum number of attempts, it initiates a reset procedure.

3. Damaged REJ. If an REJ is lost, this is equivalent to Case 1c.
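The go-back behaviour in contingency 1 can be sketched as sender-side bookkeeping in Python. The class below is illustrative only: it keeps copies of unacknowledged frames, slides the window on a cumulative RR, and on an REJ (or a timeout) returns frame i plus every frame sent after it for retransmission.

class GoBackNSender:
    """Sender-side sketch: keep unacknowledged frames and go back on REJ/timeout."""

    def __init__(self, window: int = 7, modulus: int = 8):
        self.window, self.modulus = window, modulus
        self.frames = {}                         # seq -> copy kept until acknowledged
        self.base = 0                            # oldest unacknowledged sequence number
        self.next_seq = 0

    def send(self, payload) -> int:
        assert (self.next_seq - self.base) % self.modulus < self.window
        seq = self.next_seq
        self.frames[seq] = payload               # keep a copy for possible retransmission
        self.next_seq = (seq + 1) % self.modulus
        return seq                               # a real link would transmit the frame here

    def on_rr(self, n: int) -> None:
        """RR n: cumulative acknowledgement of every frame before n."""
        while self.base != n % self.modulus:
            del self.frames[self.base]
            self.base = (self.base + 1) % self.modulus

    def on_rej(self, i: int) -> list:
        """REJ i (or a timeout): retransmit frame i and every frame sent after it."""
        seq, resend = i % self.modulus, []
        while seq != self.next_seq:
            resend.append(self.frames[seq])
            seq = (seq + 1) % self.modulus
        return resend                            # these go back on the wire, in order


s = GoBackNSender()
for p in ["F0", "F1", "F2", "F3"]:
    s.send(p)
s.on_rr(2)                                       # frames 0 and 1 acknowledged
print(s.on_rej(2))                               # ['F2', 'F3'] -- frame 2 was damaged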

Selective-reject ARQ
With selective-reject ARQ, the only frames retransmitted are those that receive a negative acknowledgment, in
this case called SREJ, or that time-out. This would appear to be more efficient than go-back-N, because it
minimizes the amount of retransmission. On the other hand, the receiver must maintain a buffer large enough to
save post-SREJ frames until the frame in error is retransmitted, and it must contain logic for reinserting that
frame in the proper sequence. The transmitter, too, requires more complex logic to be able to send a frame out of
sequence. Because of such complications, selective-reject ARQ is much less widely used than go-back-N ARQ.

High-Level Data Link Control (HDLC)


High-level Data Link Control (HDLC) is a bit-oriented protocol for communication over point-to-point and
multipoint links. HDLC is a synchronous Data Link layer bit-oriented protocol developed by the International
Organization for Standardization (ISO). The current standard for HDLC is ISO 13239. HDLC was developed
from the Synchronous Data Link Control (SDLC) standard proposed in the 1970s. HDLC provides both
connection-oriented and connectionless service. HDLC uses synchronous serial transmission to provide error-
free communication between two points. HDLC defines a Layer 2 framing structure that allows for flow control
and error control through the use of acknowledgments. Each frame has the same format, whether it is a data
frame or a control frame. When you want to transmit frames over synchronous or asynchronous links, you must
remember that those links have no mechanism to mark the beginnings or ends of frames. HDLC uses a frame
delimiter, or flag, to mark the beginning and the end of each frame.

HDLC Frame Format:

Fig: HDLC Frame Format

HDLC defines three types of frames:


1. Information frames :(I-frames)
2. Supervisory frames (S-frames)
3. Unnumbered frames (U-frames)
Each type of frame serves as an envelope for the transmission of a different type of message. I-frames are used
to transport user data and control information relating to user data (piggybacking). S-frames are used only to
transport control information. U-frames are reserved for system management. Information carried by U-frames is
intended for managing the link itself.

LEGEND
N(S) = Send sequence number
N(R) = Receive sequence number
P/F = Poll/final bit

I Frames:
I- frames are designed to carry user data from the network layer. In addition, they can include flow and error
control information (piggybacking).
• The first bit defines the type. If the first bit of the control field is 0, this means the frame is an I-frame.
• The next 3 bits, called N(S), define the sequence number of the frame. Note that with 3 bits, we can
define a sequence number between 0 and 7; but in the extension format, in which the control field is 2
bytes, this field is larger.
• The last 3 bits, called N(R), correspond to the acknowledgment number when piggybacking is used .
• The single bit between N(S) and N(R) is called the P/F bit. The P/F field is a single bit with a dual
purpose. It has meaning only when it is set (bit = 1) and can mean poll or final. It means poll when the
frame is sent by a primary station to a secondary (when the address field contains the address of the
receiver). It means final when the frame is sent by a secondary to a primary (when the address field
contains the address of the sender).

S Frame:
Supervisory frames are used for flow and error control whenever piggybacking is either impossible or
inappropriate (e.g., when the station either has no data of its own to send or needs to send a command or
response other than an acknowledgment). S-frames do not have information fields.
• If the first 2 bits of the control field is 10, this means the frame is an S-frame.
• The last 3 bits, called N(R), corresponds to the acknowledgment number (ACK) or negative
acknowledgment number (NAK) depending on the type of S-frame.
• The 2 bits called code is used to define the type of S-frame itself. With 2 bits, we can have four types of
S-frames, as described below :

1. Receive ready (RR). If the value of the code subfield is 00, it is an RR S-frame. This kind of frame
acknowledges the receipt of a safe and sound frame or group of frames. In this case, the value of the N(R) field
defines the acknowledgment number.

2. Receive not ready (RNR). If the value of the code subfield is 10, it is an RNR S-frame. This kind of
frame is an RR frame with additional functions. It acknowledges the receipt of a frame or group of
frames, and it announces that the receiver is busy and cannot receive more frames. It acts as a kind of
congestion control mechanism by asking the sender to slow down. The value of N(R) is the
acknowledgment number.

3. Reject (REJ). If the value of the code subfield is 01, it is a REJ S-frame. This is a NAK frame, but not
like the one used for Selective Repeat ARQ. It is a NAK that can be used in Go-Back-N ARQ to
improve the efficiency of the process by informing the sender, before the sender's timer expires, that the
last frame is lost or damaged. The value of N(R) is the negative acknowledgment number.

4. Selective reject (SREJ). If the value of the code subfield is 11, it is an SREJ S-frame. This is a NAK
frame used in Selective Repeat ARQ. Note that the HDLC Protocol uses the term selective reject instead
of selective repeat. The value of N(R) is the negative acknowledgment number.

The fifth field in the control field is the P/F bit as discussed before.
The next 3 bits, called N(R), correspond to the ACK or NAK value.

U Frames:
Unnumbered frames are used to exchange session management and control information between connected
devices. Unlike S-frames, U-frames contain an information field, but one used for system management
information, not user data. As with S-frames, however, much of the information carried by U-frames is contained
in codes included in the control field. U-frame codes are divided into two sections: a 2-bit prefix before the P/F
bit and a 3-bit suffix after the P/F bit. Together, these two segments (5 bits) can be used to create up to 32
different types of U-frames.
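Putting the three layouts together, the control field can be decoded mechanically. The Python sketch below works on the control field written as an 8-character bit string, leftmost bit first, exactly as the descriptions above read it; the function name, the dictionary keys and the example control values are invented for illustration.

S_CODES = {"00": "RR", "10": "RNR", "01": "REJ", "11": "SREJ"}   # code values from the list above


def decode_control(bits: str) -> dict:
    """Split an 8-bit control field written as in the text (first bit on the left)."""
    assert len(bits) == 8 and set(bits) <= {"0", "1"}
    if bits[0] == "0":                           # I-frame: 0 | N(S) | P/F | N(R)
        return {"type": "I", "N(S)": bits[1:4], "P/F": bits[4], "N(R)": bits[5:8]}
    if bits[:2] == "10":                         # S-frame: 1 0 | code | P/F | N(R)
        return {"type": "S", "code": S_CODES[bits[2:4]], "P/F": bits[4], "N(R)": bits[5:8]}
    # U-frame: 1 1 | 2-bit prefix | P/F | 3-bit suffix (5 code bits in total)
    return {"type": "U", "code": bits[2:4] + bits[5:8], "P/F": bits[4]}


print(decode_control("01011010"))    # {'type': 'I', 'N(S)': '101', 'P/F': '1', 'N(R)': '010'}
print(decode_control("10011101"))    # {'type': 'S', 'code': 'REJ', 'P/F': '1', 'N(R)': '101'}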

Flag Field:
Flag fields delimit the frame at both ends with the unique pattern 01111110. A single flag may be used as the
closing flag for one frame and the opening flag for the next. On both sides of the user-network interface,
receivers are continuously hunting for the flag sequence to synchronize on the start of a frame. While receiving a
frame, a station continues to hunt for that sequence to determine the end of the frame. Since the pattern 01111110
may appear in the frame as well, a procedure known as bit stuffing is used.
After detecting a starting flag, the receiver monitors the bit stream. When a pattern of five 1s appears, the sixth
bit is examined. If this bit is 0, it is deleted. If the sixth bit is a 1 and the seventh bit is a 0, the combination is
accepted as a flag. If the sixth and seventh bits are both 1, the sender is indicating an abort condition. With the
use of bit stuffing, arbitrary bit patterns can be inserted into the data field of the frame. This property is known as
data transparency.

Address Field:
The address field identifies the secondary station that transmitted or is to receive the frame. This field is not
needed for point-to-point links, but is always included for the sake of uniformity.

Control Field: It defines the three types of HDLC frames: I-, S- and U-frames.

Information Field: This field is present only in I-frames and some U-frames.

Frame Check Sequence Field: It is an error-detecting code calculated from the remaining bits of the frame,
exclusive of flags. The normal code is a 16-bit CRC.

PPP (Point-to-Point Protocol):


Although HDLC is a general protocol that can be used for both point-to-point and multi- point configurations,
one of the most common protocols for point-to-point access is the Point-to-Point Protocol (PPP). Today, millions
of Internet users who need to connect their home computers to the server of an Internet service provider use PPP.
The majority of these users have a traditional modem; they are connected to the Internet through a telephone
line, which provides the services of the physical layer.

PPP provides several services:


1. PPP defines the format of the frame to be exchanged between devices.
2. PPP defines how two devices can negotiate the establishment of the link and the exchange of data.
3. PPP defines how network layer data are encapsulated in the data link frame.
4. PPP defines how two devices can authenticate each other.
5. PPP provides multiple network layer services supporting a variety of network layer protocols.

6. PPP provides connections over multiple links.
7. PPP provides network address configuration. This is particularly useful when a home user needs a
temporary network address to connect to the Internet.

On the other hand, to keep PPP simple, several services are missing:
1. PPP does not provide flow control. A sender can send several frames one after another with no concern
about overwhelming the receiver.
2. PPP has a very simple mechanism for error control. A CRC field is used to detect errors. If the frame is
corrupted, it is silently discarded; the upper-layer protocol needs to take care of the problem. Lack of
error control and sequence numbering may cause a packet to be received out of order.
3. PPP does not provide a sophisticated addressing mechanism to handle frames in a multipoint
configuration.

Framing
PPP is a byte-oriented protocol. Framing is done according to the discussion of byte- oriented protocols above.

Fig: PPP Frame Format

Flag. A PPP frame starts and ends with a 1-byte flag with the bit pattern 01111110. Although this pattern is the
same as that used in HDLC, there is a big difference: PPP is a byte-oriented protocol, whereas HDLC is a bit-oriented
protocol. The flag is treated as a byte, as we will explain later.

• Address. The address field in this protocol is a constant value and set to 11111111 (broadcast address).
During negotiation (discussed later), the two parties may agree to omit this byte.

• Control. This field is set to the constant value 11000000 (imitating unnumbered frames in HDLC). As
we will discuss later, PPP does not provide any flow control. Error control is also limited to error
detection. This means that this field is not needed at all, and again, the two parties can agree, during
negotiation, to omit this byte.

• Protocol. The protocol field defines what is being carried in the data field: either user data or other
information. We discuss this field in detail shortly. This field is by default 2 bytes long, but the two
parties can agree to use only 1 byte.

• Payload field. This field carries either the user data or other information. The data field is a sequence of
bytes with the default of a maximum of 1500 bytes; but this can be changed during negotiation. The data
field is byte- stuffed if the flag byte pattern appears in this field. Because there is no field defining the
size of the data field, padding is needed if the size is less than the maximum default value or the
maximum negotiated value.

• FCS. The frame check sequence (FCS) is simply a 2-byte or 4-byte standard CRC.

Byte Stuffing
The similarity between PPP and HDLC ends at the frame format. PPP, as we discussed before, is a byte-oriented
protocol totally different from HDLC. As a byte-oriented protocol, the flag in PPP is a byte and needs to be
escaped whenever it appears in the data section of the frame. The escape byte is 01111101, which means that
every time the flaglike pattern appears in the data, this extra byte is stuffed to tell the receiver that the next byte
is not a flag.

PPP Stack
Although PPP is a data link layer protocol, it uses a set of other protocols to establish the link,
authenticate the parties involved, and carry the network layer data. Three sets of protocols are defined to make
PPP powerful: the Link Control Protocol (LCP), two Authentication Protocols (APs), and several Network
Control Protocols (NCPs). At any moment, a PPP packet can carry data from one of these protocols in its data
field. Note that there is one LCP, two APs, and several NCPs. Data may also come from several different
network layers.

Fig:PPP Layered Architecture

The Link Control Protocol (LCP) is responsible for establishing, maintaining, configuring, and terminating
links. It also provides negotiation mechanisms to set options between the two endpoints. Both endpoints of the
link must reach an agreement about the options before the link can be established. All LCP packets are carried in
the payload field of the PPP frame with the protocol field set to 0xC021.

Authentication Protocols
Authentication plays a very important role in PPP because PPP is designed for use over dial-up links where
verification of user identity is necessary. Authentication means validating the identity of a user who needs to
access a set of resources. PPP has created two protocols for authentication: Password Authentication Protocol
and Challenge Handshake Authentication Protocol. Note that these protocols are used during the authentication
phase.
PAP The Password Authentication Protocol (PAP) is a simple authentication procedure with a two-step
process:
1. The user who wants to access a system sends an authentication identification (usually the user name) and
a password.
2. The system checks the validity of the identification and password and either accepts or denies
connection.

Fig:PAP Authentication Protocol

Note: Passwords are sent in clear text in PAP.


CHAP The Challenge Handshake Authentication Protocol (CHAP) is a three-way handshaking
authentication protocol that provides greater security than PAP. In this method, the password is kept secret; it is
never sent online.

1. The system sends the user a challenge packet containing a challenge value, usually a few bytes.
2. The user applies a predefined function that takes the challenge value and the user's own password and
creates a result. The user sends the result in the response packet to the system.
3. The system does the same. It applies the same function to the password of the user (known to the
system) and the challenge value to create a result. If the result created is the same as the result sent in the
response packet, access is granted; otherwise, it is denied. CHAP is more secure than PAP, especially if
the system continuously changes the challenge value. Even if the intruder learns the challenge value and
the result, the password is still secret.

Fig: CHAP Authentication protocol
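The three CHAP steps can be sketched as follows. The "predefined function" is modelled here with SHA-256 over the challenge plus the password purely for illustration; real CHAP (RFC 1994) computes an MD5 digest over an identifier byte, the shared secret and the challenge, but the shape of the exchange is the same.

import hashlib
import os


def make_challenge() -> bytes:
    return os.urandom(16)                        # step 1: the system sends a random challenge


def chap_response(challenge: bytes, password: bytes) -> bytes:
    # Step 2: the user combines the challenge with the password; the password itself
    # is never transmitted -- only this result is sent in the response packet.
    return hashlib.sha256(challenge + password).digest()


def chap_verify(challenge: bytes, stored_password: bytes, received: bytes) -> bool:
    # Step 3: the system repeats the computation with its own copy of the password.
    return chap_response(challenge, stored_password) == received


secret = b"swordfish"
ch = make_challenge()
resp = chap_response(ch, secret)                 # sent over the link
print(chap_verify(ch, secret, resp))             # True  -> access granted
print(chap_verify(ch, b"wrong-guess", resp))     # False -> access denied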

Network Control Protocols


PPP is a multiple-network-layer protocol. It can carry a network layer data packet from protocols defined by the
Internet, OSI, Xerox, DECnet, AppleTalk, Novell, and so on. To do this, PPP has defined a specific Network
Control Protocol for each network protocol. For example, IPCP (Internet Protocol Control Protocol) configures
the link for carrying IP data packets. Xerox CP does the same for the Xerox protocol data packets, and so on.
Note that none of the NCP packets carry network layer data; they just configure the link at the network layer for
the incoming data.
