Computer Network Unit 2
Services and Data Link Devices (Switch, Bridge); Framing, Flow Control and Error Control; Elementary Data Link Protocols; Sliding Window Protocols; HDLC, SLIP and PPP; Media Access Control Layer (Carrier Sense Multiple Access/Collision Detection)
Multiple Access
The upper sublayer of the data link layer, which is responsible for flow and error control, is called the logical link control (LLC) sublayer; the lower sublayer, which is mostly responsible for multiple-access resolution, is called the media access control (MAC) sublayer.
According to CSMA/CD, a node should not send a packet unless the network is clear of traffic. If two nodes
send packets at the same time, a collision occurs and the packets are lost. Then both nodes send a jam signal,
wait for a random amount of time, and retransmit their packets. Any part of the network where packets from two
or more nodes can interfere with each other is considered a collision domain. A network with a larger number of
nodes on the same segment has a larger collision domain and typically has more traffic. As the amount of traffic
in the network increases, the likelihood of collisions increases.
CSMA/CD Algorithm:
1. If the medium is idle, transmit; otherwise, go to step 2.
2. If the medium is busy, continue to listen until the channel is idle, then transmit immediately.
3. If a collision is detected during transmission, transmit a brief jamming signal to assure that all stations
know that there has been a collision and then cease transmission.
4. After transmitting the jamming signal, wait a random amount of time, then attempt to transmit again (repeat from step 1).
Traditional Ethernet uses CSMA/CD.
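As a minimal sketch of the listen/transmit/jam/back-off loop above, the Python fragment below assumes a hypothetical medium object with is_idle(), transmit() and jam() methods standing in for the network interface, and uses the binary exponential backoff that classic Ethernet applies when choosing the random waiting time.

import random
import time

MAX_ATTEMPTS = 16        # classic Ethernet gives up after 16 attempts
SLOT_TIME = 51.2e-6      # slot time of 10-Mbps Ethernet, in seconds

def csma_cd_send(medium, frame):
    """Listen, transmit, and back off on collision (CSMA/CD sketch).
    medium is a hypothetical object: is_idle(), transmit(frame) -> True
    when no collision was detected, and jam() to send the jam signal."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        # Steps 1-2: defer until the medium is idle (1-persistent sensing).
        while not medium.is_idle():
            pass
        # Transmit while listening for a collision.
        if medium.transmit(frame):
            return True                       # sent without collision
        # Step 3: collision detected -> send a brief jamming signal.
        medium.jam()
        # Step 4: wait a random number of slot times (binary exponential
        # backoff), then try again from step 1.
        k = min(attempt, 10)
        time.sleep(random.randint(0, 2 ** k - 1) * SLOT_TIME)
    return False                              # too many collisions: give up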
Bridge:
A networking component used either to extend or to segment networks. Bridges work at the OSI data-link layer.
They can be used both to join dissimilar media such as unshielded twisted-pair (UTP) cabling and fiber-optic
cabling, and to join different network architectures such as Token Ring and Ethernet. Bridges regenerate signals
but do not perform any protocol conversion, so the same networking protocol (such as TCP/IP) must be running
on both network segments connected to the bridge. Bridges can also support Simple Network Management
Protocol (SNMP), and they can have other diagnostic features.
How It Works
Bridges operate by sensing the source MAC addresses of the transmitting nodes on the network and automatically building an internal forwarding table. This table is used to determine which connected segment to forward packets to, and it provides the filtering capability that bridges are known for. If the bridge knows which segment a packet is intended for, it forwards the packet directly to that segment. If the bridge doesn't recognize the packet's destination address, it forwards the packet to all connected segments except the one it originated on. If the destination address is in the same segment as the source address, the bridge drops (filters) the packet. Bridges also forward broadcast packets to all segments except the originating one.
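The learn/filter/forward/flood behaviour described above can be sketched in a few lines of Python. The class and method names are illustrative only, and a real bridge would also age out stale table entries, which this sketch omits.

class LearningBridge:
    """Sketch of a transparent (learning) bridge's forwarding decision."""

    def __init__(self, segments):
        self.segments = list(segments)   # e.g. ["segment-1", "segment-2"]
        self.table = {}                  # source MAC -> segment it was seen on

    def handle_frame(self, src_mac, dst_mac, arrived_on, is_broadcast=False):
        """Return the list of segments the frame should be forwarded to."""
        # Learn: remember which segment the source address lives on.
        self.table[src_mac] = arrived_on

        if is_broadcast or dst_mac not in self.table:
            # Broadcast or unknown destination: flood to every other segment.
            return [s for s in self.segments if s != arrived_on]

        dst_segment = self.table[dst_mac]
        if dst_segment == arrived_on:
            # Source and destination share a segment: filter (drop) the frame.
            return []
        # Known destination on another segment: forward only to that segment.
        return [dst_segment]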
Hub:
The basic networking component used in traditional 10-Mbps Ethernet networks to connect network
stations to form a local area network (LAN). Hubs can be used for
• Connecting about a dozen computers to form a workgroup or departmental LAN
• Connecting other hubs in a cascaded star topology to form a larger LAN of up to roughly a hundred
computers
How It Works
Hubs are the foundation of traditional 10BaseT Ethernet networks. The hub receives signals from each station and repeats the signals to all other stations connected to the hub. In active hubs (which all of today's hubs are), the signal received from one port is regenerated (amplified) and retransmitted to the other ports on the hub. Hubs thus perform the function of a repeater and are sometimes called multiport repeaters. From a logical cabling point of view, stations wired into a hub form a star topology. Hubs generally have RJ-45 ports for unshielded twisted-pair (UTP) cabling, and they range in size from 4 to 24 or more ports for connecting stations to the hub, plus one or more uplink ports for connecting the hub to other hubs in a cascaded star topology. Hubs generally have various light-emitting diode (LED) indicator lights to indicate the status of each port, link status, collisions, and so on.
Switch:
A switch is essentially a multiport bridge. Switches allow the segmentation of the LAN into separate collision
domains. Each port of the switch represents a separate collision domain and provides the full media bandwidth
to the node or nodes connected on that port. With fewer nodes in each collision domain, there is an increase in
the average bandwidth available to each node, and collisions are reduced.
Why Switches:
In a LAN where all nodes are connected directly to the switch, the throughput of the network increases
dramatically. The three primary reasons for this increase are:
• Dedicated bandwidth to each port
• Collision-free environment
• Full-duplex operation
Hub VS Switch:
• Hub works on the physical layer; switch works on the data link layer.
• Hub operates in half duplex; switch operates in full duplex.
• Hub extends the collision domain; switch splits the collision domain (each port of the switch acts as a separate collision domain).
• Hub is a multiport repeater; switch is a multiport bridge.
• With a hub, the overall bandwidth is shared; with a switch, each port receives its own bandwidth.
• Hubs are cheap; switches are more expensive.
• Hubs are rarely used in today's market due to degraded performance; switches are mostly used today.
Framing:
The data link layer needs to pack bits into frames so that each frame is distinguishable from another. The data link layer prepares a packet for transport across the local media by encapsulating it with a header and a trailer to create a frame.
Our postal system practices a type of framing. The simple act of inserting a letter into an envelope separates one
piece of information from another; the envelope serves as the delimiter. In addition, each envelope defines the
sender and receiver addresses since the postal system is a many-to-many carrier
facility. Framing in the data link layer separates a message from one source to a destination, or from other
messages to other destinations, by adding a sender address and a destination address. The destination address
defines where the packet is to go; the sender address helps the recipient acknowledge the receipt.
Fixed-Size Framing
Frames can be of fixed or variable size. In fixed-size framing, there is no need for defining the boundaries of the
frames; the size itself can be used as a delimiter. An example of this type of framing is the ATM wide-area
network, which uses frames of fixed size called cells.
Variable-Size Framing
Variable-size framing is prevalent in local-area networks. In variable-size framing, we need a way to define the
end of the frame and the beginning of the next. Historically, two approaches were used for this purpose: a
character-oriented approach and a bit-oriented approach.
Character-Oriented Protocols
In a character-oriented protocol, data to be carried are 8-bit characters from a coding system such as ASCII. The header, which normally carries the source and destination addresses and other control information, and the trailer, which carries error detection or error correction redundant bits, are also multiples of 8 bits. To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the end of a frame. The flag, composed of protocol-dependent special characters, signals the start or end of a frame.
Any pattern used for the flag could also be part of the information. To fix this problem, a byte-stuffing strategy
was added to character-oriented framing. In byte stuffing (or character stuffing), a special byte is added to the
data section of the frame when there is a character with the same pattern as the flag. The data section is stuffed
with an extra byte. This byte is usually called the escape character (ESC), which has a predefined bit pattern.
Whenever the receiver encounters the ESC character, it removes it from the data section and treats the next
character as data, not a delimiting flag.
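A small Python sketch of byte (character) stuffing and unstuffing follows. The particular FLAG and ESC values are examples only (they happen to match the ones PPP uses); the text above notes that the flag is protocol-dependent.

FLAG = 0x7E   # example flag byte
ESC  = 0x7D   # example escape (ESC) byte

def byte_stuff(payload: bytes) -> bytes:
    """Insert ESC before any data byte that looks like the flag or the ESC."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Receiver side: drop each ESC and keep the byte that follows as data.
    Assumes well-formed input (no dangling ESC at the end)."""
    out = bytearray()
    i = 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            i += 1                # the next byte is data, not a delimiter
        out.append(stuffed[i])
        i += 1
    return bytes(out)

assert byte_unstuff(byte_stuff(bytes([0x41, FLAG, 0x42]))) == bytes([0x41, FLAG, 0x42])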
Character-oriented protocols present a problem in data communications. The universal coding systems in use
today, such as Unicode, have 16-bit and 32-bit characters that conflict with 8-bit characters. We can say that in
general, the tendency is moving toward the bit-oriented protocols that we discuss next.
Bit-Oriented Protocols
In a bit-oriented protocol, the data section of a frame is a sequence of bits to be interpreted by the upper layer as
text, graphic, audio, video, and so on. However, in addition to headers (and possible trailers), we still need a
delimiter to separate one frame from the other. Most protocols use a special 8-bit pattern flag 01111110 as the
delimiter to define the beginning and the end of the frame.
This flag can create the same type of problem we saw in the byte-oriented protocols. That is, if the flag pattern
appears in the data, we need to somehow inform the receiver that this is not the end of the frame. We do this by
stuffing 1 single bit (instead of 1 byte) to prevent the pattern from looking like a flag. The strategy is called bit
stuffing. In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is added. This extra stuffed
bit is eventually removed from the data by the receiver. Note that the extra bit is added after one 0 followed by
five 1s regardless of the value of the next bit. This guarantees that the flag field sequence does not inadvertently
appear in the frame.
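The rule can be sketched in Python, with the bit stream represented as a string of '0' and '1' characters for readability. The sketch stuffs a 0 after every run of five consecutive 1s, which is the standard HDLC formulation of the rule described above.

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every five consecutive 1s in the data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == '1':
            run += 1
            if run == 5:
                out.append('0')   # stuffed bit
                run = 0
        else:
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver side: remove the 0 that follows five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == '1':
            run += 1
            if run == 5:
                i += 1            # skip the stuffed 0 that follows
                run = 0
        else:
            run = 0
        i += 1
    return ''.join(out)

# The flag-like pattern 01111110 inside the data becomes 011111010 on the wire.
assert bit_stuff("01111110") == "011111010"
assert bit_unstuff(bit_stuff("01111110")) == "01111110"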
LAN Architecture:
The architecture of a LAN can be considered as a set of layered protocols.
In OSI terms, the higher layer protocols are totally independent of the LAN architecture. Hence, only lower
order layers are considered for the design of LAN architecture.
The datalink layer of LAN is split into two sub layers.
- Medium Access Control (MAC),
- Logical Link Control Layer (LLC).
The IEEE 802 committee formulated the standards for LANs.
IEEE Standards:
Destination Service Access Point (DSAP) -- IEEE 802.2 header begins with a 1 byte field, which identifies the
receiving upper-layer process.
Source Service Access Point (SSAP) -- Following the DSAP address is the 1-byte address, which identifies the
sending upper-layer process.
Control -- The Control field employs three different formats, depending on the type of LLC frame used:
• Information (I) frame -- Carries upper-layer information and some control information.
• Supervisory (S) frame -- Provides control information. An S frame can request and suspend transmission, report on status, and acknowledge receipt of I frames. S frames do not have an Information field.
• Unnumbered (U) frame -- Used for control purposes and is not sequenced. A U frame can be used to initialize secondaries. Depending on the function of the U frame, its Control field is 1 or 2 bytes. Some U frames have an Information field.
Data -- Variable-length field bounded by the MAC format implemented. Usually contains IEEE 802.2
Subnetwork Access Protocol (SNAP) header information, as well as application-specific data.
Flow Control:
Flow control is a technique for assuring that a transmitting entity does not overwhelm a receiving entity with
data. Flow control is a set of procedures that tells the sender how much data it can transmit before it must wait
for an acknowledgment from the receiver.
Stop-and-Wait Flow Control:
A source entity transmits a frame. After reception, the destination entity indicates its willingness to accept another frame by sending back an acknowledgment to the frame just received. The source must wait until it receives the acknowledgment before sending the next frame. The destination can thus stop the flow of data by simply withholding acknowledgment. With the use of multiple frames for a single message, the stop-and-wait procedure may be inadequate. The essence of the problem is that only one frame at a time can be in transit. For very high data rates, or for very long distances between sender and receiver, stop-and-wait flow control provides inefficient line utilization.
How is flow control achieved?
• The receiver can control the size of the sending window.
• By limiting the size of the sending window, the data flow from sender to receiver can be limited.
The example assumes a 3-bit sequence number field and a maximum window size of seven frames. Initially, A and B have windows indicating that A may transmit seven frames, beginning with frame 0 (F0). After transmitting three frames (F0, F1, F2) without acknowledgment, A has shrunk its window to four frames. The window indicates that A may transmit four frames, beginning with frame number 3. B then transmits an RR (receive-ready) 3, which means: "I have received all frames up through frame number 2 and am ready to receive frame number 3; in fact, I am prepared to receive seven frames, beginning with frame number 3." With this acknowledgment, A is back up to permission to transmit seven frames, still beginning with frame 3. A proceeds to transmit frames 3, 4, 5, and 6. B returns an RR 4, which allows A to send up to and including frame F2.
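The window bookkeeping in this example can be reproduced with a short Python sketch. The class is illustrative: it tracks only sequence numbers and the window size, ignoring frame contents, timers and retransmission.

class SlidingWindowSender:
    """Sender-side window accounting: 3-bit sequence numbers, window of 7."""

    MODULO = 8
    MAX_WINDOW = 7

    def __init__(self):
        self.next_to_send = 0           # sequence number of the next new frame
        self.window = self.MAX_WINDOW   # how many frames may still be sent

    def send_frame(self):
        seq = self.next_to_send
        self.next_to_send = (seq + 1) % self.MODULO
        self.window -= 1                # window shrinks per unacknowledged frame
        return seq

    def receive_rr(self, n):
        """RR n acknowledges everything up to n - 1 and reopens the window so
        that up to MAX_WINDOW frames starting at n may be outstanding."""
        outstanding = (self.next_to_send - n) % self.MODULO
        self.window = self.MAX_WINDOW - outstanding

# Walking through the example above:
a = SlidingWindowSender()
for _ in range(3):                 # A sends F0, F1, F2 ...
    a.send_frame()
assert a.window == 4               # ... and has shrunk its window to 4
a.receive_rr(3)                    # B sends RR 3
assert a.window == 7               # A may again send 7 frames, starting at 3
for _ in range(4):                 # A sends frames 3, 4, 5 and 6
    a.send_frame()
a.receive_rr(4)                    # B returns RR 4
assert a.window == 4               # A may send frames 7, 0, 1, 2 (through F2)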
Error Control:
When data is transmitted over a cable or channel, there is always a chance that some of the bits will be changed
(corrupted) due to noise, signal distortion or attenuation. If errors do occur, then some of the bits will either
change from 0 to 1 or from 1 to 0.
Error Control allows the receiver to inform the sender of any frames lost or damaged in transmission and
coordinates the retransmission of those frames by the sender. Error control is divided into two main categories:
Error Detection: It allows a receiver to check whether received data has been corrupted during transmission. It can, for example, request a retransmission.
Error Correction: This type of error control allows a receiver to reconstruct the original information when it has been corrupted during transmission.
In the data link layer, the term error control refers primarily to methods of error detection and retransmission.
Error control in the data link layer is often implemented simply: Any time an error is detected in an exchange,
specified frames are retransmitted. This process is called automatic repeat request (ARQ).
Error control in the data link layer is based on automatic repeat request, which is the retransmission of data.
Hamming distance:
One of the central concepts in coding for error control is the idea of the Hamming distance. The Hamming
distance between two words (of the same size) is the number of differences between the corresponding bits. We
show the Hamming distance between two words x and y as d(x, y). The Hamming distance can easily be found if
we apply the XOR operation on the two words and count the number of 1s in the result. Note that the Hamming
distance between two distinct words is a value greater than zero.
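Since the distance is just the number of 1s in the XOR of the two words, it is a one-liner in Python:

def hamming_distance(x: int, y: int) -> int:
    """d(x, y): XOR the two words and count the 1s in the result."""
    return bin(x ^ y).count("1")

# Example: d(10101, 11110) = 3, because 10101 XOR 11110 = 01011.
assert hamming_distance(0b10101, 0b11110) == 3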
Error Detection:
There are three ways to detect errors.
1. Parity check
2. CRC
3. Checksum
1. Parity Check:
The simplest error-detection scheme is to append a parity bit to the end of a block of data. A typical example is
ASCII transmission, in which a parity bit is attached to each 7-bit ASCII character. The value of this bit is
selected so that the character has an even number of 1s (even parity) or an odd number of 1s (odd parity).
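As an illustration, the following Python sketch appends an even-parity bit to a 7-bit ASCII character and checks it at the receiver:

def add_even_parity(char7: int) -> int:
    """Append an even-parity bit: the 8 resulting bits carry an even number of 1s."""
    ones = bin(char7 & 0x7F).count("1")
    parity = ones % 2                  # 1 only if the count of 1s is currently odd
    return (char7 << 1) | parity       # 7 data bits followed by the parity bit

def check_even_parity(byte8: int) -> bool:
    """Receiver side: the 8 received bits must contain an even number of 1s."""
    return bin(byte8 & 0xFF).count("1") % 2 == 0

assert check_even_parity(add_even_parity(ord("A")))   # 'A' = 1000001 -> parity bit 0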
2. Cyclic Redundancy Check (CRC):
CRC codes operate as follows. Consider the d-bit piece of data, D, that the sending node wants to send to the receiving node. The sender and receiver must first agree on an (r+1)-bit pattern, known as a generator, which we will denote as G. We will require that the high- and low-order bits of G must be 1 (e.g., 10111 is acceptable, but 0101 and 10110 are not). The key idea behind CRC codes is shown in the figure. For a given piece of data, D, the sender will choose r additional bits, R, and append them to D such that the resulting (d+r)-bit pattern (interpreted as a binary number) is exactly divisible by G using modulo-2 arithmetic. The process of error checking with CRCs is thus simple: the receiver divides the d+r received bits by G. If the remainder is non-zero, the receiver knows that an error has occurred; otherwise the data is accepted as being correct.
All CRC calculations are done in modulo 2 arithmetic without carries in addition or borrows in subtraction. This
means that addition and subtraction are identical, and both are equivalent to the bitwise exclusive-or (XOR) of
the operands. Thus, for example,
1011 - 0101 = 1110
1001 - 1101 = 0100
Multiplication and division are the same as in base-2 arithmetic, except that any required addition or subtraction is done without carries or borrows. As in regular binary arithmetic, multiplication by 2^k left-shifts a bit pattern by k places. Thus, given D and R, the quantity D·2^r XOR R yields the (d+r)-bit pattern shown in the figure above.
International standards have been defined for 8-, 12-, 16- and 32-bit generators, G. An 8-bit CRC is used to
protect the 5-byte header in ATM cells.
The figure below illustrates this calculation for the case of D = 101110, d = 6 and G = 1001, r = 3. The nine bits transmitted in this case are 101110 011. You should check these calculations for yourself and also check that indeed D·2^r = 101011 · G XOR R.
A second way of viewing the CRC process is to express all values as polynomials in a dummy variable X with binary coefficients. The coefficients correspond to the bits in the binary number. Thus for D = 110011 we have D(X) = X^5 + X^4 + X + 1, and for P = 11001 we have P(X) = X^4 + X^3 + 1. Arithmetic operations are again modulo 2.
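The modulo-2 division can be reproduced with a short Python sketch that recomputes the worked example above (D = 101110, G = 1001, R = 011):

def crc_remainder(data_bits: str, generator: str) -> str:
    """Append r zero bits to the data (r = len(G) - 1), divide by G using
    XOR (modulo-2 division), and return the r-bit remainder R."""
    r = len(generator) - 1
    dividend = [int(b) for b in data_bits + "0" * r]
    g = [int(b) for b in generator]
    for i in range(len(data_bits)):        # slide G across the dividend
        if dividend[i] == 1:               # "subtract" (XOR) only when the
            for j in range(len(g)):        # leading bit is 1
                dividend[i + j] ^= g[j]
    return "".join(str(b) for b in dividend[-r:])

assert crc_remainder("101110", "1001") == "011"   # transmitted bits: 101110 011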
3. Checksum:
The checksum is used in the Internet by several protocols although not at the data link layer. However,
we briefly discuss it here to complete our discussion on error checking.
Example1:
Suppose our data is a list of five 4-bit numbers that we want to send to a destination. In addition to sending these
numbers, we send the sum of the numbers. For example, if the set of numbers is (7, 11, 12, 0, 6), we send (7, 11, 12, 0, 6, 36), where 36 is the sum of the original numbers. The receiver adds the five numbers and compares the
result with the sum. If the two are the same, the receiver assumes no error, accepts the five numbers, and
discards the sum. Otherwise, there is an error somewhere and the data are not accepted.
Example2:
We can make the job of the receiver easier if we send the negative (complement) of the sum, called the
checksum. In this case, we send (7, 11, 12, 0, 6, -36). The receiver can add all the numbers received (including the checksum). If the result is 0, it assumes no error; otherwise, there is an error.
Internet Checksum
Traditionally, the Internet has been using a 16-bit checksum. The sender calculates the checksum by following
these steps.
Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are added using one's complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.
Receiver site:
1. The message (including checksum) is divided into 16-bit words.
2. All words are added using one's complement addition.
3. The sum is complemented and becomes the new checksum.
4. If the value of the checksum is 0, the message is accepted; otherwise, it is rejected.
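The sender and receiver steps above translate almost directly into Python; the end-around carry in the helper is what makes the addition one's complement. The 16-bit words used below are illustrative values only.

def ones_complement_sum16(words):
    """Add 16-bit words with one's complement addition (end-around carry)."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap any carry back in
    return total

def internet_checksum(words):
    """Sender: sum the 16-bit words (checksum word = 0) and complement the sum."""
    return ~ones_complement_sum16(words) & 0xFFFF

def checksum_ok(words_with_checksum):
    """Receiver: sum everything, complement; the result must be 0."""
    return (~ones_complement_sum16(words_with_checksum) & 0xFFFF) == 0

data = [0x4500, 0x0073, 0x0000, 0x4000]            # illustrative 16-bit words
assert checksum_ok(data + [internet_checksum(data)])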
Error Correction:
Any time an error is detected in an exchange, specified frames are retransmitted. This process is called automatic
repeat request (ARQ). Error control in the data link layer is based on automatic repeat request, which is the
retransmission of data.
Stop-and-Wait ARQ:
Stop-and-wait ARQ is based on the stop-and-wait flow-control technique outlined previously and is depicted in the figure alongside.
• The receiver sends only a positive ACK for a frame received safe and sound. It is silent about frames damaged or lost. The acknowledgment number always defines the number of the next expected frame. If frame 0 is received, ACK 1 is sent; if frame 1 is received, ACK 0 is sent.
Bidirectional Transmission:
The stop-and-wait mechanism we have discussed is unidirectional. However, we can have bi-directional
transmission if the two parties have two separate channels for the full-duplex transmission or share the same
channel for half-duplex transmission.
Piggybacking is a method to combine a data frame with an acknowledgment. For example, stations A and B both have data to send. Instead of sending separate data and ACK frames, station A sends a data frame that includes an ACK; station B behaves in a similar manner.
Piggybacking can save bandwidth because the overhead from a data frame and an ACK frame (addresses, CRC, etc.) can be combined into just one frame.
Go-Back-N ARQ:
• The form of error control based on sliding-window flow control that is most commonly used is called go-back-N ARQ.
b) Frame i is lost in transit. A subsequently sends frame (i + 1). B receives frame (i + 1) out of order and sends an REJ i. A must retransmit frame i and all subsequent frames.
c) Frame i is lost in transit, and A does not soon send additional frames. B receives nothing and returns neither an RR nor an REJ. When A's timer expires, it transmits an RR frame that includes a bit known as the P bit, which is set to 1. B interprets the RR frame with a P bit of 1 as a command that must be acknowledged by sending an RR indicating the next frame that it expects. When A receives the RR, it retransmits frame i.
b) If A's timer expires, it transmits an RR command as in case 1c. It sets another timer, called the P-bit timer. If B fails to respond to the RR command, or if its response is damaged, then A's P-bit timer will expire. At this point, A will try again by issuing a new RR command and restarting the P-bit timer. This procedure is tried for a number of iterations. If A fails to obtain an acknowledgment after some maximum number of attempts, it initiates a reset procedure.
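A sketch of the go-back-N sender logic follows. The send callback, the absence of window-size enforcement, and the missing timers are simplifications for illustration; only the buffering and "go back" retransmission behaviour is shown.

class GoBackNSender:
    """Buffer unacknowledged frames; on REJ i, retransmit i and everything after it."""

    MODULO = 8                       # 3-bit sequence numbers, as in HDLC

    def __init__(self, send):
        self.send = send             # callable(seq, frame) that transmits a frame
        self.base = 0                # oldest unacknowledged sequence number
        self.next_seq = 0
        self.buffer = {}             # seq -> frame, kept until acknowledged

    def send_frame(self, frame):
        seq = self.next_seq
        self.buffer[seq] = frame
        self.send(seq, frame)
        self.next_seq = (seq + 1) % self.MODULO

    def receive_rr(self, n):
        """RR n acknowledges every frame up to and including n - 1."""
        while self.base != n:
            self.buffer.pop(self.base, None)
            self.base = (self.base + 1) % self.MODULO

    def receive_rej(self, i):
        """REJ i: go back and retransmit frame i and all subsequent frames."""
        seq = i
        while seq != self.next_seq:
            self.send(seq, self.buffer[seq])
            seq = (seq + 1) % self.MODULO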
Selective-reject ARQ
With selective-reject ARQ, the only frames retransmitted are those that receive a negative acknowledgment, in
this case called SREJ, or that time-out. This would appear to be more efficient than go-back-N, because it
minimizes the amount of retransmission. On the other hand, the receiver must maintain a buffer large enough to
save post-SREJ frames until the frame in error is retransmitted, and it must contain logic for reinserting that
frame in the proper sequence. The transmitter, too, requires more complex logic to be able to send a frame out of
sequence. Because of such complications, selective-reject ARQ is much less used than go-back-N ARQ.
HDLC (High-Level Data Link Control):
Fig: HDLC Frame Format
Legend: N(S) = send sequence number; N(R) = receive sequence number; P/F = poll/final bit
I Frames:
I- frames are designed to carry user data from the network layer. In addition, they can include flow and error
control information (piggybacking).
• The first bit defines the type. If the first bit of the control field is 0, this means the frame is an I-frame.
• The next 3 bits, called N(S), define the sequence number of the frame. Note that with 3 bits, we can define a sequence number between 0 and 7; but in the extension format, in which the control field is 2 bytes, this field is larger.
• The last 3 bits, called N(R), correspond to the acknowledgment number when piggybacking is used.
• The single bit between N(S) and N(R) is called the P/F bit. The P/F field is a single bit with a dual
purpose. It has meaning only when it is set (bit = 1) and can mean poll or final. It means poll when the
frame is sent by a primary station to a secondary (when the address field contains the address of the
receiver). It means final when the frame is sent by a secondary to a primary (when the address field
contains the address of the sender).
S Frame:
Supervisory frames are used for flow and error control whenever piggybacking is either impossible or
inappropriate (e.g., when the station either has no data of its own to send or needs to send a command or
response other than an acknowledgment). S-frames do not have information fields.
• If the first 2 bits of the control field are 10, this means the frame is an S-frame.
• The last 3 bits, called N(R), correspond to the acknowledgment number (ACK) or negative acknowledgment number (NAK), depending on the type of S-frame.
• The 2 bits called code are used to define the type of S-frame itself. With 2 bits, we can have four types of S-frames, as described below:
1. Receive ready (RR). If the value of the code subfield is 00, it is an RR S-frame. This kind of frame
acknowledges the receipt of a safe and sound frame or group of frames. In this case, the value of the N(R) field
defines the acknowledgment number.
2. Receive not ready (RNR). If the value of the code subfield is 10, it is an RNR S-frame. This kind of
frame is an RR frame with additional functions. It acknowledges the receipt of a frame or group of
frames, and it announces that the receiver is busy and cannot receive more frames. It acts as a kind of
congestion control mechanism by asking the sender to slow down. The value of N(R) is the
acknowledgment number.
3. Reject (REJ). If the value of the code subfield is 01, it is a REJ S-frame. This is a NAK frame, but not
like the one used for Selective Repeat ARQ. It is a NAK that can be used in Go-Back-N ARQ to
improve the efficiency of the process by informing the sender, before the sender's timer expires, that the last frame is lost or damaged. The value of N(R) is the negative acknowledgment number.
4. Selective reject (SREJ). If the value of the code subfield is 11, it is an SREJ S-frame. This is a NAK
frame used in Selective Repeat ARQ. Note that the HDLC Protocol uses the term selective reject instead
of selective repeat. The value of N(R) is the negative acknowledgment number.
The fifth field in the control field is the P/F bit as discussed before.
The next 3 bits, called N(R), correspond to the ACK or NAK value.
U Frames:
Unnumbered frames are used to exchange session management and control information between connected
devices. Unlike S-frames, U-frames contain an information field, but one used for system management
information, not user data. As with S-frames, however, much of the information carried by U-frames is contained
in codes included in the control field. U-frame codes are divided into two sections: a 2-bit prefix before the P/F
bit and a 3-bit suffix after the P/F bit. Together, these two segments (5 bits) can be used to create up to 32
different types of U-frames.
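Putting the I-, S-, and U-frame layouts together, the control field can be decoded with a short Python sketch. The field is given here as an 8-character bit string written in the order used in the descriptions above (type bit(s), then N(S) or code, the P/F bit, then N(R) or code); this string form is an illustrative simplification that sidesteps bit-transmission order.

def decode_control(bits: str) -> dict:
    """Classify an HDLC control field and pull out its subfields."""
    assert len(bits) == 8
    if bits[0] == "0":                                    # I-frame
        return {"type": "I", "N(S)": int(bits[1:4], 2),
                "P/F": int(bits[4]), "N(R)": int(bits[5:8], 2)}
    if bits[:2] == "10":                                  # S-frame
        codes = {"00": "RR", "10": "RNR", "01": "REJ", "11": "SREJ"}
        return {"type": "S", "code": codes[bits[2:4]],
                "P/F": int(bits[4]), "N(R)": int(bits[5:8], 2)}
    return {"type": "U",                                  # U-frame: 2 + 3 code bits
            "code": bits[2:4] + bits[5:8], "P/F": int(bits[4])}

# An RR S-frame with N(R) = 3 and the P/F bit clear:
assert decode_control("10000011") == {"type": "S", "code": "RR", "P/F": 0, "N(R)": 3}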
Flag Field:
Flag fields delimit the frame at both ends with the unique pattern 01111110. A single flag may be used as the
closing flag for one frame and the opening flag for the next. On both sides of the user-network interface,
receivers are continuously hunting for the flag sequence to synchronize on the start of a frame. While receiving a
frame, a station continues to hunt for that sequence to determine the end of the frame. Since the pattern 01111110
may appear in the frame as well, a procedure known as bit stuffing is used.
After detecting a starting flag, the receiver monitors the bit stream. When a pattern of five 1s appears, the sixth
bit is examined. If this bit is 0, it is deleted. If the sixth bit is a 1 and the seventh bit is a 0, the combination is
accepted as a flag. If the sixth and seventh bits are both 1, the sender is indicating an abort condition. With the
use of bit stuffing, arbitrary bit patterns can be inserted into the data field of the frame. This property is known as
data transparency.
Address Field:
The address field identifies the secondary station that transmitted or is to receive the frame. This field is not
needed for point-to-point links, but is always included for the sake of uniformity.
Control Field: It defines the three types of frames for HDLC: I-, S-, and U-frames.
Information Field: This field is present only in I-frames and some U-frames.
Frame Check Sequence Field: It is an error-detecting code calculated from the remaining bits of the frame, exclusive of the flags. The normal code is a 16-bit CRC code.
PPP (Point-to-Point Protocol):
6. PPP provides connections over multiple links.
7. PPP provides network address configuration. This is particularly useful when a home user needs a
temporary network address to connect to the Internet.
On the other hand, to keep PPP simple, several services are missing:
1. PPP does not provide flow control. A sender can send several frames one after another with no concern
about overwhelming the receiver.
2. PPP has a very simple mechanism for error control. A CRC field is used to detect errors. If the frame is
corrupted, it is silently discarded; the upper-layer protocol needs to take care of the problem. Lack of
error control and sequence numbering may cause a packet to be received out of order.
3. PPP does not provide a sophisticated addressing mechanism to handle frames in a multipoint
configuration.
Framing
PPP is a byte-oriented protocol. Framing is done according to the discussion of byte-oriented protocols above.
• Flag. A PPP frame starts and ends with a 1-byte flag with the bit pattern 01111110. Although this pattern is the same as that used in HDLC, there is a big difference: PPP is a byte-oriented protocol, whereas HDLC is a bit-oriented protocol. The flag is treated as a byte, as we will explain later.
• Address. The address field in this protocol is a constant value and set to 11111111 (broadcast address).
During negotiation (discussed later), the two parties may agree to omit this byte.
• Control. This field is set to the constant value 11000000 (imitating unnumbered frames in HDLC). As
we will discuss later, PPP does not provide any flow control. Error control is also limited to error
detection. This means that this field is not needed at all, and again, the two parties can agree, during
negotiation, to omit this byte.
• Protocol. The protocol field defines what is being carried in the data field: either user data or other
information. We discuss this field in detail shortly. This field is by default 2 bytes long, but the two
parties can agree to use only 1 byte.
• Payload field. This field carries either the user data or other information. The data field is a sequence of
bytes with the default of a maximum of 1500 bytes; but this can be changed during negotiation. The data
field is byte-stuffed if the flag byte pattern appears in this field. Because there is no field defining the
size of the data field, padding is needed if the size is less than the maximum default value or the
maximum negotiated value.
• FCS. The frame check sequence (FCS) is simply a 2-byte or 4-byte standard CRC.
Byte Stuffing
The similarity between PPP and HDLC ends at the frame format. PPP, as we discussed before, is a byte-oriented
protocol totally different from HDLC. As a byte-oriented protocol, the flag in PPP is a byte and needs to be
escaped whenever it appears in the data section of the frame. The escape byte is 01111101, which means that
every time the flaglike pattern appears in the data, this extra byte is stuffed to tell the receiver that the next byte
is not a flag.
PPP Stack
Although PPP is a data link layer protocol, PPP uses a set of other protocols to establish the link, authenticate the parties involved, and carry the network layer data. Three sets of protocols are defined to make PPP powerful: the Link Control Protocol (LCP), two Authentication Protocols (APs), and several Network
Control Protocols (NCPs). At any moment, a PPP packet can carry data from one of these protocols in its data
field. Note that there is one LCP, two APs, and several NCPs. Data may also come from several different
network layers.
The Link Control Protocol (LCP) is responsible for establishing, maintaining, configuring, and terminating
links. It also provides negotiation mechanisms to set options between the two endpoints. Both endpoints of the
link must reach an agreement about the options before the link can be established. All LCP packets are carried in the payload field of the PPP frame with the protocol field set to 0xC021.
Authentication Protocols
Authentication plays a very important role in PPP because PPP is designed for use over dial-up links where
verification of user identity is necessary. Authentication means validating the identity of a user who needs to
access a set of resources. PPP has created two protocols for authentication: Password Authentication Protocol
and Challenge Handshake Authentication Protocol. Note that these protocols are used during the authentication
phase.
PAP The Password Authentication Protocol (PAP) is a simple authentication procedure with a two-step
process:
1. The user who wants to access a system sends an authentication identification (usually the user name) and
a password.
2. The system checks the validity of the identification and password and either accepts or denies
connection.
Fig: PAP Authentication Protocol
CHAP The Challenge Handshake Authentication Protocol (CHAP) is a three-way handshaking authentication procedure that provides greater security than PAP:
1. The system sends the user a challenge packet containing a challenge value, usually a few bytes.
2. The user applies a predefined function that takes the challenge value and the user's own password and
creates a result. The user sends the result in the response packet to the system.
3. The system does the same. It applies the same function to the password of the user (known to the
system) and the challenge value to create a result. If the result created is the same as the result sent in the
response packet, access is granted; otherwise, it is denied. CHAP is more secure than PAP, especially if
the system continuously changes the challenge value. Even if the intruder learns the challenge value and
the result, the password is still secret.
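The challenge/response computation can be sketched as below. Hashing the concatenation of challenge and password with MD5 is a simplification for illustration; real PPP CHAP (RFC 1994) hashes an identifier byte, the shared secret, and the challenge.

import hashlib
import os

def make_challenge(nbytes: int = 16) -> bytes:
    """System side: generate the random challenge value sent to the user."""
    return os.urandom(nbytes)

def chap_response(challenge: bytes, password: str) -> bytes:
    """User side: apply the predefined function to the challenge and password."""
    return hashlib.md5(challenge + password.encode()).digest()

def chap_verify(challenge: bytes, stored_password: str, response: bytes) -> bool:
    """System side: repeat the same computation and compare the results."""
    expected = hashlib.md5(challenge + stored_password.encode()).digest()
    return expected == response

challenge = make_challenge()
resp = chap_response(challenge, "secret-password")
assert chap_verify(challenge, "secret-password", resp)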